Researchers Hack Smartphone Fingerprint Security With AI-Driven Master Print Forgery

AI can be a force for good or ill, depending on what it is designed to do. Many AI systems help doctors diagnose disease, for example, while others can be turned to more nefarious ends. Case in point: a team of researchers, in work presented at an IEEE biometrics conference, has devised a method of harnessing AI to generate fake fingerprints that work like a master key, bypassing the biometric locks employed on smartphones. The team notes that an attack using its AI-driven method can be launched against random devices "with some probability of success."

OnePlus 6T In-Display Fingerprint Scanner

The technique is known as DeepMasterPrints, and the researchers liken the resulting prints to a master key for a building, able to unlock any door. To create the master fingerprints, an artificial neural network was trained on fingerprints from over 6,000 people. This team was the first to use machine learning to develop a working master print, though it was not the first to consider the idea. The process used a generator neural network that analyzed all of the input fingerprint images and could then produce fingerprints of its own.

Those neural network-generated fingerprints were fed to a second neural network acting as a discriminator, which attempted to determine whether each fingerprint was genuine or fake. Fingerprints that the discriminator labeled as fakes were sent back to the generator, which adjusted its images, and the process started again. Over thousands of these generation and discrimination cycles, the generator eventually learned to fool the discriminator with its fake fingerprints.
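For readers curious what that generator/discriminator loop looks like in practice, here is a minimal sketch in PyTorch. The network sizes, names, and training details below are illustrative assumptions for a generic adversarial setup, not the DeepMasterPrints authors' actual architecture:

```python
# Minimal sketch of an adversarial (generator vs. discriminator) training
# loop like the one described above. Sizes and layers are assumptions for
# illustration, not the researchers' implementation.
import torch
import torch.nn as nn

LATENT = 100          # size of the random noise vector fed to the generator
IMG = 64 * 64         # flattened 64x64 grayscale fingerprint patch

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),      # outputs a fake fingerprint image
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),     # probability the input is genuine
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One generation/discrimination cycle over a batch of real prints.

    real_batch: tensor of shape (n, 64*64) holding real fingerprint patches.
    """
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1. Discriminator: label real prints genuine, generated prints fake.
    noise = torch.randn(n, LATENT)
    fakes = generator(noise).detach()
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Generator: adjust its weights so its fakes get labeled genuine.
    fakes = generator(torch.randn(n, LATENT))
    loss_g = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```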

AI-Generated Fingerprints

The master fingerprints were specifically designed to defeat the sort of biometric sensors used broadly across the smartphone market today. The team notes that smartphone capacitive sensors take only partial readings of a fingerprint when a finger is placed on the sensor. This is by design, to make the sensor quicker and more convenient to use. Reading the entire fingerprint would force the user to place their finger on the sensor the same way every time, which is simply not practical for this use case.
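To see why partial readings matter, consider this hedged sketch, which simulates a small capacitive sensor by cropping random patches from a full fingerprint image. The array, patch size, and enrollment count are assumptions for demonstration only:

```python
# Illustrative sketch: a small sensor captures only a random patch of the
# full fingerprint on each touch, so the phone enrolls several partial
# templates per finger. Image data and sizes are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
full_print = rng.random((512, 512))   # stand-in for a full fingerprint scan

def partial_reading(image, patch=160):
    """Return one random patch, as a small capacitive sensor would see it."""
    h, w = image.shape
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    return image[y:y + patch, x:x + patch]

# Each touch yields a different partial view of the same finger.
templates = [partial_reading(full_print) for _ in range(8)]
```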

To make the technique work, the team trained its neural networks on two kinds of fingerprint data: rolled fingerprints scanned from prints inked on paper, and fingerprint data sets captured from capacitive sensors. The resulting AI system was much better at spoofing the capacitive prints than the rolled prints, tested across three security levels. At the lowest security level, the fake prints fooled the biometric matcher 76% of the time, though the researchers note it is unlikely that any fingerprint sensor operates at such a low security level.

At the middle security tier, which the team says is a realistic operating point for most sensors, the fake fingerprints fooled the matcher 22% of the time. At the highest level of security, the fake prints succeeded less than 1.2% of the time. The tiers correspond to a false match rate (FMR) of 1% at the lowest level, 0.1% at the middle level, and 0.01% at the highest. The team says its findings aren't the end for biometric sensors, but they do show that phone and device designers need to give more consideration to the trade-off between convenience and security. You can find the study here. It's also worth noting that other biometric security measures have been defeated in other ways in the past; Apple's Face ID, for instance, was beaten with a 3D-printed mask.
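An FMR tier translates into a concrete decision threshold on the matcher's similarity score. The sketch below shows the general idea with synthetic scores; real matchers calibrate against large impostor-comparison datasets, and the score distribution here is an assumption:

```python
# Hedged sketch: mapping a target false match rate (FMR) to a score
# threshold. Impostor scores below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
impostor_scores = rng.normal(0.30, 0.10, 100_000)  # scores of non-matching pairs

for fmr in (0.01, 0.001, 0.0001):                  # the three tiers above
    # Accept only scores above the (1 - FMR) quantile of impostor scores,
    # so roughly an FMR fraction of impostor attempts slip through.
    threshold = np.quantile(impostor_scores, 1 - fmr)
    print(f"FMR {fmr:.2%}: accept scores above {threshold:.3f}")
```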

Update 11/16/18, 11:45am: We reached out to Synaptics, known for its secure fingerprint sensors, about this story and got some interesting feedback. Regarding the use of machine learning and neural networks for good security practices, a Synaptics rep said, “…as part of our Quantum matcher we train it to recognize a real finger versus a fake finger using all known spoof materials including conductive ink, laser etched rubber, composi-mold, dragon skin. Any “Master” Fingerprint would have to be transferred to a material to imitate a finger and our products are designed to expressly reject these spoofing materials.”

Using a new, unrecognized material could aid this exploit, but Synaptics had some feedback in that regard as well, “Furthermore, should a new spoof material arise that can defeat our current matcher, we would simply train our Quantum Matcher to learn and distinguish such a new material from a real finger. Then we would provide a secure update to provide immunity. In other words, if this kind of attack ever significantly reduced the security of our solution in the future, our approach to anti spoofing would allow our matcher to be trained and updated to ignore DeepMasterPrints or similar successors.”
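The retraining workflow Synaptics describes boils down to refitting a real-versus-spoof classifier whenever a new material shows up, then shipping the updated model. Here is a hedged toy version of that idea; the features, data, and logistic-regression model are placeholder assumptions, not Synaptics' Quantum Matcher:

```python
# Toy sketch of retraining a real-vs-spoof classifier when a new spoof
# material appears. All features and data here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
real = rng.normal(1.0, 0.3, (500, 8))          # features from live fingers
known_spoofs = rng.normal(0.0, 0.3, (500, 8))  # ink, rubber, molds, etc.

X = np.vstack([real, known_spoofs])
y = np.array([1] * 500 + [0] * 500)            # 1 = real finger, 0 = spoof
clf = LogisticRegression().fit(X, y)

# A new spoof material arises: collect samples, refit the classifier,
# and deliver the new model as a secure update.
new_spoofs = rng.normal(0.4, 0.3, (200, 8))
X2 = np.vstack([X, new_spoofs])
y2 = np.concatenate([y, np.zeros(200, dtype=int)])
clf = LogisticRegression().fit(X2, y2)
```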

Synaptics Clear ID Sensor
The size of the sensor also affects the success rate of DeepMasterPrints. Synaptics says it foresaw the possibility of a “Master Fingerprint” attack and suggests a larger sensor is fundamentally safer against one: a small sensor captures a much smaller area of the finger, which raises the probability that a master print will match, while a larger sensor lowers it. Synaptics went on to say, “We see some companies using sensor sizes of 12-16 square mm. They choose small sensors mainly for cost, but this has many negative effects including, more enrollments, poorer FRR and of course poorer FAR (which is the focus on this type of attack). In our testing versus a competitor with a 16 square mm sensor, it took 19 enrollments for our competitor to achieve a 6% FRR, with our larger sensor and 9 enrollments, our FRR was <1%.” (FRR is the false rejection rate, how often a legitimate finger is rejected; FAR is the false acceptance rate, how often an impostor is accepted.)
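The enrollment counts in that quote hint at why small sensors fare worse: each extra partial template is another chance for a master print to match. Here is a back-of-envelope sketch of that compounding effect; it assumes the per-template false accept probability is independent across templates, which real systems won't satisfy exactly:

```python
# Back-of-envelope sketch (independence assumption): with per-template
# false accept rate p, the chance a master print matches at least one of
# n enrolled partial templates is 1 - (1 - p) ** n.
p = 0.001                      # per-template FAR at the middle tier above
for n in (1, 9, 19):           # enrollment counts mentioned by Synaptics
    effective_far = 1 - (1 - p) ** n
    print(f"{n:2d} templates -> effective FAR ~ {effective_far:.3%}")
```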