US Intelligence Wants Computers That Spot Fake Fingerprints
Researchers at the Intelligence Advanced Research Projects Agency aim not only to spot prosthetic thumbs; their system will also learn to predict attacks never seen before.
With Americans increasingly using fingerprint recognition to secure everything from smartphones to U.S. borders, impostors are inventing some pretty creative ways to fake out biometric readers.
Now, the spy community is fighting back. A four-year project has just launched to develop artificial intelligence that can automatically detect spoofed fingertips, facial images and irises.
Researchers at the Intelligence Advanced Research Projects Agency aim not only to spot prosthetic thumbs, images printed with charge-conducting ink and other proven deceptions; the machine will also learn to predict attacks never seen before.
"One of the ways we achieve that today is with a human in the loop," IARPA program manager Chris Boehnen told Nextgov in an interview. Cops are present when suspects are fingerprinted. At the Otay Mesa pedestrian crossing, U.S. Customs and Border Protection personnel assist with a new system for iris and facial identification of foreign travelers. In January, CBP officers started using mugshot-matching at John F. Kennedy International Airport to reduce passport fraud.
Read more: Contactless Fingerprinting Promises ‘Wave Your Hand, Unlock A Door’
Read more: Five Times More Fingerprints Stolen in OPM Hack Than Previously Thought
See also: So That Thumbprint Thing on Your Phone Is Useless Now
"Yes, the human helps improve the security of the process but it's not uncommon for humans to be the weak link in security processes," Boehnen said.
The ideal biometric security system should not just "find the attacks of today,” but also ruses “we've never even necessarily considered" as human beings, he said. In tech jargon, this means the technologies will not be signature-based tools that look for patterns of identified attacks. Think instead of “anomaly detection" machines that look for odd behaviors.
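The distinction between the two approaches can be sketched in a few lines of code. This is an illustrative example only, not anything from the ODIN program itself: the signature list, the feature values and the threshold are all invented for the demonstration. A signature-based check can only flag attacks it has seen before, while an anomaly detector models what genuine presentations look like and flags anything that deviates.

```python
import statistics

# Signature-based detection: flags only attacks matching a known pattern.
# (These labels are hypothetical examples, not a real attack catalog.)
KNOWN_ATTACK_SIGNATURES = {"printed_conductive_ink", "latex_overlay"}

def signature_check(sample_label):
    """Return True only if the sample matches a previously cataloged attack."""
    return sample_label in KNOWN_ATTACK_SIGNATURES

# Anomaly detection: model genuine presentations statistically and flag
# any reading that deviates too far, even from a never-before-seen attack.
def anomaly_check(feature_value, genuine_features, threshold=3.0):
    """Return True if the reading is more than `threshold` standard
    deviations away from the mean of known-genuine readings."""
    mean = statistics.mean(genuine_features)
    stdev = statistics.pstdev(genuine_features)
    return abs(feature_value - mean) / stdev > threshold

# Hypothetical sensor readings from genuine fingers cluster tightly...
genuine = [0.48, 0.52, 0.50, 0.47, 0.53, 0.51, 0.49]

# ...so a novel spoof material with an out-of-range reading is caught by
# the anomaly detector even though no signature exists for it.
print(anomaly_check(0.95, genuine))
print(signature_check("3d_printed_gel"))
```

The signature check misses the "3d_printed_gel" spoof entirely because it was never cataloged, while the anomaly detector catches it purely because its reading sits far outside the genuine distribution.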
Earlier this month, Boehnen met with about 100 people interested in proposing "presentation attack detectors" for a research and development program named after the Norse god Odin.
One stage of the project—called Thor, after the son of Odin who brings peace and justice—will attempt to flag known and unknown scams. A final stage, dubbed Loki after the Norse trickster "known for causing chaos," will try to find weaknesses in the detection systems themselves, according to project slides.
Boehnen mentioned the German hacking research group Chaos Computer Club as an example of the challenges biometric identification faces. In September 2013, the group showed how to fool the iPhone's fingerprint security system with pink latex milk, less than two days after Apple released the first Touch ID-enabled smartphone.
Foreigners Are Welcome
"Think about the state of computer security in, say, the mid-to-late '90s. Everybody at this point had a personal computer for the most part, but how many of them had antivirus?" Boehnen said.
Now, computer users are engaged in a game of cat and mouse with hackers.
"The goal here is to make sure that we don't find ourselves using technology, in this case biometrics, that we can't rely upon,” he said.
The threat of biometric cons came into focus in September 2015, when the U.S. government disclosed that suspected Chinese hackers stole fingerprint data on 5.6 million national security personnel from the Office of Personnel Management.
Despite the fear that a nation state might abuse that very personal data, foreigners are allowed to help with the biometric spoofing-detection project.
At IARPA, "we want to get the best research in the world and while I’m proud to say that that's commonly in the good ole U.S. of A., it may not always be," Boehnen said. So, "we're somewhat unique in the IC, in that we are able to support participation by foreign entities. And as this is an unclassified program we have no problems with participation."
But he stressed that foreigners will not be given any sort of preference over other competitors. Plus, they are barred from Loki, the part of the project where testers will try attacking the impostor-detection systems. That portion is classified.
After the OPM hack, U.S. officials insisted "the ability to misuse fingerprint data is limited," but a federal working group will examine ways adversaries might exploit the stolen finger files.
Boehnen said the ODIN program is not a response to the massive breach. In general, "the goal of this program is to help ensure that when a biometric system believes you are you, that you really are,” he said.
Hacking is Pretty Cheap Now, But...
Right now, the cost of defeating fingerprint sensors on smartphones can be relatively low.
Michigan State University researchers in February published a paper demonstrating how simple it is to print an image of a fingerprint with enough accuracy to fool fingerprint readers. The technique only requires a standard inkjet printer along with some conductive ink and special paper, which The Atlantic estimates could cost about $450 total.
"Silly putty and wood glue is pretty cheap," Boehnen said, referring to other media that can be used to imitate a finger.
Program funding has not been disclosed. Potential researchers were told to pick detector components that, if purchased in bulk today, would not exceed $5,000 combined.
The price point for poseur-proofing a mobile device is probably lower than for safeguarding biometric access to a national security facility, Boehnen said. But the agency does not want participants spending gobs of money on a machine built around a mass spectrometer or electron microscope, for instance, he said.
The idea is to capture multiple aspects of a live human body part, such as texture, light and three-dimensional structure, using special algorithms and hardware. Pieces could include, say, a motion detector, camera or an ultrasonic device.
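Combining several such cues into one liveness decision can be sketched as a simple weighted fusion. Everything here is an illustrative assumption, not the ODIN program's actual design: the cue names (texture, reflectance, depth), the weights and the decision threshold are all invented for the example.

```python
# Hypothetical liveness cues and weights; each sensor contributes a score
# from 0.0 (certainly spoofed) to 1.0 (certainly live).
WEIGHTS = {"texture": 0.4, "reflectance": 0.3, "depth": 0.3}

def fuse_liveness_scores(scores, threshold=0.6):
    """Return True if the weighted combination of per-cue liveness
    scores clears the decision threshold."""
    combined = sum(WEIGHTS[cue] * scores[cue] for cue in WEIGHTS)
    return combined >= threshold

# A live finger scores well on every cue and passes...
print(fuse_liveness_scores({"texture": 0.9, "reflectance": 0.8, "depth": 0.85}))

# ...while a flat printed image mimics texture and reflectance but fails
# the three-dimensional depth cue badly, dragging the fused score down.
print(fuse_liveness_scores({"texture": 0.7, "reflectance": 0.6, "depth": 0.05}))
```

The design point is that a spoof good enough to fool one sensor in isolation still has to fool every cue at once, which is why the program pairs multiple cheap components rather than one expensive instrument.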
A request for proposals is expected to be released over the next couple of months.
"We picked a number in today's terms that seemed high enough to allow cutting-edge stuff and limit the stuff that's obviously wrong, but not really prevent the groundbreaking solution, which is what we're looking for," Boehnen said.