Apple’s iPhone X is one of several technologies bringing facial biometrics into the mainstream. It seems to have everything bar a heat scanner; the TrueDepth camera projects an impressive-sounding 30,000 infrared dots on to your phiz, scanning every blackhead in minute 3D detail.
The company claims some impressive figures, and it isn’t the only one touting facial recognition as a mainstream solution. Others include Microsoft, with Windows Hello, and Google, with the Trusted Face technology it released in Android Lollipop. Just how secure are these technologies, and should we rely on them?
There are two metrics that matter when discussing facial recognition systems. The first of these is the false acceptance rate (FAR), which describes how often a device matches the wrong face to the face it has on record. Its converse is the false rejection rate (FRR), which is how often it fails to recognise the correct face.
Matt Lewis, technical research director at security firm NCC Group, has spent a lot of time trying to fool facial recognition systems. He explains that an increase in one error rate decreases the other. The place where they intersect is called the Equal Error Rate.
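That trade-off can be made concrete with a short sketch (illustrative Python, not any vendor's actual code): sweep an acceptance threshold over match scores from impostors and genuine users, and find the point where the two error rates cross.

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    # Accept when the match score reaches the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)  # wrong faces accepted
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)     # right faces rejected
    return far, frr

def equal_error_rate(impostor_scores, genuine_scores, thresholds):
    # The EER sits at the threshold where FAR and FRR are closest to each other.
    def gap(t):
        far, frr = far_frr(impostor_scores, genuine_scores, t)
        return abs(far - frr)
    best = min(thresholds, key=gap)
    return best, far_frr(impostor_scores, genuine_scores, best)
```

Raising the threshold drives the FAR down and the FRR up; lowering it does the reverse, which is exactly the tension Lewis describes.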
The FRR might cause some inconvenience if it stopped you logging into your phone or workstation, or prevented you from getting into a building. But a false acceptance could be catastrophic if it permitted access by the wrong person. Perhaps that’s why facial recognition analysts and vendors tend to talk about accuracy primarily in terms of the FAR.
Lewis categorises three levels of security based on the FAR. A 1:100 FAR would be described as low security (you’d only have to pass a phone around to 100 people and have them scan their faces for one of them to successfully log in). Medium security would be 1:10,000 users, while 1:1,000,000 would pass his high-security threshold.
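Lewis's parenthetical scales in an intuitive way: with independent attempts, the chance that at least one stranger gets in is 1 − (1 − FAR)^N. A quick back-of-envelope check (illustrative Python):

```python
def p_false_accept(far, attempts):
    # Chance of at least one false acceptance across independent attempts.
    return 1 - (1 - far) ** attempts

low = p_false_accept(1 / 100, 100)         # pass a 1:100-FAR phone to 100 people
high = p_false_accept(1 / 1_000_000, 100)  # same crowd, 1:1,000,000 FAR
```

At a 1:100 FAR, 100 strangers give roughly a 63 per cent chance of a break-in; at 1:1,000,000 the same crowd manages about 0.01 per cent.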
The iPhone X seemed to suffer from a false rejection event at its first public demo, when the first attempt to unlock it didn’t work. Apple later blamed this apparent false rejection on the device doing exactly what it was supposed to. The iPhone X requires a passcode after five unsuccessful Face ID authentication attempts, and various people backstage had been messing about trying to authenticate with it, the firm said.
As for the FAR, Apple’s security guide on Face ID claims a 1:1,000,000 FAR, making it a high-security device, according to Lewis’s metrics, and about 20 times as accurate on the FAR side as its Touch ID system, which Apple rates at 1:50,000.
One in a million is the claimed FAR under everyday conditions, but what happens when people deliberately try to fool the system by copying someone’s face and then using it to trigger a false acceptance?
There have been successful attempts to trigger false acceptances on facial recognition systems in the past. Lewis should know, because he engineered one of them.
He used three images of his face – front and both sides – taken on an iPhone 5s to produce a 3D image of his mug, and from there, a $299 full-colour resin mask. He then waved it at both Android Trusted Face and Windows Hello.
Trusted Face is apparently too trusting, because it happily authenticated him. This didn’t surprise him because Google’s guidance says that its facial recognition isn’t as secure as a PIN (why use it then?). Windows Hello was more surprising because the system uses an infra-red camera for more accurate facial scanning, and machine learning to refine its understanding of what you look like.
He worked with Microsoft to get to the bottom of this. Redmond decided that it had been too liberal in choosing samples that helped its facial recognition algorithm learn more about a user’s face. After using repeated facial scans to get better at recognising you, its algorithm got too lax, looking at a Matt-like mask and effectively saying: “Oh, you’ll do.”
Microsoft has since tightened up its approach, and later versions of the algorithm don’t suffer from the same problem, said Lewis’s white paper.
For every successful false acceptance attack on a facial recognition system, designers will come up with an enhancement to the recognition algorithm that thwarts it. You’re trying to use a photo to spoof a system? Fine, we’ll create a system that scans your face in 3D. You’re using a mask? OK, here’s a liveness detector that looks for motion and blinking.
Then researchers will typically come back with a counter-hack. For example, researchers at the University of North Carolina developed an attack (PDF) that modelled colour 3D representations of faces from social media photos in virtual reality that could then be animated.
“The implication was that such spoofing attacks on existing systems could be performed simply by exploiting a user’s public persona, rather than hacking the authentication software (in code or in credential files), itself,” UNC researcher True Price told us.
There have been other attacks on facial recognition systems. For example, researchers at CMU (PDF) successfully triggered false acceptance and rejection on some systems by printing out eyeglasses with different visual characteristics.
Vulnerable to triplets
So how does Apple’s iPhone X hold up? We’re not on Apple’s friends list when it comes to getting review products, but The Wall Street Journal apparently is. They got fondling privileges and tested it in four scenarios: everyday use, using a photograph, using a mask, and using both fraternal and identical twins or triplets. The bad news: identical triplet kids fooled the system (but then Apple explicitly says that the probability of a match for twins is “different” in its security guide, and suggests using a passcode). The good news: in all other scenarios, including masks, Face ID did what was intended. Apparently those 30,000 infra-red dots really do mean something.
So, it’s game over for attackers who aren’t identical siblings, then? Don’t be daft. Security never was and never will be a zero-sum game. It’s a question of quantifiable risk, but the odds are shifting in the defenders’ favour.
“We have come far enough to make spoofing difficult but not impossible,” Lewis says. Not only are the cameras and learning algorithms getting better, but most of the facial recognition is embedded in the endpoint, meaning that you’d have to get physical access to it rather than phish your way into someone’s cloud account, for example. “The risk is going to drop much lower naturally by virtue of how we typically use facial recognition within end-user devices as well.”
Does that mean that facial recognition is driving down the cybersecurity poverty line, enabling more people to get high-security protection as a baseline? And if so, shouldn’t we all rush out and use it?
There’s one big argument against, according to 451 Research analyst Garrett Bekker. “Compromise,” he says. If someone does compromise a facial recognition system – either by stealing the biometric information created during enrolment or by finding a way to fool the system – then you’re stuffed. They have something that you can’t change.
It’s a constant worry, argues Lewis. “Biometrics are always at risk of copying because they’re not secret. That aspect will never go away,” he says.
You might be able to pilfer naked celebrity pics from iCloud, but you won’t be stealing face data from there. The iPhone X stores the biometric data taken during enrolment locally on a secure enclave – effectively Apple’s version of the trusted platform module – and it doesn’t leave the phone.
The prospects get far worse when people do start storing biometric data centrally, warns Merritt Maxim, principal analyst serving security and risk professionals at Forrester Research.
“We’ve already seen some examples of that in the US government OPM breach,” Maxim adds. Some of the stolen data was said to have included fingerprint data used as part of background checks.
This raises some legal questions around storing biometric data for public and private sector organisations alike.
“Under the GDPR [European Union’s General Data Protection Regulation] that’s coming into force next year, there are specific provisions in there around biometric data and the storage and capture of that,” says Lewis. “There are going to be a lot of systems that fall foul of that regulation.” If you store and subsequently lose someone’s biometric face data, the fines could be significant.
So how can you prevent a game-over scenario if your face data goes walkies? There are answers. They just might not be the answers that the typical consumer is looking for.
“The only real solutions there are to use multi-factor authentication, so you have to use your face and a PIN and a token to get a stronger binding of the individual,” Lewis says. But that’s a step backwards, and detracts from the convenience that consumer-facing authentication tech is looking for.
There have been some attempts to handle that. One concept, cancellable biometrics, effectively distorts the biometric image in a repeatable way. If the biometric image is compromised, the authenticating party can change the distortion process, invalidating the stored biometric data and issuing a new version. This all seems largely academic so far, though.
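The idea can be sketched in a few lines (a toy, BioHashing-style illustration in Python; the function and parameter names are invented, and a production scheme would be considerably more careful): project the biometric feature vector through a random matrix derived from a revocable user key, then binarise the result.

```python
import hashlib
import random

def cancellable_template(features, user_key, bits=64):
    # Seed a deterministic RNG from the revocable key, not from the face itself.
    rng = random.Random(hashlib.sha256(user_key.encode()).digest())
    template = []
    for _ in range(bits):
        # Key-dependent random projection of the feature vector, binarised.
        row = [rng.gauss(0, 1) for _ in features]
        dot = sum(f * w for f, w in zip(features, row))
        template.append(1 if dot >= 0 else 0)
    return template
```

The same face and key always reproduce the same template, while re-enrolling under a fresh key yields an unrelated one, which is what makes the stored data “cancellable” even though the face is not.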
Some facial recognition implementations seem a lot more secure than using a PIN or password, while others are provably less so. As with any other cybersecurity mechanism, defence in depth is the best approach, and in this scenario two authentication methods in unison will be more effective than one, and three more effective than two. As ever with facial recognition, and all biometrics, it’ll be a case of keeping ahead of the criminals. ®