Many people have moved to facial recognition to unlock their iPhones, mostly because it is easy.
Researchers wanted to know how secure that is.
For people who use their face to authorize payments, the problem is potentially more serious.
Researchers at Tel Aviv University harnessed deep fakes and that magic word, AI, to figure out what three of the leading facial recognition software packages are looking for.
Then they generated deep-fake faces engineered to hit exactly those features.
They created fewer than a dozen of these deep-fake images – nine, to be exact.
Then they tested these nine fake images against a publicly available database of faces called Labeled Faces in the Wild.
Those nine computer-generated faces were considered a match for 40 to 60 percent of the faces in that database, depending on which software package was being tested.
NINE images matched thousands of faces out of a database of more than 13,000.
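To put those percentages in concrete terms, here is a back-of-the-envelope calculation. It assumes the commonly cited size of the Labeled Faces in the Wild set, 13,233 images; the 40 and 60 percent figures are the range reported above, not per-package results.

```python
# Rough coverage arithmetic for the master-face result.
# LFW is commonly cited as containing 13,233 face images.
LFW_IMAGES = 13_233
NUM_MASTER_FACES = 9

for hit_rate in (0.40, 0.60):
    matched = int(LFW_IMAGES * hit_rate)          # faces matched at this rate
    per_face = matched / NUM_MASTER_FACES         # average coverage per fake
    print(f"{hit_rate:.0%} hit rate -> ~{matched} images, "
          f"~{per_face:.0f} per master face")
```

At the low end that is roughly 5,300 images matched, or nearly 600 identities fooled per single fake face.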
While this was a research project, and some of the systems could be configured to reject flat images, that only means the researchers would need to create 3D versions of those nine faces. Not a high bar to meet.
Researchers say that with more test data they could do even better.
Does this mean that facial device verification is useless?
No, it doesn’t. What it means is that it is a relatively low security authentication mechanism.
Each person needs to decide what an appropriate level of risk/security is for them.
For most consumers, facial recognition is probably sufficient.
Remember that facial recognition is different from iris or retina scans; those use completely different technologies, are much more expensive and complex, and are highly secure.
We have seen similar problems with consumer-grade fingerprint scans.
All of these vendors have to balance how long a consumer is willing to wait for a device to unlock against how many false “failures” that consumer is willing to tolerate.
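That balance boils down to a single match threshold on a similarity score: lower it and unlocks get faster and false rejections rarer, but impostor matches become more likely. A minimal sketch of the trade-off follows; the scores are invented for illustration, not taken from any real system, which would compare embedding distances rather than toy numbers.

```python
# Illustrative false-accept / false-reject trade-off for a match threshold.
# All similarity scores below are made up for the sketch.
genuine = [0.91, 0.88, 0.95, 0.73, 0.86]   # the owner's own unlock attempts
impostor = [0.40, 0.55, 0.62, 0.78, 0.35]  # other people's attempts

def rates(threshold):
    """Fraction of impostors accepted and genuine attempts rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

for t in (0.5, 0.7, 0.8):
    far, frr = rates(t)
    print(f"threshold {t}: false-accept {far:.0%}, false-reject {frr:.0%}")
```

Raising the threshold from 0.5 to 0.8 in this toy data eliminates the false accepts but starts locking out the legitimate owner, which is exactly the annoyance vendors are trying to avoid.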