5 Ways Face Recognition Has Transformed Over the Last 20 Years


Roughly 23 years ago I was standing in a dusty vehicle lane in Otay Mesa, CA, shielding my eyes from the setting sun as a group of us tried to decipher what we were seeing on the computer screen. It was 1997, and I was a young systems engineer working for Electronic Data Systems (EDS) on a contract for the U.S. Immigration and Naturalization Service (INS). After successfully rolling out the first lane for the Secure Electronic Network for Travelers Rapid Inspection (SENTRI), we were now in the midst of a vehicle-based multi-modal biometric test.

The solution included voice and face biometrics, both of which were, to say the least, immature technologies at the time. I won’t cover the voice “system” here, but it relied on a traditional phone handset modified with an embedded computer hanging off it. That should tell you all you need to know about its readiness, or lack thereof.

The face recognition system was provided by Identix, a traditional fingerprint company that had recently acquired the face recognition company Visionics. The camera was positioned in the lane to capture an image of the driver as they stopped at the inspection booth.

Capturing a quality image in the vehicle lane, however, posed a number of problems: the windshield, driver pose, hats, sunglasses, sunlight, and the varying heights of vehicles. It was a constant game of adjustments as the vendor tried to accommodate the conditions.

Perhaps the most entertaining remedy for image wash-out from the sun came when the vendor ran to the nearest hardware store and purchased a section of wood fence. They then set it up in front of the booth in an attempt to block the sun from beaming into the driver’s face. This was not successful.

An excerpt from the 2002 GAO Technology Assessment for Using Biometrics for Border Security.

INS conducted a facial verification test for the Secure Electronic Network for Travelers Rapid Inspection (SENTRI) from November 1997 through July 1998 at California’s Otay Mesa port of entry. The facial verification test involved taking video images of drivers at an inspection booth. The video clips were compared to the SENTRI enrollment database of photographs for all drivers in the SENTRI lane. An Identix system was used for the tests.  

The experiment found that pictures taken in a full frontal enrollment pose showed a significantly higher recognition rate than pictures taken when the head was rotated slightly. It also found a principal identification problem when the image was obtained during validation. Obscured faces that were hidden by part of the vehicle and those with excessive glare or extreme shadows were essentially unusable. In testing, the proportion of video clips exhibiting these properties was initially very high. Adding cameras increased the chance of getting an unobstructed video clip. A new camera system using fuzzy logic helped reduce glare and shadows.  

With these changes, the system was able to get usable images for approximately 90 percent of the vehicles in a lane. With such images, the system had an FNMR of 1.6 percent and a low EER of 2.1 percent. The report concluded that the facial verification system performed admirably in a challenging environment.

My favorite part of the GAO excerpt above is its use of the term “fuzzy logic” for what we would today call “machine learning”.
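For readers unfamiliar with the metrics quoted in the GAO excerpt: FNMR (false non-match rate) is the fraction of genuine comparisons a system wrongly rejects at a given threshold, and EER (equal error rate) is the operating point where the false non-match and false match rates are equal. A minimal sketch of how these are computed from comparison scores (illustrative only, not the SENTRI evaluation code, and the sample scores are made up):

```python
# Genuine scores: comparisons of a person against their own enrollment image.
# Impostor scores: comparisons against someone else's enrollment image.

def fnmr(genuine_scores, threshold):
    """False Non-Match Rate: fraction of genuine comparisons below threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def fmr(impostor_scores, threshold):
    """False Match Rate: fraction of impostor comparisons at/above threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def equal_error_rate(genuine_scores, impostor_scores, steps=1000):
    """Sweep thresholds; the EER sits where FNMR and FMR are (nearly) equal."""
    lo = min(genuine_scores + impostor_scores)
    hi = max(genuine_scores + impostor_scores)
    best = None
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        a, b = fnmr(genuine_scores, t), fmr(impostor_scores, t)
        if best is None or abs(a - b) < best[0]:
            best = (abs(a - b), (a + b) / 2)
    return best[1]

genuine = [0.91, 0.85, 0.78, 0.95, 0.60, 0.88]
impostor = [0.20, 0.35, 0.15, 0.55, 0.40, 0.10]
print(fnmr(genuine, 0.70))  # only 0.60 falls below the 0.70 threshold
```

Raising the threshold trades false matches for false non-matches, which is why a single FNMR number is only meaningful alongside the threshold (or FMR) it was measured at.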

Even though this particular biometric test proved not to be viable for wide-scale production deployment at the time, it was tests like these that laid the groundwork for the face recognition technology we have today. So, more than 20 years later, how much has changed?

Accuracy gains have been massive

Face recognition continues to see dramatic gains in accuracy, not only year over year but month over month. If you monitor the National Institute of Standards and Technology (NIST) Face Recognition Vendor Test (FRVT) website, you will see a constant shuffle on the face recognition algorithm leaderboard. The shelf life of an algorithm’s performance is fleeting, as each iteration of the test brings results significantly better than the last. With the application of machine learning, face recognition performance continues to soar.

In addition to pure accuracy gains, systems have also grown more tolerant of suboptimal images: off-pose angles, poor lighting, and even occlusions are now handled significantly better.

While there remains a large variance in performance across algorithms, a potential for bias, and lingering minimum image quality requirements, face recognition accuracy across the industry has grown by leaps and bounds.

From second thought to first choice

In 1997 the prevailing biometric modality was fingerprint. If you needed a system that was easier to use and without the criminal stigma of fingerprints, you might look at hand geometry. Face was an afterthought, seen as too new and untested: heavily dependent on quality lighting, a controlled environment, and a rigid user experience.

As face recognition has become more accurate, it has also become easier to use. As a result today’s preferred modality when implementing a new biometric system is typically face recognition. The abundance and availability of face images and the frictionless capture experience means face recognition is easier to implement and ultimately easier to maintain. 

With improved accuracy and growing deployment, face recognition had one more hurdle to clear: acceptance.

Acceptance of the technology

In 1997, face recognition fell into two camps, sometimes simultaneously: creepy and cool. It was an interesting technology but lived in relative obscurity for the general public. People didn’t really know how to react to it and typically didn’t think much of it.

In 2013 Apple introduced Touch ID on the iPhone 5s, and just like that, user adoption and acceptance of fingerprint technology skyrocketed. Face recognition followed a similar trajectory when Apple introduced a well-designed face recognition capability, Face ID, with the iPhone X in 2017.

As more and more people use the technology on their consumer devices or within their daily lives, acceptance of the technology grows organically. However, the specific use case for face recognition matters a great deal when it comes to public acceptance.

Moving beyond security

In 1997 face recognition was a niche technology targeted mainly at high-security systems and government applications. As acceptance has grown and accuracy has improved, face recognition is now being used more and more for facilitation benefits.

Using your face for payment or access to controlled areas is now seen as quicker and easier than alternative technologies. Frictionless travel and overall user experiences are improved when a user merely has to look at a system for authentication.

What used to be rooted solely in security is now finding new applications in use cases such as smart advertising, Alzheimer’s treatment, dating sites, and greeting VIPs and hotel guests.

Face as a service

In late 2016 the face recognition vendor industry was turned upside down when Amazon Web Services (AWS) introduced its own face recognition capability, known as Rekognition. Suddenly the floodgates were open for AWS’s massive user community to build and explore biometric solutions.

Face recognition companies have historically controlled the entire stack of a biometric system. If you were interested in implementing face capabilities, you had to find a vendor who offered them and then work with that vendor to procure the hardware and software required to run their proprietary system. This meant installing their matching software on your on-premises servers and working with their Software Development Kit (SDK) to capture an image on the client and submit it to their backend. In addition to paying for custom hardware, you also paid per-identity software licensing costs. All of this added up quickly.

Enter AWS, and suddenly the matching algorithm and hosting were no longer your concern. All you needed to do was submit an image, at minimal cost, via their API. What used to be under proprietary wraps was now openly available to anyone who wanted to use it. Microsoft Azure introduced its own cloud-based face matching system, the Face API, around the same time.
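To give a sense of how little the caller now has to do, here is a small sketch of consuming a Rekognition-style face comparison response. The response shape mirrors the documented AWS `CompareFaces` output, but the helper function and sample values are hypothetical; a real call would go through `boto3.client("rekognition").compare_faces(...)` with actual image bytes and credentials:

```python
def is_match(response, min_similarity=90.0):
    """Return True if any returned face match meets the similarity threshold.

    `response` follows the CompareFaces output shape: a "FaceMatches" list
    whose entries each carry a 0-100 "Similarity" score.
    """
    return any(m["Similarity"] >= min_similarity
               for m in response.get("FaceMatches", []))

# A sample response in the documented shape (values are made up).
sample_response = {
    "SourceImageFace": {"Confidence": 99.8},
    "FaceMatches": [{"Similarity": 97.2, "Face": {"Confidence": 99.9}}],
    "UnmatchedFaces": [],
}

print(is_match(sample_response))        # True: 97.2 >= 90.0
print(is_match(sample_response, 99.0))  # False: 97.2 < 99.0
```

Everything that used to live in a vendor's proprietary SDK and on-premises matcher is reduced, from the integrator's perspective, to one API call and a threshold decision like this.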

Traditional face biometric vendors are starting to understand the state of the market and adjust their models accordingly. Late in 2019, Panasonic introduced its own cloud-based face recognition service, signaling a shift in vendor offerings meant to address the growing threat of AWS and other cloud-based services.

In conclusion

The face recognition industry has grown spectacularly since we were tinkering with it in a dusty vehicle lane on the southern border in 1997. However, there is still so much more to come.

With increased usage comes increased scrutiny. Privacy rights, data protection, and ethical use are recurring and expanding themes across the industry. Algorithms, while under continuous refinement, are also under continuous examination for misuse and bias. Security concerns endure and evolve with threats such as photo morphing, liveness spoofing, and deepfakes.

While a number of questions surround the future of face recognition, there’s no doubt that the technology offers substantial benefits and staying power, even without the use of a section of wood fence.
