ESET’s Jake Moore used smart glasses, deepfakes and face swaps to ‘hack’ widely used facial recognition systems – and he’ll demo it all at RSAC 2026
13 Mar 2026 • 2 min. read

Facial recognition is increasingly embedded in everything from airport boarding gates to bank onboarding flows. The widely held assumption is that a face is hard to fake and that matching a live face to a trusted source is a reliable identity signal.
Jake Moore, ESET Global Cybersecurity Advisor, recently put this assumption through several practical stress tests. His experiments showed that this powerful technology can in fact be both misused and defeated.
In one test, Jake used a pair of modified off-the-shelf smart glasses that can identify people in real time. He walked through a public space, captured people’s faces and compared them against publicly available online data sources, with identity matches returned within seconds. Names and social media profiles were pulled from nothing more than people’s glances.
This ability might come in handy if, say, a conference attendee struggles to remember people’s names, but it’s far less palatable when you consider what someone with ill intentions could do with that information.
The second demo had a different spin. It went after financial services, turning a fraud prevention system against itself. Using AI-generated images and freely available software, Jake created a fictitious face to open an actual bank account. The bank’s facial recognition and eKYC (know your customer) platform accepted it as a real person.
After proving the point, Jake closed the account and shared all the details with the bank, which has since shut down that particular method of identity abuse. But one broader question remains: how many financial institutions may be vulnerable to this kind of attack?

Finally, Jake added himself to a facial recognition watchlist at a busy train station in London. He then walked through the monitored area while running real-time face swap software that overlaid Tom Cruise’s likeness onto Jake’s own in the camera feed. The system, which can also be used by UK police, never recognized or flagged him. It was as if he simply wasn’t there, and anyone actively looking for him on CCTV would have seen the actor instead.
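A real-time face swap works by locating the face region in each camera frame and compositing a synthesized likeness over it before the frame reaches the recognition pipeline. The toy sketch below shows only the compositing step, with a NumPy array standing in for a video frame; the function, the box format and the alpha blending are hypothetical illustrations, and the software Jake actually used is far more sophisticated.

```python
import numpy as np

def overlay_face(frame: np.ndarray, swap: np.ndarray, box: tuple, alpha: float = 1.0) -> np.ndarray:
    """Composite a swapped-face patch onto a copy of the frame inside the
    bounding box (y, x, h, w). alpha=1.0 fully replaces the region;
    lower values blend the patch with the original pixels."""
    y, x, h, w = box
    out = frame.copy()
    region = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha * swap + (1 - alpha) * region).astype(frame.dtype)
    return out

# Toy 8x8 grayscale "frame" with a white 4x4 patch swapped in
frame = np.zeros((8, 8), dtype=np.uint8)
patch = np.full((4, 4), 255, dtype=np.uint8)
out = overlay_face(frame, patch, box=(2, 2, 4, 4))
```

In a real attack this runs on every frame fast enough to keep up with the camera, so the recognition system only ever sees the substituted face.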
There’s much more to these experiments than we can cover here – they’re all part of Jake’s talk at RSAC 2026, taking place in San Francisco from March 23rd-26th, 2026. If you’re at the conference, consider attending the talk – after all, seeing this all work against an in-production system in a live setting is different from ‘just’ reading about it. To learn more, including about other ESET talks at the conference, visit this website.
The big picture
Facial recognition systems are being deployed with implicit trust that doesn’t match their actual resilience when someone tries to break them – even when attackers use only off-the-shelf consumer hardware and easily available software, just as Jake did. Identity verification that depends solely on a face match clearly carries more risk than most people and organizations realize.
The experiments also send a message to vendors of facial recognition systems and anyone responsible for identity verification. Among other things, these systems should be tested in attack simulation settings and under other adversarial conditions. The technology behind facial recognition is fragile in ways that matter when someone attempts to subvert it.

