Facial recognition fail allows politician’s kids to access his laptop

Not all facial recognition is successful at telling the difference between the genuine face of an authorised user and, say, a photograph of the user.

Someone who has realised that in recent days is Matt Carthy, an Irish politician serving as an MEP at the European Parliament.

With new elections for Europe just around the corner, Carthy can’t be the only politician liberally distributing photographs of his smiling face through the letterboxes of potential voters.

But in Carthy’s case it wasn’t a stranger who managed to subvert the facial recognition on his HP laptop. It was his own kids, who found that a photograph of their dad’s face was enough to fool it.

The problem with using your face or your fingerprints as the magic key that will unlock your device is that these are things which simply aren’t secret.

If your laptop or smartphone isn’t sufficiently adept at telling the difference between you and a mugshot of you, or between your fingerprint and a 3D-printed copy of it, you might be putting your sensitive data and privacy at risk.

Not a great advert for HP’s facial recognition.


Graham Cluley is an award-winning keynote speaker who has given presentations around the world about cybersecurity, hackers, and online privacy. A veteran of the computer security industry since the early 1990s, he wrote the first ever version of Dr Solomon's Anti-Virus Toolkit for Windows, makes regular media appearances, and is the co-host of the popular "Smashing Security" podcast. Follow him on Twitter, Mastodon, Threads, Bluesky, or drop him an email.

One comment on “Facial recognition fail allows politician’s kids to access his laptop”

  1. Robin Davies

    Back in the 1980s there was a lot of research into signature verification as a means of proving identity. Many devices were used, the most successful of which was a "ballistic" pen. But in the end they all failed, for a very simple reason: if you set the system tolerances to ALWAYS recognize the right person, it will also sometimes recognize the wrong person. If you set them so that the system NEVER recognizes the wrong person, it sometimes also declines to recognize the right person. Neither is acceptable, so the research ended. Doubtless facial recognition suffers from the same dilemma.

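Robin’s point about tolerances is essentially the trade-off between the false accept rate and the false reject rate by which biometric systems are judged. As a minimal sketch (using entirely made-up match scores, not data from HP’s or anyone else’s face recognition), here is how moving a single similarity threshold merely shifts errors from one side to the other rather than eliminating them:

    # Toy numbers only: these scores are invented for illustration, not taken
    # from any real face-recognition system.
    genuine_scores = [0.92, 0.88, 0.75, 0.81, 0.69, 0.95, 0.84]   # the laptop's real owner
    impostor_scores = [0.41, 0.55, 0.72, 0.38, 0.66, 0.79, 0.30]  # photographs, lookalikes

    def error_rates(threshold):
        """Return (false accept rate, false reject rate) at a given match threshold."""
        false_accepts = sum(s >= threshold for s in impostor_scores)
        false_rejects = sum(s < threshold for s in genuine_scores)
        return false_accepts / len(impostor_scores), false_rejects / len(genuine_scores)

    for threshold in (0.6, 0.7, 0.8, 0.9):
        far, frr = error_rates(threshold)
        print(f"threshold {threshold:.1f}: false accepts {far:.0%}, false rejects {frr:.0%}")

Lower the threshold and the impostor column empties at the expense of the genuine one; raise it and the reverse happens. A system that can be fooled by a photograph has, one way or another, settled on the forgiving side of that curve, or isn’t checking that it is looking at a live face at all.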