
Deepfake expert Nina Schick joins us as we discuss synthetic media, Facebook’s latest data fiasco, and some less-than-brilliant April Fool’s tricks.
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast, hosted by cybersecurity veterans Graham Cluley and Carole Theriault.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Hello, hello, and welcome to Smashing Security episode 222. My name is Graham Cluley.
And here you are. So it's really exciting that you're here.
And it's all about the corroding information ecosystem, and how AI-generated visual content, or synthetic media, is the next step in that corrosion.
But of course, when it comes to the future of deepfakes and synthetic media, it is going to be so much bigger than that.
It's actually a profound, I think, paradigm shift in the future of not only content creation but human communication.
And just as it will be weaponized by bad actors for disinformation or misinformation, like all powerful technologies of the exponential age, it's going to actually be transformative for entire industries and not only be used maliciously.
First, let's just thank this week's sponsors, 1Password and Duo Security. Their support helps us give you this show for free. Now coming up on today's show, Graham, what do you got?
It's just, you know— Anyway, carry on.
That's what's leaked out onto the internet and is now available for anyone to download and to access and to scroll through for free.
Oh, I bet they do because people have more than one account. Of course we do, yes. How else are you supposed to stalk people online? You don't use your own account.
So Facebook has had a serious data breach, which is getting it bad press at the moment, and Facebook doesn't appear to me to have actually notified the affected users, which I think is a little bit naughty.
The information which is out there right now is people's full names, email addresses, sex, location, marital status, phone number, occupation, and something called their account ID number.
So they had a bug in their software which hackers were able to exploit in order to access information which they shouldn't have been able to scrape quite so easily.
Now, this data surfaced, bubbled up on a hacking website in the middle of last year.
So there was sensitive information in there, which you probably didn't want falling into the wrong hands. But it didn't include passwords, which would've bumped up the price.
But once some people have got access to some of it, of course, they could sell it on to others at cheaper and cheaper rates.
And eventually, someone selling it thinks, maybe I'll only get $10 for this.
So you could send this bot a Facebook ID. That's the string of numbers associated with your profile.
So even if you've got a Facebook username, which you probably do have, there's also a unique numeric identifier for you.
And it's actually not that hard to find out someone's Facebook ID if you want to. There are websites even which can do that if you can't work it out.
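To illustrate what that numeric ID is good for: Facebook profiles are reachable by ID as well as by username, so tying a leaked ID back to a profile page is trivial. A minimal sketch in Python, using the standard public URL pattern (the example ID is made up):

```python
def profile_url(facebook_id: int) -> str:
    """Build the public profile URL for a numeric Facebook account ID.

    Profiles can be reached via this numeric ID as well as via the
    human-readable username, which is why a leaked ID alone is enough
    to connect a record in the dump to a real person's profile.
    """
    return f"https://www.facebook.com/profile.php?id={facebook_id}"

# Hypothetical example ID, purely for illustration:
print(profile_url(100001234567890))
```

So "account ID number" in the leaked records isn't an opaque internal detail; it's effectively a direct link to the profile.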
So if you were chatting to someone online, but you weren't able to get in touch with them any other way, you could have used that facility to get their phone number.
Well, now this data which was feeding the bot, the data which was previously available for $30,000 and then reduced to $10, is now available to everyone at the bargain price of zero. Anyone can now go and get it.
Now, you would imagine that this is a PR disaster for Facebook, that everyone's talking about this, and that Facebook's corporate communications department has leapt into action with a really strong message to reassure people.
And what they've said is: this is old data that was previously reported on in 2019. We found and fixed this issue in August 2019.
It might have been grabbed, you know, maybe a year and a half ago. But I personally haven't changed my name since 2019. I haven't changed my email address, my sex, my phone number.
They call it old data. It still works.
Facebook, it seems to me, is trying to argue that this isn't really a data breach. It's just what you signed up for when you created a Facebook account.
I mean, there have been instances before.
Facebook did, round about 2019, I remember, leave probably not as many as half a billion records, but tens of millions of records, on an unsecured Amazon Web Services bucket, which then fell into people's hands.
And so what's the danger of this, right, is not only that they know your sex and your vague location and, you know, all that kind of information, but your phone number.
And if they know your phone number, they could potentially hijack it. You know, we talk a lot about these sorts of SIM-jacking attacks.
And if the bad guys were able to hijack your phone number, which we know from past attacks does work, then they could break into maybe not just your Facebook account, but other accounts as well.
Now they know the phone number associated with you, which isn't good.
Now, one of the discoveries: Mark Zuckerberg's own details were in the leak, and once his phone number became public, people began to look for it in other services. And what they found was that Mark Zuckerberg has a Signal account.
He uses the end-to-end encrypted messaging service Signal, which of course is very privacy conscious.
So on Have I Been Pwned you can now search for your phone number, rather than just your email address, to see if it was caught up in the breach, which seems like a good idea to me.
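As an aside, Troy Hunt loaded the leaked phone numbers into Have I Been Pwned in international E.164 format, so a search needs the country code included rather than a local number. Here's a rough, simplified sketch of that kind of normalisation in Python; the default country code and trunk-prefix handling are illustrative assumptions, and a real implementation would use a proper library such as python-phonenumbers:

```python
import re

def to_e164(raw: str, default_country: str = "44") -> str:
    """Normalise a phone number to a rough E.164-style string.

    Simplified sketch: strips punctuation, converts a leading '0'
    trunk prefix to the given country code, rewrites a '00'
    international prefix, and prepends '+'. Real-world normalisation
    has many more edge cases than this.
    """
    digits = re.sub(r"[^\d+]", "", raw)
    if digits.startswith("+"):
        return digits
    if digits.startswith("00"):
        return "+" + digits[2:]
    if digits.startswith("0"):
        return "+" + default_country + digits[1:]
    return "+" + digits

print(to_e164("07700 900123"))   # UK mobile -> +447700900123
print(to_e164("001-555-0199"))   # 00-prefixed international -> +15550199
```

The point is just that "my number isn't in the breach" results can be false negatives if you search in local format.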
But by the way, the interesting thing about Signal, I think, is I generally like Signal. I use Signal. You use Signal, don't you, Carole?
It's quite a good encrypted messaging service, although it does require your phone number to sign up.
And some services don't require that. And here's an indication of why requiring a phone number is not such a good idea, because now everyone knows Zuck is on Signal. So it's not good.
But anyway, for the rest of us, this leak could allow bad guys to exploit the information— social engineering, scams. So watch out for spam calls, etc., etc.
And if this is the thing which makes you want to quit Facebook, check out Smashing Security episode 75.
And essentially, a deepfake is a piece of synthetic media, that's to say, a piece of media that's been either manipulated by artificial intelligence or entirely generated by artificial intelligence.
And it can come in the form of video, audio, or images.
And the really amazing ability of AI to actually make fake media, in some cases from scratch, is really nascent. It's due to the revolution in deep learning over the past decade, which has to do with the mass availability of data and the ability of computers to churn through it all.
And that's really only been possible for about 5 years. 2014 was the first big breakthrough paper.
But since then, since it started emerging on the bleeding edge of AI research, it's really hit the public imagination.
And one of the astonishing things about synthetic media is AI's ability to recreate humans. And this is manifesting in two ways. So scary.
It is scary because until now, even with the best special effects or CGI, computer graphics, there's been this idea of something called the uncanny valley.
And that's where the more you try to make something look or appear human when it's not, the more unnatural it becomes, until it evokes almost a reaction of disgust in us.
So that's why creepy robots are—
That Christmas thing on a train, and it had all these humans, but there was something a bit spooky about them all. They were trying to look human, but they weren't quite doing it.
And the fact it was Tom Hanks as well made me slightly uncomfortable, because I'm not a big Tom Hanks fan.
But yeah, it just felt weird because it was almost there but not quite.
You know this theory, right?
The first is the use of deepfakes or synthetic media to create entirely AI-generated people who don't exist.
And a good example of that is the website thispersondoesnotexist.com. Every time you refresh the page, you get a GAN-generated image of a human who doesn't exist, and they look so real that you or I wouldn't be able to tell that it's not an authentic image but a synthetic one.
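For the curious, a GAN is a generative adversarial network, the technique from the 2014 breakthrough paper mentioned earlier. Roughly speaking, it trains a generator G against a discriminator D in a minimax game, with the standard objective:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator learns to tell real images from generated ones, the generator learns to fool it, and as training progresses the generated faces become harder and harder to distinguish from real photographs.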
But as the technology accelerates, it's going to be the same with synthetic voices and also synthetic film, right? So videos.
But the second way this amazing ability of AI to recreate humans is manifesting is in its ability to clone real humans, right? And hijack biometrics.
Because all that you need to do in order to recreate someone synthetically is get the right training data.
In this case, it might be images of that person, video of that person, or audio of that person's voice, and train your algorithms on that training data in order to basically clone that person.
And here's an example of how scary quick this technology is advancing.
When deepfakes first came out, at the end of 2017, it was really difficult to synthetically recreate someone's voice.
And I was working with an AI company at the time, and we were running experiments to see how easy or difficult it would be to synthetically recreate Donald Trump.
And we had to use hours and hours of his voice as training data to train our algorithms, over, you know, 3 or 4 months. And in the end, we had something that sounded a bit like him.
I mean, it was pretty impressive, you know, that this was all AI-generated, but it didn't sound perfectly like him.
But I can actually provide for your show notes, we did a little article at the time with, I think it was CNBC, where we did a little quiz where it's like, can you guess which one's real Trump or fake Trump?
But now, in 2021, just over 3 years later, there are already companies out there who say they need just 5 seconds of somebody's voice in order to be able to recreate it perfectly using AI.
So obviously from a—
'This has been deliberately manufactured.' And that's the other problem, I suppose, with deepfakes, is not just dodgy content, but also that things that really did happen can be excused or explained away.
Because right now the technology isn't ubiquitous.
And I should say that the other really potentially scary thing about deepfakes is that the AI is going to do the heavy lifting, right?
So creating this kind of sophisticated fake content before would have been only in the domain of an extremely well-resourced actor like a Hollywood studio or a state actor.
But AI is going to democratize it. So by the end of the decade, it will be accessible to anyone with no special skills, no big budgets, and on easy to use platforms.
To use interfaces like software, smartphone apps, things like that.
But before that happens, the malicious effect of deepfakes and synthetic media is already that they undermine trust in all authentic media.
And that's a phenomenon called the liar's dividend. And as for Trump, he started saying that about the, you know, the "grab them by the pussy" tape as early as 2017.
I mean, in 2016, he said, okay, locker room talk. Yeah, he apologized churlishly. But by 2017, he was already saying it's a fake.
Because it was the George Floyd death video that united millions of people, not only in the United States, right, but around the world in protest, because that was so visceral, so powerful.
It was so symbolic.
At the time, as I was watching that, I didn't watch the whole video because it was too brutal, but as I was watching this anti-racism movement unfold and also picking up on how polarizing it was politically, I was thinking to myself, you know, it won't be long before the authenticity of that video is litigated.
And it happened two weeks afterwards, and it didn't come from some kind of 4chan troll or anonymous person on the web, but from an actual African-American candidate who was standing for the House.
She has a PhD. Her name is Dr. Winnie Hartstrong, and she basically released a 23-page paper arguing that the entire George Floyd video is a deepfake hoax.
That the guy in the video is an ex-NBA basketball player who looks a little bit like George Floyd, and that George Floyd's face had been swapped onto his, and that the police officer Derek Chauvin is this retired game show host.
And she didn't— I saw it. Yeah, it's crazy. You should read the paper. The thing is, I saw it because I was obviously monitoring this phenomenon known as the liar's dividend.
And in 2020, okay, it didn't get that much currency, but she still launched a website, she went on numerous podcasts, you know, she was really outspoken on social media about her theories.
But in 2024, or in 2028, or in 2030, when there is no more trust in the information ecosystem, when it is inundated with synthetic media, nobody will be able to tell what's authentic and what's synthetic.
You can see how even a video like that, which today is still widely accepted as something that happened, will just become a matter of opinion.
I don't even know where I saw it, somewhere on my feeds. But the concept is, you basically upload a picture of someone who has died, right?
Say your grandmother, and then they'll make that picture move in a way that she'll—
There's a really profound philosophical debate to be had here because, as I said, this unique ability of AI to recreate someone's biometrics is relevant even to those who are dead, right?
You literally have this ability to resurrect the dead. So there's some amazing deepfake content out there on YouTube.
Right now, there is a project about resurrecting James Dean, you know, the dead actor, synthetically in a film, to make an entire new movie with him. This is something being worked out with James Dean's estate.
And it feels like the end of civilization as we know it. Am I right to think that? Are we all completely and utterly doomed, or is there any chance we're going to survive this?
Because I feel quite negatively about it all.
And the first widespread malicious use case of deepfake technology is in non-consensual pornography. I mean, it's really similar to the origins of the internet, right?
When people were saying, oh, this thing will never take off, this is just for weirdos who want to share porn.
And, you know, look at us 30 years later where, you know, the internet is synonymous with—
But since then, there's been an entire deepfake porn ecosystem that's flourished online. It's a uniquely gendered phenomenon.
There is no deepfake porn of men, but every single female celebrity or K-pop star, Ivanka Trump, Ann Coulter, you name it, you can find deepfake porn of almost every woman in the public eye.
But alarmingly, it's not just famous women who are targeted. It's increasingly normal women as well.
So if someone uses your image, and someone uses your voice, and someone makes you do something that you're completely not comfortable with or didn't agree to, there's absolutely fuck all you can do.
And right now, if you are the victim of deepfake porn, what can you do? There were early instances where they basically put women's faces into authentic porn videos, right?
So if you wanted to have that content taken down, the best route was to get a copyright claim from the production company that made the actual porn film.
And you know, there has to be some kind of products and services developed for individuals, because what could be more damaging than having your identity hijacked in this way?
So going back to Graham's question, though, there are obviously devastating downsides, and this technology is going to be weaponized not only against women.
I actually find the porn case study as a harbinger of what's to come, right?
Because this principle that you can clone anyone and hijack anyone's biometrics is obviously going to be used in fraud, right? Obviously going to be used for spear phishing.
Obviously, and we're starting to see the first instances of that.
There was a case in 2019 where the CEO of a British energy company was conned out of a quarter of a million dollars because he thought he was speaking to the CEO of his parent company, but it was actually fraudsters using AI-assisted voice technology.
But more than that, it is actually a paradigm change in the way that we communicate and actually the way that we perceive the world, because it's going to transform entire industries like fashion, entertainment, sport.
It's not only going to be used for bad, but it's also being used for real good.
There are companies out there that are using synthesized voice to help people who've lost their ability to speak through stroke or Parkinson's or any number of diseases, you know, to resurrect their voice, give them a voice back.
So again, it's far too basic to say, oh, this is all bad. You know, of course the technology is going to be weaponized by malicious actors.
However, to me, it's just another case study of the profound technology-led exponential changes that are happening to our society.
I mean, arguably, we're going to see more change in our lifetime than the entirety of humanity that came before us did, right?
Because a lot of our institutions, for example our legal system, just can't deal with a challenge like deepfake porn. What do you do?
How do you reconstruct society so it's fit for purpose? That's really the big question.
And so she texts me and she's like, I'm pregnant.
We convinced the people that wear the blue Brooks Brothers shirts and the sports slacks to let us put out some outrageous April Fools, which, you know, maybe today would be considered irresponsible.
But at the time, in the olden days, they were quite fun. And I agree, I think now it would be a bit remiss to do it.
I don't think I would be doing it if I was, you know, the head of a corporate entity. And it's actually the second year that Google hasn't done April Fools'.
So I thought that was cool.
So I have found, however, that a number of corporations decided to go ahead and do some April Fools', and I thought we'd go through them and you guys could say success or fail.
So Sky Mobile, okay, they announced this year that they were launching a new SIM tariff for pet owners so they could continue to share more pictures of their pets online.
And they claimed it came with a free data allowance for a whole year.
Volkswagen, meanwhile, briefly posted, then removed, a press release on its website announcing it was changing its name to Voltswagen, in an effort to promote electric vehicle purchases.
And you're thinking, okay, so it was a joke. It was a joke.
Now, the car industry influencers, right, they say this is super not funny.
Thom Morton, chief strategy officer at a New York advertising firm, said, "This is mainly being done by fast food brands where the stakes are lower and they need a bit of hoopla." Okay, so you shouldn't be joking about electric car branding.
You wouldn't write about that in your top 10, you know, top April Fools, really, whatever.
So Deliveroo in France sent thousands of customers an email confirming an order, hilarious, get this, 38 anchovy pizzas, okay, worth 400 quid or about $500.
And this was sent to their inboxes for them to kind of receive and go, "Ah, zut alors, c'est drôle, poisson d'avril." Yeah.
Now, according to the BBC, these fake invoices included the customer's first name, not the full name, but first name. I think that would have gone, hmm.
And preceded by the words, "Excellent choice." And Deliveroo added that as a loyalty reward, 50 sachets of hot sauce were going to be thrown in for free.
One customer almost had a stroke on the BBC after receiving this fake order.
Loosen your tie.
Number one, many, many people in France, French people like me like anchovies. Okay, anchovies and bread is a normal thing.
There's a thing called pissaladière, which is like a kind of French tart with loads of anchovies on it. It's delicious.
Okay, so it's like me sending you 38, you know, I don't know, pepperoni pizzas. You'd be like, oh, maybe I did order that last night. You know, it's one of those things.
And also, an invoice is not funny. When is an invoice funny? Like, when? Never! If someone sent me an invoice for 38 hot pink toupees, I would be like, "Oh, oh, shit.
What happened? Husband? Husband?" Right?
But, you know, Deliveroo did face the music and apologize publicly, and it called it a failed April Fools' joke, which I think is fair.
You know, everyone's allowed to fail because the spirit of it was good. I saw—
Something more absurd, seeing as it's Deliveroo? Wouldn't that have been more amusing, or is that just my sense of humor?
All right, Graham, this is the one to give you a heart attack, so don't, don't, you know, don't start breathing crazy yet.
Okay, so tweeting to his almost 8 million followers, Piers Morgan announced that ITV had offered him a return to Good Morning Britain after his exit from the show last month, having heavily criticized remarks by Meghan Markle.
And I said, every time I think of Piers Morgan, I throw up a little bit in my mouth, I said.
And so that got— I did tag him on that reply, and that got me a bit of a dick move, actually.
You know, an April Fools' is maybe not a good idea until people get back on their feet, especially pretending to charge them 500 quid when people are trying to scrape their money together for monthly outgoings.
So, you know, a tap on the wrist for that, not well thought out. But, you know, I do think that I don't want the April Fools' joke to go away.
I think there should be one day, there's like, you know.
I think it can show the true spirit of humanity. The people that power corporations should be accountable, sure, but they should put their brains together and come up with something good.
You know, after all, they're being paid. Geez, I could come up with 5 better ones than this off the top of my head. Ah, honestly.
Well, a password generator tool creates strong, unique passwords that are saved and filled in automatically.
Features like Watchtower alert you to any issues with your employees' accounts, giving you oversight and more security control, and you can get notified immediately when a breach occurs with domain breach reports.
Find out more. Check out 1Password for yourself at 1password.com. And thanks to 1Password for supporting the show.
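As a rough sketch of what a password generator like the one just described does under the hood (this is a generic illustration in Python's `secrets` module, not 1Password's actual implementation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and symbols.

    Uses the cryptographically secure `secrets` module rather than
    `random`, so the output is suitable for real credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Retry until the password contains at least one character from
    # each class, which many sites require.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

The key point of a manager is that because the tool remembers the output for you, every password can be long, random, and unique.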
At Duo Security, it's their mission to make application access more secure for organizations of all sizes.
Its modern access security is designed to safeguard all users, devices, and applications so you can stay focused on what you do best.
So, want to proactively reduce the risk of a data breach, verify users' identities, gain visibility into every device, and enforce policies to secure access to every single application?
Thought you would. Why not give your organization the peace of mind that only complete device visibility can bring? Visit duo.com to sign up for a 30-day trial. That's duo.com.
I mean, how easy is that to remember?
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app. Whatever they wish. Doesn't have to be security-related necessarily.
And there is a wonderful section of the BBC archives.
I do love the BBC archives and trawling through it, which details the 8 generations of video game consoles with lots of retro TV clips from yesteryear, going back as far as Pong, if you remember Pong in 1972.
Oh my goodness. Apparently that's when it came out. The Grandstand, which is called something else in America. I can't remember what.
The Atari 2600, and then on to the Nintendos, Sonys, Microsofts, et cetera, et cetera. Some clips feature friend of the show Rory Cellan-Jones.
And that's my pick of the week.
It is a drama series called The Terror, and it's based on the real-life story of one of the greatest mysteries of naval exploration.
It was the 1845 attempt to sail the Northwest Passage. So these two boats set off from England to try and sail from the Atlantic to the Pacific.
And the two ships, Erebus and the Terror, off they set, the best ships of their time.
And these two ships unfortunately got stranded in the Arctic ice, and they had three years of provisions, and they were sure that they'd be rescued.
But after I think it was two years, they decided that no one was coming for them, so they decided they had to trek out of there and try to make it into Northern Canada.
And it is just the craziest story, because no one really knew what happened. Eventually, the ships just disappeared.
Many years later, they came across the bones of some of the crew, and it turned out that they had turned to cannibalism at the bitter end. So—
And it features just one-on-one conversations, interviews with Fran Lebowitz. Now, you may not know who she is.
And, you know, they're still friends. She's in her 70s now. And she says of him, the kind of connection we have is really rare. It's not the same as true love and romance.
But there's something chemical about it. Something just happened.
So it's really, really brilliantly done because Scorsese's always behind the camera. You hardly see him. You see his shoulder, right? You hear an encouraging laugh.
You hear him nod her on. But it's all about Fran. And she's this kind of wit, raconteur person. And she's hilarious.
She's kind of— She'd hate me to— people are gonna hate me for saying this, but she's kind of like Diane Keaton and Woody Allen rolled into one with a sprinkle of, you know, I don't know what.
And she has this great hyperbole that comes out in her outrage about New York, like the lawn chairs that were put out in New York costing $70 million. I mean, $70 million.
So she has a lot of that. Anyway, I love it.
She's just this kind of local star in a small New York pool. Anyway, go check it out. It's on Netflix. It's called Pretend It's a City with Fran Lebowitz and Martin Scorsese.
And I think it's fascinating. Good.
I mean, she's been doing that for 30, 40 years. I kind of feel like she's trapped in her look, but there you go.
Anyway, on that note, that just about wraps it up for this week.
Nina, I'm sure lots of our listeners would love to follow you online and find out what you're talking about and learn more about you. What's the best way for folks to do that?
And we're also up on Reddit, so look for the Smashing Security subreddit up there.
And to ensure you never miss another episode, follow Smashing Security in your favorite podcast apps such as Spotify, Google Podcasts, and Apple Podcasts.
And for episode show notes, sponsorship information, guest lists, and the entire back catalog of more than 221 episodes, check out smashingsecurity.com.
Hosts:
Graham Cluley
Carole Theriault
Guest:
Nina Schick – @NinaDSchick
Show notes:
- Stolen Data of 533 Million Facebook Users Leaked Online — Business Insider.
- Mark Zuckerberg is on Signal — Dave Walker on Twitter.
- The Facebook Phone Numbers Are Now Searchable in Have I Been Pwned — Troy Hunt.
- Facebook isn’t sorry for letting someone steal personal details of half a billion users — Graham Cluley.
- Smashing Security episode 75: Quitting Facebook.
- Deep Fakes – the coming infocalypse. — Nina Schick.
- This Person Does Not Exist.
- 'Deepfake' AI Trump impersonator highlights election fake news threat — CNBC.
- Past Google April Fools Pranks As It Cancels 2021's Over COVID — Newsweek.
- "Joke" tweet by Piers Morgan — Twitter.
- The joke is on Volkswagen after April Fool’s name change debacle — Al Jazeera.
- Deliveroo April Fool's joke backfires in France — BBC News.
- The 8 Generations of Video Game Consoles — BBC Archive.
- The Terror — BBC iPlayer.
- Pretend it's a city — Netflix.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
- Support us on Patreon!
With 1Password you only ever need to memorize one password. All your other passwords and important information are protected by your Master Password, which only you know. Take the 14-day free trial now at 1password.com
While remote work has been on the rise for years now, the recent rapid expansion of work-from-home culture presents new security challenges. Duo Security makes application access more secure for organizations of all sizes. Its modern access security is designed to safeguard all users, devices, and applications – so you can stay focused on what you do best.
Proactively reduce the risk of a data breach, verify users' identities, gain visibility into every device and enforce policies to secure access to every application. Give your organization the peace of mind that only complete device visibility can bring. Visit Duo.com to sign up for a free 30-day trial.
Follow the show:
Follow the show on Bluesky at @smashingsecurity.com, on the Smashing Security subreddit, or visit our website for more episodes.
Remember: Subscribe on Apple Podcasts, Spotify, or your favourite podcast app, to catch all of the episodes as they go live. Thanks for listening!
Warning: This podcast may contain nuts, adult themes, and rude language.

