
iPhone photos come back from the dead! Scarlett Johansson sounds upset about GPT-4o, and there’s a cock-up involving celebrity fakes.
All this and much more is discussed in the latest edition of the “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by special guest Anna Brading of Malwarebytes.
Plus! Don’t miss our featured interview with Sandy Bird of Sonrai Security.
Warning: This podcast may contain nuts, adult themes, and rude language.
Show full transcript ▼
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
iPhone undeleted photos and stealing Scarlett Johansson's voice with Carole Theriault and Graham Cluley. Hello, hello, and welcome to Smashing Security episode 373.
My name's Graham Cluley.
Now, coming up in today's show, Graham, what do you got?
This company takes a pretty cool approach to securing the cloud, and Sandy tells us how it works. All this and much more coming up on this episode of Smashing Security.
And I actually thought maybe during this little part of the show, as it's about cock-ups, I might give you a list of legendary cock-ups.
But what I'm thinking of is things like, do you remember the Mars Climate Orbiter, which NASA launched into space in 1998, planning to study the Martian climate and the surface?
And so, crunch! It crashed. It got confused. $125 million down the drain. It'd gone all the way to Mars and then just crashed because of a simple engineering error: one team had worked in metric units while another used imperial. Oh dear. Oh dear.
All you had to do was spend more than £100 on Hoover products, they said, which was significantly less than the cost of the actual flights.
Now Hoover, if you'll remember, they'd been relying on customers being unwilling to go through the complex application process.
I mean, obviously a bit sad for those people, but I love a cock-up like that.
We wouldn't want them interfering with things. If we had a good idea, we just went for it, didn't we?
So along those lines, I was delighted to read an article on 404 Media, a great outlet for all the gossip from that side of things, about how some folks are getting just what they deserved after a blunder.
There are, and this will shock you, some grubby little scumbags out there who are using artificial intelligence to generate non-consensual sexual images of celebrities. Right?
So, you pick someone famous that you fancy, like Scarlett Johansson or Demi Moore or Lassie, or, you know, someone famous.
Instant reaction. Let's hear it.
One of these services which offers this facility is called Aesthetic Illusions. Sounds harmless enough, doesn't it?
And they have an account up on Patreon, just like Smashing Security does, folks. But unlike us, they charge $60 a month for their service.
And according to its Patreon page, it will create more than 5,000 images a month. And they've already made over 53,000 images.
Now, a journalist at 404 Media, he signed up for this service, presumably just to see what was there.
So he found this and obviously he went to Patreon and said, this appears to be against your no naughty bits rule. And Patreon shut it down, right? Very good. Very, very good.
But then the problems really started because as he tells it in his report, he subsequently received an email.
The email said he could still create the AI-generated images of celebrities which subscribers had requested before Patreon sort of closed the door.
In fact, in the message he said, hi friends, it looks like the inevitable happened. My Patreon's been nuked. Obviously terrible timing since I literally just quit my day job. Yikes.
But then, you know, he's got people paying $60 a month and creating 5,000 images a month. It does sound like a full-time job, doesn't it?
So he sent that email to subscribers after Patreon shut the account down. Now there's just one little problem with the email. And that is that the email was CC'd.
It didn't just go to individual subscribers. It went to something like, well, at least 35, 36 other users of the service.
And some of those email addresses, of course, included people's full names and profile pictures because you use Patreon for all kinds of things.
I support a number of Doctor Who podcasts and things like that on Patreon, because why wouldn't you?
But if I was supporting something a little bit shady, it would be the same email address and my same name, which would be used up there.
In the past, we've seen the Ministry of Defence, they've put lives of Afghan citizens at risk by using CC rather than BCC.
We've seen possible child abuse victims being exposed by the police. We've seen people who have HIV being outed.
We saw people who were bidding for bitcoin connected with the Silk Road. Sonos, they did a blunder as well. There's been a whole series of these things time and time again.
And I actually say, huzzah, isn't this fantastic? Isn't it great that sometimes the cybercriminals screw up like this? And thank goodness for screw-ups.
Because this hopefully will remind people that you've got to be a little bit more careful if you want to access your non-consensual porn, or maybe just avoid it altogether, because you could be sharing your personal information with people who, by the very definition of what they are doing, don't give a damn for people's privacy because they're creating this stuff.
So why should they take proper care of your details as well? So buyer beware. Beware is what I'm saying.
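The blunder above hinges on a simple mechanical distinction: BCC recipients belong in the SMTP envelope, never in the message headers. Here's a minimal Python sketch, with hypothetical addresses, of building a mailshot whose recipients can't see one another:

```python
import smtplib
from email.message import EmailMessage

def build_announcement(subscribers):
    """Build one message whose recipients cannot see each other.

    BCC addresses must never appear in a header; they are supplied
    separately as the SMTP envelope recipients at send time.
    """
    msg = EmailMessage()
    msg["From"] = "news@example.com"
    msg["To"] = "news@example.com"   # a neutral, visible address
    msg["Subject"] = "Service update"
    msg.set_content("Hi friends, ...")
    return msg

def send_announcement(msg, subscribers, host="localhost"):
    # The subscriber list goes only into the envelope, not the headers,
    # so no recipient ever sees another recipient's address.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg, to_addrs=subscribers)
```

Putting everyone in a Cc header instead, as in the story, writes the full recipient list into the copy every subscriber receives.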
So maybe you're on a fitness journey, Graham, and want to document your body changes. Maybe you've done a bit of sexting, Carole.
All that is to say, there are some photos on my camera roll that I don't want to keep on my camera roll.
So I'll take them, use them for whatever purpose I need to use them for— wink wink, no judgment— and then I'll delete them.
Because yeah, even though my phone is mine and no one has access to it, there are some things I don't want or need to be reminded of.
However, I'm doing a house renovation at the moment and people are often asking to see photos. So even though my phone is my own phone, people want to look at them.
And I say, let me get my camera out and show you. And people scroll through my phone and do they see—
And what is exciting is cross-platform tracking detection. So it will tell you now if a tracking device is moving with you, which is really good for anti-stalking measures.
Anyway, great, all good so far.
Those medical photos, Graham, that someone took to keep an eye on a mole they had, were back on the phone.
So people are freaking out, and lots and lots and lots and lots of people were posting on Reddit saying, what is going on?
After that, it says it will be permanently deleted. But lots of the reports were photos that were deleted years ago, so they weren't in the recently deleted album.
So yeah, gives the impression that you're cutting ties with the data, but not that it's going to appear back in your photo albums. But until Apple says what happened, we won't know.
We probably won't know, because they never say. But they have released a fix.
So they said iOS 17.5.1 addresses a rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted.
So never trust that deleted items are actually gone forever.
And huzzah, there is a fab new lucrative opportunity for you.
So you arrange a little chit-chat, because you know what's going on. And they say to you, Anna, or to you, Graham, that they want to license your voice to become the chat assistant voice for ChatGPT's AI bot.
And this is where we covered the whole fiasco of Sam Altman being ousted by the OpenAI board, the dudes behind ChatGPT. This is the company that is heavily, heavily backed by Microsoft.
And at the time, the fight was all about whether Sam Altman was moving a bit too fast and furiously and maybe not considering the potential fallout of AI commoditization.
But weirdly, Altman was quickly brought back in, right? And OpenAI carried on as though nothing ever happened. That's my memory of it. It was crazy times.
So if Sam Altman calls you up, you guys are like, "Yeah, yeah, we're in." Well, actually, you don't agree right away.
You might kind of go, "Tell me more, please." And they say they want to use your voice because it sounds soothing, right? Sultry. Maybe, in Graham's case, like the Easter mouse.
Here's your voice. Because this is what happened to sultry-voiced Scarlett Johansson.
And she just issued a statement last week saying that she, 9 months ago, was approached by Sam Altman, right?
This was back in November, saying he wanted her voice to represent his new ChatGPT assistant voice. And he said her voice would be comforting to people. But she declined the offer.
And in the demo, the AI bot chatted in real time, adding emotion specifically more drama to its voice as requested.
But weirdly, all this, all this going on, it sounded an awful lot like Scarlett Johansson, the person who was approached but declined the offer of having her voice used in this way.
The other thing is CBC reported that many people commented during the demo on the strangely flirtatious moments that arose, which would not be expected in this instance.
So maybe you guys can try this out because this is what happened. Let's see if you can do this in a sultry way. Okay.
Is that a cock-up, Graham?
Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference." Oh dear.
If I want, and if she doesn't want it, I'll just do it anyway, because what's the— Yeah. What's going to happen?
A reference to the film Her, in which she voices the AI chat system.
What I'm finding amazing about this is that only one week has passed since the chatbot was demoed.
Johansson said that as a result of OpenAI's action, she was forced to hire legal counsel, who wrote two letters to Altman and OpenAI setting out what they had done and asking them to detail the exact process by which they created the Sky voice.
That's what they call this AI voice: Sky. And guess what? Rather than explain, Altman and OpenAI decided to suspend the Sky voice. The one that sounds a lot like Scarlett Johansson.
You still used my voice. Yeah. Without permission, you asked and I told you no, but you went ahead and thought you could do this anyway. So I would think good for her.
And one of the paragraphs says, okay, "We believe that AI voices should not deliberately mimic a celebrity's distinctive voice.
Sky's voice is not an imitation of Scarlett Johansson's, but belongs to a different professional actress using her own natural speaking voice.
To protect her privacy, we cannot share the name of our voice talent." Of course.
He asked her, she said no, he said screw it, we're just gonna go ahead because it's going to get us loads of press coverage, and if she kicks up a fuss then that gives us more press coverage.
And look at here, they might even get on Smashing Security.
Transcendental, man.
If a security software company said they could help you reduce the permissions attack surface in your cloud by 92% with the click of a button, what would you say?
Sonrai Security just made achieving least privilege easy with the Cloud Permissions Firewall, a scalable solution that easily restricts excessive permissions from human and machine identities, quarantines unused identities, and disables unused regions and services without any disruptions.
Even better, the solution maintains this level of risk reduction by automatically enforcing least privilege policies as new identities are added to the environment. What's better?
The fact that you can test drive the Sonrai Cloud Permissions Firewall for free for 14 days. Go to smashingsecurity.com/sonrai. That's S-O-N-R-A-I.
And thanks to Sonrai Security for sponsoring the show.
Kolide Device Trust helps companies with Okta ensure that only known and secure devices can access their data, and that's what they're still doing, but now as part of 1Password.
So if you've got Okta and you've been meaning to check out Kolide, now's a great time.
Kolide comes with a library of pre-built device posture checks, and you can write your own custom checks for just about anything you can think of.
Plus, you can use Kolide on devices without MDM, like your Linux fleet, contractor devices, and every BYOD phone and laptop in your company.
Now that Kolide is part of 1Password, it's only going to get better. Check it out at kolide.com/smashing to learn more and watch the demo today. That's K-O-L-I-D-E dot com/smashing.
And thanks to Kolide for supporting the show.
However, this process is often time-intensive and costly. Vanta automates up to 90% of compliance work, getting you audit ready quickly and saving you up to 85% of associated costs.
And Vanta scales with your business with a market-leading trust management platform to help you continuously monitor compliance, unify risk management, and streamline security reviews.
Join 7,000 global companies like Atlassian, Flow Health, and Quora that use Vanta to build trust and prove security in real time. Watch Vanta's on-demand demo at vanta.com/smashing.
That's vanta.com/smashing. And thanks to Vanta for sponsoring the show.
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app. Whatever they wish.
It doesn't have to be security-related necessarily.
And it stars Aidan Gillen, who you may remember from Queer as Folk back in the day, and Charlie Cox, who some people may know as Daredevil, and Claire Dunne.
And it is utterly binge-worthy, unless you have kids. Because you are watching the machinations amongst a family of criminals as they jockey for position and power.
In a struggle with another gang of criminals.
And apparently, it is actually inspired by a real-life and ongoing feud happening in Ireland between two criminal gangs, which I won't name in this podcast, because I don't want them showing up on my doorstep.
That has resulted in multiple deaths. I think it's very well written and directed. Brilliantly acted. Wonderfully violent.
So if you fancy something a little bit adult— You may want to plot out on a piece of graph paper the family tree, though.
I found it very difficult working out who was a brother of who, and who was a son, and who's the uncle, and hang on, how are they related? So, get your pen and paper out.
But really, really good. Really enjoyed it. Kin, which is on BBC iPlayer if you can't find it elsewhere. Anna, what's your pick of the week?
It looks like a massive porthole and a video camera which broadcasts a live stream from one side to the other.
So people in Dublin can see people in New York, and people in New York see people in Dublin. They can interact, they can make faces.
And I've got friends in New York, and if the portal was nearer to me than Dublin, I'd definitely give it a go. I'm not going to go to Dublin for it.
However, we can't have nice things, so a man mooned the camera. Someone took drugs on camera, an OnlyFans model flashed the portal, and it was closed down.
They've implemented more safety features, so if you hold a phone up, or if you step onto the exhibit, it will blur the screen.
But anyway, I think it's a nice thing, and I think people shouldn't ruin nice things. So I'm glad it's back.
So I come home after, I don't know, it's probably about 7 o'clock on Sunday, happy but so exhausted from the day. But the smells from the kitchen were out of this world.
He made me chicken rendang. It's slow-cooked chicken and braised in coconut milk and full of spices and herbs.
And you've gotta make it the day before and let it sit and all this stuff. So good. It was so good. Mind-blowing. So I was, you know, where'd you get this? And he was like, Felicity Cloake.
She writes a food column called How to Cook the Perfect... for The Guardian.
I'm sure I've mentioned her on this in Pick of the Weeks in previous years, but she's a fantastic resource for those of you to kind of go, I want to try and make this, but it's kind of your first time doing a dish.
Because she'll kind of go through about 5 different approaches to how people make it, and then she'll take the best bits for her and why she's taking it, and then put it into a whole new recipe.
We have a cloud security expert in the hot seat, and this guy knows his security onions. Sandy Bird. Sandy is the co-founder and the CTO at Sonrai Security.
This is the company that helps you protect your data by securing all the stuff in the cloud. But these guys take a different approach and we are gonna find out just how it works.
But first, welcome to Smashing Security, Sandy.
I spent the early parts of my career doing analytics on log data to find, you know, odd patterns and threats and things of that nature.
But as I moved along my career path, I spent a lot of time, especially when I was at IBM after they acquired Q1 Labs, looking at all of the aspects of security, identity security, application security, all of these things.
And one of the things that I thought was so intriguing about cloud and our transition to cloud was that most of the controls were identity-based in terms of keeping the good guys in and keeping the bad guys out from that perspective.
Sometimes in cloud, it gets a little gray as to where the authentication is happening and exactly how it's happening, especially when you have things like resource policies.
Absolutely, it's the key thing that controls that whole world. And so I just found it really intriguing.
I thought we could do a better job for the first time in cloud than we maybe did in enterprise, and that's what started Sonrai Security on a better model for access.
So can you tell us a little bit about that?
I had this hypothesis probably 4 years ago when we started Sonrai that because we had all of the audit data for every identity and what it was doing in the cloud, and we had a representation of all of the access policies and role assignments and things depending on which cloud you're in—they're called different things, but basically the mapping of the permissions to the identities—we could correlate the two together and get a perfect picture of what it should look like, and then we could basically correct it all and make it perfect.
And after spending 4 years of my life trying to get people to do that, I realized there's a couple fundamental flaws in it.
One is that at the scale of cloud, you have many different teams building apps for different purposes, and the centralized—they call them different things—cloud infrastructure team, cloud ops, cloud, they have different names, but the central team that governs that whole cloud infrastructure didn't have control of the development teams and what they were doing.
So they were basically, you know, they could build the perfect policy, they could put it in a Jira ticket and say, you have to go fix this.
But if the team didn't do it, it just never got corrected.
And we would see customers that after a year had just not corrected very many of these kind of least privileged problems that they had.
And we had this—this is a good story, Carole. I had this one very successful customer: they had 2,000 of them that they fixed in a 10-month period.
But I think we measured it—I think they had more than 2,000 new identities at the end of the 10 months. So you know what I mean? They're just falling further behind.
I think a lot of companies must be in that position because I suspect not all IT people are extremely au fait with actually tackling these things on their own, right?
They may be tasked with the job, given no resources to actually do it, and that can lead to you chasing your tail a little bit, do you think?
It doesn't really come last, but the reality is there's a lot of different priorities for these teams, and their goal is to get stuff out the door and be innovative, and they should be goaled that way.
And so we started to do some measurements of this, and I think one of the key things—you kind of say different teams—the longer people were in cloud, the worse it got.
And we kind of did this interesting data report where we saw that.
But even if you were in cloud for 5 years, say, if you were there for 5 years, the amount of, we'll call it cyber litter in your cloud, you know, identities that are there that have permissions that haven't been used in 2 years, that number just grows and grows and grows.
And then the same thing with these excessive permissions, right?
People give them, they didn't know how to get the workload to work, so they gave it star permissions, but then they don't use it anymore.
And 2 years later, it's still sitting there. And these numbers just kept getting bigger and bigger and bigger the longer people were in cloud. So yeah, big problem.
You give them too many permissions. They have access to stuff they shouldn't have access to. And then to kind of clean that up is a bit like trying to clean out my email at the moment.
You know, I just look the other way.
Here's a perfect list of the 10,000 identities that haven't been used because it's a mix of people and workload identities, right? So you have both sides.
And so it was this very large list and no one would do it. And the thing is, it was perfectly automated, could be done in a minute, you know, but no one would do it.
And you say, well, why won't you do this? Well, I'm afraid. You've got it: the fear, the fear comes in. There are so many examples they give us, Carole.
Sometimes it's, as you say, those people have left the company. I don't know how that thing's configured.
And if we take it away and someone asks me to put it back, I can never put it back. I don't know how to do that, right? Or, you know, that's a break glass account.
We don't want to delete that one. Okay, well, do you have a list of all your break glass accounts? I don't think we do. You know, and so there's lots of excuses.
And so we really had to invent a different way, which is really what we're into now, which is let's not try to make everything perfect.
Let's get people to a great state, automate it, do it super quick, and then have a way to get the permissions back if you need them.
So first question would be, what question were you really hoping to answer in putting out this research?
And grant it back to the things that did. And so we needed to measure that to understand what that gap was.
The second part of that though was step 2, which was, okay, well, if a new workload gets built tomorrow or something that's old needs it back, you want to wake up one of those old identities.
How many times would that happen to an average team? And so we needed to measure that.
So we, because we did that, we took a large set of our public cloud customers and we kind of did these overall statistics across them so that we could get a bunch of these numbers so we could give people confidence that, you can do this, it solves a massive gap, but the burden on the team on the next day will be low.
And so that's why we built the report. Some of the numbers that came out of it are kind of surprising. They're higher than I would have thought of.
I think you can divide them between things that I assumed that were correct and things that I assumed that are wrong.
The whole point of these things is to build workloads that do amazing things and build products on top of. So they should have a lot of workload identities.
And on average, you would see that split, you know, 20/80: 20% people, 80% workload identities. And I think that's a good mix.
One is these sensitive permissions that are granted to an identity that are not used, and then identities which are not used at all.
So across all identities, 92% of the identities sitting in the cloud have at least one of these very sensitive permissions, which it's not using.
You know, we always joke that we give humans too many credentials. Well, apparently we give the workload identities way too many credentials.
They're way worse than the humans are in this particular case.
You know, these apps exist and they run on these public cloud infrastructures.
You know, somebody's gonna book a ride, we gotta put a record in a database so we can schedule their driver.
But the developer probably couldn't get it to work the way that they wanted to.
And if they were using a cloud service for that database, BigQuery and GCP or DynamoDB and Amazon, maybe they just gave it the star permission for the entire database.
And then they try it and it worked and they're like, great, it works. I'm gonna move on to my next thing. Right.
But they didn't need all of the, they didn't need to delete the database, they didn't need to create a new database, they didn't need to do all those things, but they gave it too many permissions.
And so when an attacker gets ahold of that, well, they can destroy your world, they can ransomware the data, they can do all these types of things.
And it's these workload identities that are so over-permissioned that way.
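Sandy's ride-booking example translates into IAM policy terms roughly like this. The action names are real AWS DynamoDB actions, but the table name, account number, and the exact split of granted-versus-needed permissions are illustrative assumptions:

```python
# The shortcut the hurried developer takes: star permissions,
# i.e. full control of every DynamoDB table and operation.
too_broad = {
    "Effect": "Allow",
    "Action": "dynamodb:*",
    "Resource": "*",
}

# What the workload in the example actually needed: read and write
# ride records in one table. No delete-table, no create-table.
least_privilege = {
    "Effect": "Allow",
    "Action": ["dynamodb:PutItem", "dynamodb:GetItem"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/rides",
}
```

An attacker who steals credentials carrying the first statement can drop or exfiltrate every table; with the second, the blast radius is one table's items.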
And if he's— yeah, and I just go, don't worry about it.
Anything like that in the report where you thought, look, you know, what's frustrating you in terms of what IT people aren't able to do or aren't doing yet that would help them immensely?
So I go back to this unused example, and when you split that report between machine and humans, we find out that 12% of the human identities that are in the cloud are unused.
Well, that means, you know, there's almost 90% that are used. And so that's not perfect, but it's pretty good.
And what that probably means is we're handling what we always call the joiner, mover, leaver problem in humans.
When they join the company, when they move groups, when they leave the company, we're probably doing an okay job of their permissions, at least in the cloud.
And when they're not supposed to be there anymore, we're removing them. And that's the reason you don't have that many unused human identities.
But the non-people identities, this is one of the highest statistics in the report: something like 88% of the completely unused identities were these machine identities.
And what it probably means is there's no process in companies to clean this stuff up. You know, it's not—
And so I think that's why there's so much of that kind of litter that gets left over.
You know, how do I go about it? What would be their first steps to take?
There are centralized controls in all three clouds where you can actually cut off the craziest permissions that don't need to be there.
You know, I joke Azure has this really interesting permission that allows you to take a disk volume on a running virtual machine and make it a public URL on the internet.
I don't know why you would ever want to do that, but the permission exists.
If you don't want pre-signed URLs of your analytics workloads and machine learning workloads, let's block those centrally. And the clouds all have ways to do that.
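In AWS terms, the central cut-off Sandy describes can be done with a Service Control Policy. This is a hedged sketch, not a recommendation: the action names are real, but which ones are safe to deny depends entirely on your environment.

```python
import json

# Sketch of an AWS Service Control Policy denying permissions that
# can expose data publicly, org-wide, regardless of what any
# individual team's own IAM policies allow.
deny_public_exposure = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicExposure",
            "Effect": "Deny",
            "Action": [
                "ec2:ModifySnapshotAttribute",  # can share snapshots publicly
                "s3:PutBucketAcl",              # can open buckets to the world
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(deny_public_exposure, indent=2))
```

A policy like this would be attached through AWS Organizations; GCP's organization policies and Azure Policy play the equivalent central-guardrail role Sandy alludes to.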
The other thing is that all of the clouds have inventories where you can see how long it's been since identities have been used.
You should be looking through that once in a while and cleaning that up and having a process for that.
Have a hack day for 4 hours one morning where you just get the team in a room and say, we're going to look at our one account here and we're going to look at 50 identities and remove those if they're not used.
And that's a good way to start if you don't have tooling to help with this stuff.
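That hack-day audit can be partly scripted. A sketch using boto3 (assumptions: boto3 installed and AWS credentials configured; the two-year threshold mirrors Sandy's "cyber litter" example; for brevity this checks console sign-ins only, not access keys):

```python
from datetime import datetime, timedelta, timezone

STALE = timedelta(days=730)  # ~2 years of disuse, per the example

def is_stale(last_used, now=None, threshold=STALE):
    """An identity is stale if it was never used, or not used recently."""
    now = now or datetime.now(timezone.utc)
    return last_used is None or now - last_used > threshold

def find_stale_console_users():
    import boto3  # assumes boto3 is installed and credentials configured
    iam = boto3.client("iam")
    stale = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            # PasswordLastUsed is absent for users who never signed in.
            if is_stale(user.get("PasswordLastUsed")):
                stale.append(user["UserName"])
    return stale
```

A fuller audit would also check access keys (`get_access_key_last_used`) and, for the machine identities Sandy highlights, the `RoleLastUsed` data returned by `get_role`.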
If you're the type of company that can afford tooling, then you look at something like Sonrai Security, and we can automate huge amounts of this.
We can divvy the work out to the teams. We can cut these things off with this cloud permissions firewall really quickly. There's ways to do it at scale and easier.
But even if you don't, you're a small company, you're a startup, there's still something you can do just by actually taking a look at those identities that are unused and centrally blocking the things that you really don't want to happen in your cloud.
And if it's something you're interested in trying to clean up, we've got some links to our website and ways people can do free trials of some of the tools that help manage this.
You can read Sonrai's latest research and try its Cloud Permissions Firewall for free, 'cause you never know, you might be leaving a bit too much on show.
So go to smashingsecurity.com/sonrai. That's S-O-N-R-A-I. And Sandy Bird, CTO and co-founder of Sonrai Security, what a pleasure. Thank you for coming on the show. Thanks, Carole.
What's the best way for folks to do that?
Follow Smashing Security in your favorite podcast apps such as Apple Podcasts, Spotify, and Pocket Casts.
For episode show notes, sponsorship info, guest lists, and the entire back catalog of more than 372 episodes, check out smashingsecurity.com.
Hosts: Graham Cluley and Carole Theriault
Guest: Anna Brading – @annabrading
Episode links:
- When NASA Lost a Spacecraft Due to a Metric Math Mistake – Simscale.
- The worst sales promotion in history – The Hustle.
- Nonconsensual AI Porn Maker Accidentally Leaks His Customers’ Emails – 404 Media.
- UK’s Ministry of Defence fined after Bcc email blinder that put the lives of Afghan citizens at risk – Hot for Security.
- £200,000 fine for exposing possible child abuse victims in classic Cc/Bcc email blunder – Graham Cluley.
- Apple’s Photo Bug Exposes the Myth of ‘Deleted’ – Wired.
- OpenAI Voice Scandal: Sky’s Fall From Grace – YouTube.
- How the voices for ChatGPT were chosen – OpenAI.
- As AI becomes more human-like, experts warn users must think more critically about its responses – CBC News.
- What We Lose When ChatGPT Sounds Like Scarlett Johansson – The New York Times.
- Scarlett Johansson’s Statement About Her Interactions With Sam Altman – The New York Times.
- Kin TV series – Wikipedia.
- Portal connecting Dublin and New York ‘reawakens’ under new restrictions after ‘inappropriate behaviour’ – Sky News.
- How to cook the perfect chicken rendang – recipe – The Guardian.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
Sponsored by:
- Sonrai’s Cloud Permissions Firewall – A one-click solution to least privilege without disrupting DevOps. Start a 14 day free trial now!
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get 10% off!
- Kolide – Kolide ensures that if your device isn’t secure it can’t access your cloud apps. It’s Device Trust for Okta. Watch the demo today!
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Become a supporter via Patreon or Apple Podcasts for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky at @smashingsecurity.com, or on Mastodon, on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.

