
Confusion reigns after claims that data linked to 17.5 million Instagram accounts is up for sale – sparked by a vague post, contradictory statements, and a flood of password reset emails nobody asked for.
And we dig into Grok, Elon Musk’s AI chatbot, after it started generating sexualised images of women and children – raising uncomfortable questions about guardrails, accountability, and why playing the censorship card doesn’t make the problem go away.
All this, and much more, in episode 450 of the “Smashing Security” podcast with Graham Cluley, and special guest Monica Verma.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
From Instagram panic to Grok gone wild with Graham Cluley and special guest Monica Verma.
Thank you so much for joining us today.
Monica, if there's anyone listening to Smashing Security who hasn't encountered you before, can you quickly sum up who you are and what you do?
I am still a hacker because it's a mindset more than just— yeah, absolutely, yes. I truly believe that.
And so one of my colleagues and I, we were invited to actually hack trains, to really hack the logic. So it was really, really fun.
This week on Smashing Security, we won't be talking about how pro-trans activists brought down a right-wing group's website and leaked the names of their donors.
You'll hear no discussion of how a man has been charged after he was allegedly hired to hack the Snapchat account of female athletes.
And we won't even mention how a hacker has leaked the database of well-known cybercrime forum Breach Forums, exposing the details of hundreds of thousands of people.
So Monica, what are you going to be talking about this week?
All this and much more coming up on this episode of Smashing Security. Well, let's take a moment now to thank one of this week's sponsors, Meter.
Now, if you've ever worked in IT and especially networking, you'll know when the network's working, nobody notices. When it isn't, everybody notices.
The problem is that most business networks are a mess of different providers, tools, dashboards, contracts, and crossed fingers.
And somehow, despite all that complexity, they're expected to be fast, secure, reliable, and magically fix themselves. And that's where Meter comes in.
Meter builds networks from the ground up. They deliver a complete full-stack networking solution, wired, wireless, and cellular, all as one integrated service.
And this is genuinely full stack. Meter designs the hardware, writes the firmware, builds the software, manages the deployment, and runs the support.
They even take care of things like ISP procurement, routing, switching, firewalls, VPNs, DNS security, SD-WAN, and multi-site networking.
In other words, fewer vendors, fewer dashboards, fewer "who owns this problem" conversations, and far fewer late night panic attacks.
Meter's approach is about real control, proper visibility, and networks that behave themselves.
And for IT leadership, it means something almost mythical in networking: predictability. If you are responsible for keeping the business online, you really should check out Meter.
So go to meter.com/smashing to book a demo now. That's M-E-T-E-R, meter.com/smashing, and thanks to Meter for supporting the show.
Now, chums, in recent days, we have witnessed a masterclass in corporate communications. And by masterclass, what I really mean, of course, is a complete and utter shambles.
So we've seen some shambles before, of course. Way back in mid-2024, CrowdStrike, they pushed out a dodgy update, didn't they?
The update caused millions of Windows computers to show the Blue Screen of Death. Flights were cancelled, hospitals weren't able to look up their records. It caused mayhem, didn't it?
And one of the craziest things that happened in that incident, there was not just an uproar, nothing was working, flights were cancelled, people were stranded.
But people were debating whether it was an IT incident or whether it should be classified as a security incident. Should we even be talking about it in the cybersecurity industry or not?
Which to me was very interesting. I'm like, what do you mean? IT is a part of security, you know, you talk about people, tech, and processes.
Tech is one-third of that, so why would we not be talking about it?
And then there was Facebook. They accidentally disconnected their data centres from the internet in October '21, causing mayhem not only to Facebook, but also to Instagram.
And that meant that employees also couldn't get into their buildings to fix it because apparently the door access systems ran on Facebook's own network and they had to go and grab some angle grinders to get into the building to go and sort out their systems.
So there are huge shambles, huge cock-ups which happen. And this week, well, it's not an omni-shambles of such epic proportions, but still far from ideal.
So let me tell you what's been happening in the last few days.
And it all started when antivirus outfit Malwarebytes posted on BlueSky that cybercriminals had stolen sensitive data related to 17.5 million Instagram accounts.
We're talking usernames, addresses, phone numbers, the full caboodle.
That was their whole post. It was alongside an image of an email from Instagram.
Just 17.5 million accounts compromised, data for sale, good luck, everybody.
And at the same time as this was going on, people were flooding onto Reddit wondering why they had received a barrage of Instagram password reset emails that they had not requested.
Well, Instagram, of course, had to respond to this. And so they hopped onto Twitter. Not Instagram, not Threads.
But they announced that they had fixed an issue that let an external party request password reset emails for some people. And they gave some advice.
Instagram said, you can ignore those emails.
So imagine you're on a jumbo jet and the pilot comes over the tannoy and he cheerily says, 'Oh, just ignore that wing falling off. Sorry for any confusion.
You can ignore that.' People obviously are going to panic. They're thinking, 'What do you mean? What do you mean? What's happened?' Right? You would, understandably, wouldn't you?
BreachForums, right, is the marketplace where this data is apparently being sold. It's a cybercriminal site. We've talked about it many times on this podcast.
That person who's selling the data claims the data comes from an API leak back in 2024.
Now, some observers reckon that Malwarebytes mentioned this 2024 connection in an email to their paying customers, but it wasn't in their public BlueSky post.
So we've got breadcrumbs of information scattered across multiple sources. We've got Reddit, we've got private emails from Malwarebytes to their customers.
We've got public posts from Malwarebytes. We've got Instagram's Twitter post as well.
All of these things, none of which are quite matching up because Instagram is saying there hasn't been a breach.
But if you notice the careful wording they use, they say there was no breach of our systems.
They're not saying there has never been a breach of our systems or this data isn't legitimate.
They're just saying this specific incident with the password reset emails wasn't a breach.
And that rather conveniently sidesteps the question of whether there was a breach, say, back in 2024.
And in all the studies that I've seen over the last 20 years, I may say, almost always financial gain is the number one motivation, followed usually by political reasons.
So activism. So I feel it's important for us to understand not only what data is being leaked, but what it is being used for. And we know most of the time it's financial gain.
But do you know if Malwarebytes provided any kind of information on that? Because I know attribution is very difficult, but motivation usually isn't.
And this is frustrating, obviously, but I'm also frustrated by Instagram's response as well.
They're just saying, well, it wasn't a breach. Well, it sounds like it was some kind of security breach.
If someone was able to gain that ability, it may be that no data was exfiltrated as a result of this. We don't know. But all they're saying is, your accounts are secure now.
It's a bit like saying, "I'm not burgling your house" while you're carrying a TV set down the drive, right? It's technically accurate.
Yes, you're not burgling the house, not anymore, but it's not exactly reassuring, is it?
So I would hope for both the initial reporting of an incident to be more thorough and also for the response from the organization which is trying to explain what happened to properly represent what occurred.
I mean, Malwarebytes should have given more information and definitely not put it behind the paywall. That's sad for something so important.
We need to put together some quick snappy post which is going to go viral. We'll add an image to it as well. We're chucking it out there.
People talk a lot about setting up the war room, setting up the bridge, all the technical stuff that needs to happen, all the analysis, the forensics, and all of that is true.
All of that has to happen. But anytime an incident happens, anytime there's a breach.
And I say that from experience, having been responsible for communication from organization's perspective to our customers when things go wrong. Oh gosh, it is so important.
Whenever you have something like that, you gotta give them context. What actually happened, right? What actually happened? How did we get here? That's the first thing that I'll tell them.
How did we get here? What does it mean for you? That's another thing, by the way.
Because there's one thing of what it means for general public information of whatever happened, whatever hackers are doing or whatever, right?
But what does it precisely mean for you now? What are the steps that they as a customer need to take now in order to help? And how are you helping them take those steps, right?
So I think this clarity of communication is necessary for something so crucial as a 17.5-million-account data breach. I think it's so underplayed. It is so bizarre.
It's like being an ostrich: just because you don't want to face something, you put your head in the sand. That's basically what they're saying the customers should do.
That's like Burger King announcing a food safety update via a press release stapled to a McDonald's drive-through menu. How weird is that?
So the normal advice is that if you receive an unexpected password reset request, ignore it.
It's probably someone either phishing you or, you know, trying their luck to break into your account. If you ignore it, you should be all right.
But Instagram users, they're now playing a game of password reset roulette. So they'll be asking themselves, is this email a legitimate reset that they requested?
Is it a legitimate reset that Instagram systems accidentally sent because of an issue?
Or is it an actual phishing attempt from cybercriminals who bought all your details off the dark web?
Three possibilities, identical appearance in your inbox, no way to tell them apart. And Instagram's official guidance is just: ignore them all.
I don't know about you in all your years as a CISO and so forth, Monica, I don't know if you have an inflatable cricket bat, but I think it's an essential part of the cybersecurity arsenal.
You need an inflatable cricket bat which you can bop people over the back of the head with.
So I would give Malwarebytes a bop on the back of the head for their social media post, because shame on them for dropping a cybersecurity bombshell with zero context.
But also naughty old Instagram, bop, for issuing a terse denial that technically answered nothing.
And meanwhile, we've got 17 million users' data allegedly for sale, Reddit threads full of confused people wondering if they've been hacked, if they are being hacked, and everyone's telling slightly different versions of this story.
It's a mess. It's a mess. Okay, before we go any further, I need to share a quick word with you about one of our sponsors today, Vanta.
You know how everyone's got an AI assistant these days? Well, imagine one that doesn't just write haikus about zero-day vulnerabilities, but actually does your audit work for you.
That is Vanta. It connects to all of your tools, gathers evidence, tracks compliance, and quietly helps you prove that yes, you do take security seriously.
Vanta automates all of that.
It pulls everything together, keeps an eye on your systems, and basically makes sure you're ready for an audit at any time, which means no last-minute panic for screenshots and policies.
It also plugs into the tools you're already using and flags up issues before they become a right old mess.
So if that sounds like something that might save you from a few sleepless nights, check out vanta.com/smashing. And if you use that link, you'll get $1,000 off.
So don't forget, vanta.com/smashing, and thanks to Vanta for sponsoring this week's episode. On with the show. Monica, what have you got for us this week?
I remember doing a keynote a couple of months ago when the deepfake of Catherine Connolly came out, who ran for the presidential election for Ireland.
And that happened just two days before the presidential election, right?
I was talking about this study that showed while financial gain is the number one motivation behind deepfakes, the second of the top three is electioneering, changing elections.
But I think deepfake goes even further.
So over the last few weeks, there have actually been investigations by the Australian authorities into Grok, because it seems that Grok has been really great, and sadly so, really great at creating nude and sexualized images of women just because it was prompted to by some users.
So this is not consented to by those women, and it has created such images of kids as well.
I was reading about this, and obviously this is not the only story that has happened since deepfake has come into existence.
But the fact that you can just prompt a very powerful AI, xAI's Grok, publicly on the platform of X, and just immediately get sexualized nude images of people, that is just insanity.
And what's interesting is when this happened, Grok itself, the AI, released a statement. This is not a human being, mind you. It is Grok.
It apologizes for creating sexual and nude images of women and kids.
That's the way they handle the press. So of course Grok has to be the thing which actually responds to complaints.
The important thing that I want to highlight here for the audience and for the people listening to this is that Grok has no apologetic feelings, right?
It's not sentient, so it's not really apologizing, right? That's something we have to understand first, differentiate the intent versus the actuality, right?
The words versus actually the intention behind it. There is no intention of actually apologizing because it doesn't feel apologetic because it's a fucking machine.
Oh, sorry about the F word.
"This is just an excuse for censorship." That is what Elon Musk comes back with.
So I guess, I don't know, a poop emoji would be better. Or he is completely failing to understand the fact that this is not about censorship.
Like, how in the world can this be about censorship, right?
So it feels like Elon Musk is much more amused about it than maybe everybody else is. I mean, some awful things have happened.
As you say, there have been sexualized images which have been posted of both women and children.
We have known now for months and years that Elon wants an anti-woke AI that doesn't shy away from politically incorrect answers, including things like creating sexualized images of women or kids without their consent.
And in his response, he didn't just say that this was an excuse for censorship. He put the Grok that creates images behind a paywall, which doesn't solve the problem at all.
You're basically providing it as a premium service, basically, is what he's doing.
You can't do it via Twitter, or X as he calls it, but you can go to the Grok website and use the app, I believe, to still do this even if you aren't a paying customer.
But you are absolutely right. In some ways, this is now being used really as an encouragement for people to pay for a premium service.
Here's one of the features we can offer you: the ability to create illegal images, or sexualized images of people without their consent.
And so, of course, all this brouhaha in the press, and quite rightly, people have been up in arms about this, in some ways will have fed the demand for this kind of functionality, because people who want that kind of thing will now know where to go, and they know to pay Elon Musk to access it.
And I cannot understand how if anyone else were creating illegal content, the police would be going round and arresting them.
Some countries, including Malaysia and Indonesia, are already blocking access to the tools, which is great.
And maybe we'll see more countries doing that, temporarily at least, in the future.
But I also feel we need to ask three questions, three questions that we should be asking and holding Elon Musk to them. One is guardrails.
This has been constantly a problem with AI prompts and AI in general, but especially with Grok.
This was an example I remember talking about in one of the keynotes I did a couple of months ago, where Elon Musk had intentionally changed Grok's newest version to allow it to provide politically incorrect answers.
And because of that, Grok started praising Hitler and called itself MechaHitler. And I think these are not one-off incidents, right?
My question is, why have we not learned who is ultimately responsible for doing that? So first question is the guardrails that we need. We absolutely need those guardrails.
My biggest problem is when people talk about guardrails, they think immediately regulations, and I'm saying, no, I'm not talking about regulations to stop innovation.
What I'm talking about is actual guardrails to innovate safely in a way that it doesn't harm humanity. We absolutely need guardrails.
Second question we need to be asking them is accountability, because the buck doesn't stop with the robot.
I don't care if Grok actually apologizes, because if the buck stops there, then actually nobody's held accountable. Third is consent.
Consent has been such a big question in our community, in our society in general. Now, especially with digital tools like these, how are we making sure of that consent?
And all of these questions have to be asked to these big corporations that are now holding the entire power to what AI is doing, how it is being built, what guardrails are in place.
And therefore, we should be putting pressure on the companies which advertise on these services and saying, do you really want your ads appearing alongside sexualised images of women, of young children?
Do you really want that? Images of people who have not consented to this, or which are outright illegal. Do you really want to be there?
And we should also be asking of our governments, what on earth are you doing?
It could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app. Whatever they wish.
It doesn't have to be security-related necessarily. Well, my pick of the week this week is not security-related. My pick of the week is, well, I suppose it's a podcast.
It's a radio show. It recently celebrated its 25th birthday. I couldn't believe that it has been going for so long.
It's been produced by the BBC since the year 2000, and it is called Soul Music. And I rather love this show.
So, each episode, which is round about half an hour long, they will choose a particular piece of music and they will tell the story of that piece of music with the voices of individuals, members of the public, sometimes musicians as well, talking about their emotional connection to that piece of music.
So, there's no presenter on the show. It is just a sort of sound collage of different people with their stories coming through.
And many of these stories have a real powerful emotional impact.
For instance, you'll hear stories of people whose lives have been changed, or the meaning that exists in their heart, when they listen to Joan Baez singing Diamonds and Rust, or Killing Me Softly, or Leonard Cohen's So Long, Marianne.
And I'm a bit of an old softie, I'll be honest with you, Monica.
I love music and I love hearing about people's really heart-touching connection with different pieces of music, even if the piece of music doesn't mean very much to me.
Their most recent episode was about the Coldplay song Yellow, for instance.
And one of the stories which I heard in that recent episode was about a guy who was close to death and he was having CPR and he ended up in a coma.
And it was only because his partner played him Coldplay that he eventually began to show signs of recovery.
He went to a Coldplay concert holding up a banner saying, "Your music got me out of a coma," and Chris Martin got him up on stage. And you hear all of this happen during the course of the documentary.
So it's really touching stuff. They talk about Leonard Cohen's So Long, Marianne. You get to hear some of the people behind these songs, and it's just wonderful.
Despite a lot of ambitions and dreams and all the things that I get to do and I get the opportunity to do, my pick of the week is family, and I'll tell you why.
Over the last months, literally, I've been back-to-back traveling, helping organizations all over the world.
I think I traveled 4 continents, actually 5, over the last 4 months from September, October, November, December, doing maybe, I don't know, 7, 10 gigs all on different topics of AI, cyber, whatever, you name it.
And I feel privileged and honored that I get to do that.
And every now and then, I'm not a person who has to wait for a holiday to happen, but every now and then I love to just take a break from a lot of these things and then just spend quality time with family.
That to me is literally the pick of the week, because I've been reminiscing about that quite a lot.
Before the new year started, I've been working on revamping my whole newsletter.
The updated, rebranded version, which I call The Predictability Factor, was softly, quietly relaunched. And I'm gonna be announcing it to the world very soon.
But yeah, if you are listening to this, go check it out, The Predictability Factor. It's about building resilience and becoming resilient in the unpredictable world of AI.
But I love to take these times when I'm just offline, where I'm off the grid and I'm just spending quality time with family. And it's just so soothing for the soul.
Because ultimately, at the end of the day, even in the world of AI that we are living in, I truly, truly believe human connection and human relationships are it. They are it.
Nothing, no AI companion will ever come close to that. Go really spend time with the people that you love.
They may be 2, they may be 5, they don't have to be 100, but it will literally continue upgrading your life forever.
Because sometimes with some people, of course, they don't have great relationships with their family or they may not have family members. But you can create your own family.
If you want to mend things, because it's worth it, you get to decide. Ultimately, you get to choose to do that.
I'm going to be bringing so many amazing things there for everyone, how to become resilient in this unpredictable world of AI. Otherwise, reach me at monikatalkcyber.com.
That's one place where I put everything together. So yeah, check it out.
And don't forget, to ensure you never miss another episode, follow Smashing Security in your favorite podcast app, such as Apple Podcasts, Spotify, and Pocket Casts.
For episode show notes, sponsorship info, guest lists, and the entire back catalog of roundabout 450 episodes, check out smashingsecurity.com. Until next time, cheerio, bye-bye.
They include Shri Kumar, Karen Reynolds, Darryl Green—sounds like he should be narrating golf highlights—Vladimir Juracek, who must be absolutely ace at a game of Scrabble, Bashora, who's definitely not here to cause trouble, honest, Shan Puttick Panda Bear, still refusing to confirm their species, Matt H, with his economy class spelling, Geoff A, because one letter is all you really need, Alan Liska, Bobby Hendrix, who absolutely has opinions about guitar solos, and Billy, just Billy.
Would you like to hear your name read out from time to time at the end of the show? Well, all you have to do is sign up for Smashing Security Plus.
For as little as $5 a month, you can become part of our merry little band and get early access to episodes without the annoying ads.
Just head over to smashingsecurity.com/plus for all of the details. Now, I know not everyone can afford that, and that's absolutely fine. There's no pressure to become a patron.
You can do other things if you want to help support the show, which don't cost you anything.
For instance, you can leave us a lovely review, or you can tell your friends and pals about the show. Simply spreading the word really does help, and I really appreciate it.
So thank you once again for tuning in, and I hope you'll be tuning in again next week for the next episode of Smashing Security. Until then, cheerio, bye-bye.
Host: Graham Cluley
Guest: Monica Verma
Episode links:
- Free Speech Union website down after alleged funders exposed by trans hackers – Pink News.
- Illinois Man Charged in Snapchat Hacking Investigation – US Dept of Justice.
- Hackers get hacked, as BreachForums database is leaked – Hot for Security.
- Post by Malwarebytes – Bluesky.
- Post by Instagram – Twitter.
- Instagram denies breach amid claims of 17 million account data leak – Bleeping Computer.
- Ofcom asks X about reports its Grok AI makes sexualised images of children – BBC News.
- Musk’s Grok blocked by Indonesia, Malaysia over sexualized images in world first – CNN.
- Elon Musk shares AI images of Starmer in bikini in row over grim Grok deepfakes – Mirror.
- Soul Music – BBC Sounds.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff).
Sponsored by:
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
- Meter – Network infrastructure for the enterprise. Get a free personalised demo.
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Join Smashing Security PLUS for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky, or join us on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.

