
In episode 459 of Smashing Security, we dive into a chillingly clever account takeover attempt targeting WordPress co-founder Matt Mullenweg – involving MFA fatigue, real Apple alerts, a convincing support call, and a phishing page that oh-so-nearly worked. If this can happen to a famous techie, can you be sure you’re immune?
Plus: would you donate your lifetime medical history to science if you were promised anonymity? We unpack serious concerns around UK Biobank, where “de-identified” data may not be as anonymous as you think — and how surprisingly little information it takes to reveal everything.
And! Human-powered “AI”, and a punishment worse than prison: eight hours on the RSA expo floor…
All this, and much more, in episode 459 of the “Smashing Security” podcast with cybersecurity veteran and keynote speaker Graham Cluley, and special guest Paul Ducklin.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Legal experts at the SEC are calling the penalty proportionate and corrective. Former RSA attendees are calling it ransomware.
Hello, hello, and welcome to Smashing Security, episode 459. My name's Graham Cluley.
The white car, it's on the right.
We'll be hearing more about them later on the podcast.
This week on Smashing Security, we won't be talking about how a Doge employee stole Social Security data and put it on a USB drive.
You'll hear no discussion of how a foreign hacker is said to have broken into the FBI in 2023 and compromised the Epstein files.
And we won't even mention how a new font rendering trick can cause AI assistants to not spot malicious commands hidden in seemingly harmless HTML.
So, Duck, what are you going to be talking about this week?
All this and much more coming up on this episode of Smashing Security. Graham.
That's what we're talking about.
All you've got to do is paste in a news article. So it could be about a breaking threat or an internal policy update. It's all done. Multilingual, interactive in seconds.
I mean, he did do something extraordinary with WordPress.
He built something which is used by an astonishing proportion of the websites out there – something like 40% of the internet is using WordPress technology, either WordPress.com or the open-source equivalent, I believe.
He still managed to alienate almost everyone in the WordPress community over the years as well. He's a bit like Linus Torvalds. He can be a little bit prickly, I think sometimes.
He divides opinion. But he's a big cheese, isn't he? And turns out he uses Apple devices. So he's got an Apple Watch, he's got an iPhone, and he's got an Apple Mac.
And he also does something which not many people do with their Apple devices, which is that he has enabled lockdown mode.
That is an optional feature of Apple's operating systems, which means that you shouldn't laugh, Graham.
So lockdown mode, for anyone who doesn't know, it significantly restricts what your device can do, which is great news in terms of making it more secure, puts you at less risk, but it also makes your device really bloody difficult to actually use as a computing device.
So much so that Apple actually specifically does not recommend it.
They say, "This is designed for very few individuals." They would hate the vast majority of people to turn this feature on.
But if you are a journalist who's working on some geopolitical story, or if you've got super secret sources who are in countries where there are authoritarian regimes.
So don't knock snooker journalists for the risk that they may be under.
Anyway, despite having lockdown mode enabled, Matt Mullenweg was still almost completely conned by an attack recently, which he has written about on his blog.
So if you're sitting there thinking, well, this could never happen to me. I could never fall for a trick this. Just stay listening because maybe you could.
Here is what happened to Matt. One recent evening, he says, his Apple Watch, his iPhone, and his Mac all suddenly lit up with a message prompting him to reset his Apple ID password.
Right, this is the thing built into the operating system. It's popping up saying, you need to reset your password. And he says it came out of nowhere.
So he hadn't done anything to trigger it. And he's thinking, well, why am I getting this notification telling me that there's some kind of problem with this?
And what was happening was that somebody was hammering Apple's own legitimate password reset process. And you can do this.
You can go to Apple and you can say, look, this is my Apple ID. This is effectively my email address. I can no longer access my account. Please reset it for me.
And if you do that, Apple will send this notification to your devices, basically saying, do you want to reset? If you do, this is the process which we want you to go through.
So he was being battered by somebody who was probably hoping that eventually he'd get frustrated by all these hundreds of messages and just tap allow.
And this is a technique which is called, well, some people call it MFA bombing. It relies upon MFA fatigue. You must have heard about cases of this kind of thing happening, Duck.
Yeah, the idea that you'll get a warning, you'll get a warning, you'll go, that's not me, that's not me, that's not me, that's not me.
And then eventually you'll be at a low moment or you'll think, oh, well, maybe it is me. Or maybe you'll go in and fiddle with something and think, well, that must be mine.
And you click allow and everything goes quiet.
Or you think, well, maybe it is legitimate. I just want the problem to go away. I will press every button until I find the button which makes these things bloody well stop.
Instead, they took things up a notch.
So it turns out that the people who were trying to trick him into giving them access to his account contacted Apple support themselves, pretending to be Matt.
And because they were doing this all through Apple's actual real support channels, that interaction generated a real case ID number within Apple Support.
They're going to send notification emails to the people whose email addresses they have associated with that account, right?
So they're going to send notification messages, and that's what happened. So real Apple notification emails arrived in Matt's inbox.
And all of those messages, of course, were not phishing emails. They were properly signed from Apple's actual email servers with Apple's domain.
These weren't spoof emails, they weren't blocked by spam or anything else.
They're completely legitimate emails to Matt about a completely fraudulent request from the hackers to gain access to his account.
He gave Matt some genuinely sound advice, like you should check your account, make sure nothing has changed, think about updating your password, have you got two-factor authentication enabled?
And then Alexander says, "Look, okay, so what we're gonna do, clearly this was a bogus support request which came in." They said, "Clearly someone is trying to phish you." "So what we're gonna do is we're gonna clear this bogus support request which has come in.
We can cancel it.
What I'm gonna do," he said, "is I'm gonna text you a link and you can then confirm your identity and we will cancel the support request." So the link arrives via SMS.
Pointing to a URL at audit-apple.com.
He put up some screenshots of it on his blog entry as well.
And it displayed the exact case ID, the number which had been referred to in the real Apple emails sent to his inbox.
There was even a fake chat transcript shown on the page, a record of the scammer's own conversation with Apple, presented back to Matt as evidence that someone was attacking his account.
This is just because we're reaching out to that mobile number as of right now, and we can confirm you are the person that does have access to this mobile.
As I stated, we've initiated the cancellation request, but for it to be processed, it does require an original account holder or a legacy—
And when he did that, he got exactly the same results. So nothing was being validated. The whole thing was a sham.
He saw the same kind of page and he thought, well, hang on, you could enter anything here. And so he actually called Alexander's bluff. This is impressive.
So this is obviously phishing, right?
Yeah, I've done it with my own site as an experiment. 5 minutes later, I had a pixel-perfect, JavaScript-perfect clone of my own site. It was exactly the same code running.
And the only difference was when you filled in the form and clicked submit, it went somewhere else. And you could even set a believable decoy page to land on afterwards.
It may have been an AI which was clever enough to actually have the entire conversation with Matt, because there are some demos which ElevenLabs, for instance, have put out where you can be chatting to a support chatbot, which is remarkably convincing.
And it wouldn't be a surprise, maybe.
They're not going to call you out of the blue. Always check the URL.
If you receive a password reset prompt that you didn't request, then that should be a huge red flag. So approve nothing. Go to your settings yourself. Log in yourself.
Multifactor authentication, it definitely can help.
But of course, there are these sort of man-in-the-middle attacks, aren't there, where you can actually have the multifactor authentication token taken from you, and instantly the bad guys can use that token that you've entered to try and access your account.
Just in the last week or so, the guys at Signal, which is the encrypted messaging app, they've put out a warning that there are messages going around claiming to come from the Signal security support chatbot.
And it says, we've noticed suspicious activity on your device.
Don't tell it to anyone, it says, not even Signal employees. Just send it to this number when you receive it.
It basically becomes a second job, doesn't it?
There's even a hardware buyback program if you've already got kit from another vendor.
Although they have a CEO, and that is Professor Sir Rory Collins. We'll come back to him in a moment. They're associated with the academic medical research ecosystem.
And to be quite fair, the idea is that this is not something that you just get forced into.
You volunteer to hand over via this group all your medical data throughout your life, as much as you choose, up to and including everything.
So that they can anonymise it or de-identify it, as they call it, right?
And collect it together and make it available under apparently controlled circumstances to medical researchers who want to do long-term research.
They think, well, there's no privacy problem as far as I'm concerned, because you're going to be careful.
But thinking that this might help medical science, something like half a million people have volunteered to help this study of diseases and things.
Or maybe some of them are young enough that they haven't thought about how specific some of the conditions they might have in the future will be to them.
You kind of feel maybe I should give something back. I absolutely understand that. And you'll remember that time I had that automotive accident.
And to this day, all I have to show for it is some scars where the operations were done.
I could imagine, given the fact that I was in dire straits in the middle of nowhere and a helicopter descended from the sky and whisked me off to one of the premier teaching hospitals in the country and basically restored me to pretty much as good as new.
If somebody said, you know what, in your operation we use stainless steel screws to fit all the broken bits back together.
Sometimes we use titanium screws, but they're much more expensive. What we want to do is see what is the sort of risk-reward of that.
I would probably go, you know what, that would be really helpful.
I wouldn't want to begrudge the person, but I'd like to think that I would think twice, thrice, or even four times about saying, okay, I'll sign up for this thing so that you can use what happened to me way back then when I had the crash, but also all the other medical data that applies to me for every doctor surgery visit, every hospital visit, every surgery, every bit of medical treatment, possibly even including mental health treatment that I have for the rest of my natural life.
That to me would feel like I was probably letting myself in for something for which nobody had really thought through the possible consequences fully.
And that, sadly, is what seems to have happened in this case.
And I don't think they vet that they're great programmers or that they have experience in software engineering or that they have experience in cybersecurity or how to use GitHub properly, etc., etc.
And also, people who've signed up for this, some of them might be surprised to know that this elite, special group of trusted researchers already apparently numbers 20,000 people all around the globe.
You have to trust their computers as well, that they haven't got data scraping malware on them. You have to trust the network they're on.
You have to trust the employer or the owner or the influencer of the institution where they study. Which might be quite hard to determine.
So what happened is that for good academic reasons, it was decided that anyone who's using this data and who's done their research, obviously they'll write software code which will process it and manipulate it.
And it's very important in scientific research of this sort, medical or otherwise, that other people can repeat your experiments if they're given access to the data.
To see whether you cheated or made a mistake with the results.
And you tell the AI, grab all my code and upload it, and then upload the PDFs and publish the report and put out the press release.
And so, as you can imagine, in numerous cases, the code and at least some of the data with it, because it was in the same directory, got scooped up and uploaded to GitHub where anybody could download it.
Miggins, you know, from 13 Trellis Avenue. That's not going to happen, is it? So that's all right.
Just some snippets of your history, just enough critical information, and we'll see how little of it we need until we do a search and bang, we get one record.
And as soon as you get down to one record, then you know the magic anonymized ID that ties that record to all the others, which is the whole purpose of this project, right, that you can tie this surgery to that treatment, this counseling to that behavioral change, etc., but without knowing who it is.
And with this particular volunteer, they had the month and year in which she was born, which I think for most people in the UK, given the number of breaches so far, we should consider a matter of public record.
So she had a lot of medical history in there. And just that one piece of information, the date of birth – let's consider that free of charge.
Let's just assume to a first approximation everyone in the UK has a public date of birth.
You've re-identified them. You can then go through the database and replace their magic number 10538 or whatever it is with the text Alice of Trellis Avenue. Done.
So the boss of UK Biobank, that CEO, what's he had to say about this?
But it would require someone to have specific matching information from another source. That is what The Guardian has done.
The participant featured chose to give specific personal health information. The Guardian then cross-referenced this.
This is not a failure of our approach to data confidentiality because the participant shared the information to identify themselves.
I mean, it would be difficult, wouldn't it, finding out when someone else has had an operation? I mean, unless you handed it over?
Maybe a cybercriminal who's made millions off ransomware and has got plenty of money and time to burn, or a state-sponsored attacker who's funded to do this as a job.
I would imagine that there are very, very, very many people in every country of the world, including the UK, who, when they have been in hospital for some serious specific operation, have received get-well-soon messages on social media from their chums.
Wouldn't you think that?
Or you might notice if there's a picture in the ward, you might be able to reconstruct what it is. But here's an even easier way to do it.
So if you start by going, okay, let's focus on month, year, C-section, right?
You also have the issue that I believe there are something like 100,000 operations in the UK each year for hernia. That's the most common operation, apparently.
So suddenly the fact that this sounds like a very unlikely coincidence that an attacker could ever guess is not true.
But imagine if they actually had data that they had bought off the darkweb from an earlier breach from a healthcare institution that had been hit by ransomware.
Data had been stolen, the ransom wasn't paid, and the crooks decided to sell it on. Just imagine that on its own.
You would think that's quite annoying for those individuals who everyone now knows they had trouble with their throat in such a month year.
That would be bad enough, but that alone could now be enough to de-anonymize all of those people. And that's something like up to 50,000 people a year in the UK.
So Professor Srivouri's disclaimer, I don't think he's being disingenuous.
I think he may just genuinely not realize how easy it is to stitch together little bits of data from lots of sources.
I mean, Graham, if you think that we now have enough processing power around the world and enough data storage to build statistical inferencing models— some people call them LLMs or AIs— such that you can essentially reconstruct the full text of all the Harry Potter novels by steering this thing in the right way to guess what comes next.
If that's possible, then piecing together this guy had a tonsillectomy in March 1985 and also had a hernia operation in July 2006 and was born in March 1963.
The idea that you can't use that with this data to de-anonymize the person seems to be a bit of a forlorn hope.
Because nobody's actually been identified against their will so far, have they?
They were saying, well, we went to a volunteer and we happened to have one piece of information that they volunteered.
Because obviously, they didn't want to go on the darkweb and say, hey, let's see if we can buy illegal data and do it that way. Which I kind of suspect they could have done.
And I kind of suspect they wouldn't have got Alice from Trellis Avenue's data. They might have got 10, 20, 30, 50, 100 people's data.
So I think the problem here is not that people were forced to hand over data that then got abused by cybercriminals.
I just think that Professor Sir Rory may have underestimated the extent to which the de-identification of the data is reversible.
The fact that nobody's been caught doing this yet, it is not the same as it can't be done.
And we have to worry about this because, of course, the Health Service more and more wants to use our data, and it wants to give it to some companies who are promising to do remarkable things, which they say will help make our Health Service more efficient.
And I think there are understandable concerns about how well that data is going to be looked after. It sounds like it wasn't done well enough in this case.
Because you may feel so strongly about the value that you got from something like the National Health Service that you're prepared to take the risk of cybercriminals potentially getting at your stuff in the future, because the benefits to other people – from learning what went right and wrong in your treatment – just could make it all worth it.
But don't be seduced by the fact that, hey, this is absolutely fantastic. The de-identification or the anonymization of the data is bound to be enough.
And don't forget that data breaches are very sadly in the healthcare industry much more common than you might like.
And the whole thing involves chasing down evidence, filling in questionnaires and forms, updating the same spreadsheet cells over and over again.
So no more staring at the ceiling at 2 AM wondering whether you've got the right controls in place or whether one of your suppliers has been breached.
But this Vanta solution uses AI as well, and it's the useful kind, flagging risks, collecting evidence, slotting into the tools your team already uses so you move faster, scale without the headaches, and perhaps actually get some sleep.
Go to vanta.com/smashing to find out more.
Could be a funny story, a book that they've read, a TV show, movie, a record, a podcast, a website, or an app. Whatever they wish. It doesn't have to be security related necessarily.
Well, my Pick of the Week this week is not security related. My Pick of the Week this week is a website which tickled me. Everyone's gone mad about AI.
Everyone's using AI left, right, and center. Are you bored with AI or are you horrified with AI, duck?
And as is the case often with these AI chatbots, it's not going to give itself away for free, right?
And it's on this particular site, you earn some credits before you can ask questions.
And the way in which you earn credits on youraislopbores.me is you can answer questions other people have posted to the AI.
I've been playing with this, Duck, so I've actually had great fun pretending to be an AI, answering other people's questions that they've been put into what they may assume is an AI.
So for instance, someone asked me, can you draw a strawberry? And I thought, well, yes, I can draw a strawberry.
So I did a sort of rough sort of Microsoft Paint style picture of a strawberry. And then I wrote the word strawberry, albeit I put about 15 Rs in it.
And sent that off to them and they were happy.
Someone else said, oh, I'm thinking of going to Japan this year before World War III ruins everything. Am I safe to go?
And I said, well, you don't say where you're going to Japan from. That would be a useful and relevant detail.
So I was able to answer all these questions and I was earning credit so that I could then myself ask questions of the AI.
I have to say, I find it really addictive pretending to be an AI answering questions.
I'm going to put, uh, Pisces, dollar, dollar, dollar, excess error 404. I see what you mean.
Then maybe you won't get a token, but I imagine you're just bashing the keyboard now, aren't you?
I've got 8 tokens already. Why do eyes exist? I can't hear you. Motivational quote for people whose only goal today is not crying. This is getting a bit weird.
Use AI rather than all these computers to do things.
Duck, what's your pick of the week?
And it's a chap by the name of Vaughn Shanks.
The headline on the site is: A judge has sentenced a CISO to 8 consecutive hours on the RSA Conference Expo floor.
His crime: failing to disclose a breach to the Securities and Exchange Commission of the USA within the mandated 4-day window.
Legal experts at the SEC are calling the penalty proportionate and corrective. Former RSA attendees are calling it barbaric.
Anyway, the bit that Vaughan Shanks added is an explanation of what the RSA conference expo floor is, because people may not know, right? And his definition of it is fantastic.
He says the expo floor, for the uninitiated, is 50,000 square meters of vendors who all do the same thing, none of whom can quite explain what that thing is, and every single one of whom has, as of 18 months ago, always been an AI company.
The defendant is said to be in good spirits.
But sources close to the case warn that will change about 40 minutes in, somewhere between the third autonomous threat detection platform and the man offering to scan his badge just to send some resources over.
The sentence is believed to be the harshest handed down to a security executive since the SolarWinds incident.
What's the best way to do that?
And if you think I can create some fantastic content for you, whether it's written, spoken, or visual, please get in touch.
And don't forget to ensure you never miss another episode.
Follow Smashing Security in your favorite podcast app, such as Apple Podcasts, Spotify, and Pocket Casts for episodes, show notes, sponsorship info, guest lists, and the entire back catalog of around about 459 episodes.
Check out smashingsecurity.com. Until next time, cheerio. Bye-bye.
As members of Smashing Security Plus, they not only get episodes of the pod earlier than the great unwashed public, and ad-free episodes at that.
But they also get the chance to be pulled out of the hat and to be thanked here at the tail end of the show. So let's pick some of them out of the metaphorical hat right now.
First up, Marvin71, which suggests there are at least 70 other Marvins that they feel they need to distinguish themselves from. Frankly, I respect that.
A big hello to Elbow, which could be a name, it could be a joint. Could be a command you shout at someone who's hogging their armrest in the cinema. Not sure.
Watcha to MJ Lee, and to Lewis, who's decided one name is quite enough. Thank you.
Cheers to Travis West and to Heisenberg, who we are legally required to say we don't know and have never met.
A special welcome to one patron who's entered their name entirely in kanji characters and thus unpronounceable by this ignorant Englishman, but thank you anyway.
And finally, thank you to Karen Reynolds, as well as Alex Tasker and Richard Mortner, two names that sound like they belong in a very good detective novel.
If you'd like to join Smashing Security Plus and support the show, as well as get all of those benefits, just head over to smashingsecurity.com/plus for all of the details.
And I understand that not everyone can support the podcast in that way. And if that is true for you, do not fear.
You can still leave us a review or like the podcast, or best of all, tell your friends that you enjoy Smashing Security. Go on, encourage them to subscribe as well.
Well, that just about rounds off the show for this week. I hope you've enjoyed it.
Host: Graham Cluley
Guest: Paul Ducklin
Episode links:
- DOGE employee stole Social Security data and put it on a thumb drive, report says – TechCrunch.
- Foreign hacker in 2023 compromised Epstein files held by FBI, source and documents show – Reuters.
- New font-rendering trick hides malicious commands from AI tools – Bleeping Computer.
- Lockdown Mode – Apple support.
- Gone (Almost) Phishin’ – Matt Mullenweg.
- Listen to the Live Scam Call Targeting Matt Mullenweg’s Apple Account – YouTube.
- Confidential health records from UK BioBank project exposed online – The Guardian.
- A message from Professor Sir Rory Collins, Chief Executive and Principal Investigator of UK Biobank – UK BioBank.
- Psychotherapy data breach blackmailer sent to prison – Paul Ducklin.
- Your AI slop bores me.
- Post by Vaughan Shanks – LinkedIn.
- Judge Sentences CISO to 8 Consecutive Hours on RSA Expo Floor as Formal Punishment for Security Breach – The Exploit.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
Sponsored by:
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
- Adaptive Security – request a custom demo featuring a real CEO deepfake simulation.
- Meter – Network infrastructure for the enterprise. Get a free personalised demo.
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Join Smashing Security PLUS for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky, or join us on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.
