
Renowned cybersecurity expert Troy Hunt falls victim to a phishing attack, resulting in the exposure of thousands of subscriber details, and don’t lose your life savings in a whisky scam…
All this and more is discussed in the latest edition of the “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault.
Plus! Don’t miss our featured interview with Alastair Paterson, CEO and co-founder of Harmonic Security, discussing how companies can adopt Generative AI without putting their sensitive data at risk.
Warning: This podcast may contain nuts, adult themes, and rude language.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
My name's Graham Cluley.
Now, coming up on today's show, Graham, what do you got?
Plus, we have a featured interview with Alastair Paterson, the CEO and co-founder of Harmonic Security, a firm which enables companies to adopt generative AI without risking sensitive data.
All this and much more coming up on this episode of Smashing Security.
I agree with you. I think even the most security-savvy folks can fall foul of a phish.
And that has happened in the last week because well-known Australian cybersecurity pundit and creator of Have I Been Pwned, Troy Hunt, has become the victim of a phishing attack.
And he was due just a couple of days later to give a talk in our neck of the woods, Carole, in Oxford at the Blavatnik School of Government, where he was going to speak all about some of the lessons he'd learned from processing 15 billion records of breached data for his Have I Been Pwned project.
Well, it looked like this on Monday, so I screencapped this on Monday before I visited some other friends in government in London, and it had 877 different data breaches in there.
Unfortunately, as of today, it has 878 because my mailing list got popped yesterday, which was really ironic because I'd spent the afternoon the day before with the NCSC talking about how we really need to push unphishable two-factor authentication.
We really need to push passkeys. Let's get traction. How do we explain passkeys to normal people? And I'll go home and think about it.
I didn't think I'd be thinking about it this much.
Even if you're going super duper business class, even if you have the most luxury in the world, it's going to be pretty knackered.
And of course, Mailchimp are widely used; they're one of the market leaders.
And the email told him that his mailing list's sending privileges had been restricted due to a complaint of spam. Someone had said that his mailing list had spammed them.
And obviously Mailchimp thought, well, you know, we're going to have to prevent this mailing list from being used anymore if it is being used for spam.
And it told him that he had to review his account to ensure compliance with Mailchimp's policies in order to restore his sending privileges.
And then the webpage asked him for his two-factor authentication, one-time password, the thing which changes every 30 seconds or so, those 6 digits.
And he entered those as well, whereupon the webpage hung, which can happen all the time, of course. You know, you could be there on a laptop, it could be doing a Windows update.
So he went directly to Mailchimp's official website at mailchimp.com. He logged in and he changed his password.
16,000 records had been taken from Troy Hunt's mailing list of both current and former subscribers.
And he'd done the right thing in having 2FA, two-factor authentication, on his account, just like he'd been telling the NCSC and, you know, advising that we need to recommend the use of two-factor authentication and passkeys to harden accounts.
He was doing that all right, but he had entered those details onto a phishing page.
It can relay them to the genuine website, which means someone can log into your real account. And that is what they did in Troy's case.
They created an API key, they stole his mailing list, et cetera, et cetera, et cetera. Nasty, nasty, nasty.
What they wanted to do was create an API key, which meant they could continue to access his account; it would have kept working even if he later changed his password.
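The six-digit codes described here are standard time-based one-time passwords (RFC 6238). A minimal sketch, illustrative only and not Mailchimp's implementation, shows why they rotate every 30 seconds — and also why they're still phishable: a proxy that relays a freshly typed code to the real site within its window gets straight in.

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    # The moving factor is just the Unix time divided into 30-second steps,
    # which is why a stolen code stays valid for the rest of its window.
    counter = int(timestamp // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the
    # low nibble of the last digest byte, mask the sign bit, take the last digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59.
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Nothing in the code ties the value to the site that receives it, which is why passkeys (which are origin-bound) are called "unphishable" while TOTP is not.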
It did act correctly because it didn't fill in his details automatically on the bogus page.
So when he went to a page which wasn't Mailchimp.com, it didn't offer to enter them, but he did it manually anyway. I guess because he was tired. And so he didn't take enough care.
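The reason the password manager stayed silent is that managers bind saved credentials to the site's hostname. A toy sketch of that check — not any particular manager's logic, which typically compares registrable domains (eTLD+1) via the Public Suffix List rather than exact hostnames:

```python
from urllib.parse import urlparse


def should_autofill(saved_url: str, current_url: str) -> bool:
    """Toy autofill check: only offer credentials on the exact saved hostname."""
    saved = urlparse(saved_url).hostname
    current = urlparse(current_url).hostname
    # A look-alike phishing domain fails this comparison no matter how
    # convincing the page itself looks to a human.
    return saved is not None and saved == current


print(should_autofill("https://mailchimp.com/login",
                      "https://mailchimp-sso.example/login"))  # → False
```

The human typing the password manually, as Troy did, bypasses exactly this safeguard.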
Like you said, if you're under pressure—
He alerted people as quickly as possible. And he's also apologized. He said, sincere apologies to anyone impacted by this.
He says, on balance, I think this will do more good than harm, and I encourage everyone to share this experience broadly. So I'm doing my bit. I do agree.
I think commiserations to Troy and his subscribers, because obviously this is non-ideal, but it can happen to anyone.
And he wasn't the only one to have their Mailchimp list popped, by the way. A listener got in touch with me a couple of days ago.
And he told me how the mailing list for the Thunderbird email client was also impacted, it appears, in the same way.
So people who use that email program and have subscribed to the mailing list could have had their details exposed. And who knows how many others may have been impacted in the same way.
So it can happen to anyone. You don't need to be a big brand. For instance, last month, between the 19th and 21st of March, a group of students in France received a suspicious email.
How many students do you reckon received this email, Carole? So a bunch of students in France. Were you thinking half a dozen, 20?
It promised— this is my translation— it promised cracked games and free cheats.
So the kind of things which you'd think many middle school students would be interested in checking out.
Now, according to industry surveys, that's actually quite low compared to the typical phishing email. So 1 in 12 actually suggests the students were quite savvy.
Now, I wonder if the figure is proportionally so low because the link was sent via email rather than Snapchat.
And do you know what happened when they clicked on it?
It was a part of an operation called Operation Cactus with the motto or slogan, "Don't get pricked by a phishing attack." Okay.
"To win, I've never needed to cheat. Just hard work and rigour. If you've ever been tempted to download pirated software or a pirated app to cheat, know that you're exposing yourself to viruses. I'm also a gendarme." He's an esports champion.
You know about esports?
Just be good at— Just be really good. Just be good at games.
So the video also stresses to students the best practices to keep their digital lives safe, and highlights the penalties that courts could slam them with if they were ever to use underhand tricks to steal people's passwords.
'Cause that obviously can be a problem. Kids stealing passwords from other kids in order to do better at games and things, and they didn't want that to happen.
So yeah, in this case it was a phishing test, but either way, try and be a little bit more wary. Maybe take heed.
Maybe you have a little bit of money in the bank. That could be a reason. Also, maybe you feel like you've missed out on the opportunities.
You've heard about all these 20-year-olds earning millions through cryptocurrency. And you're thinking, oh, I'd love a bit of that. I wonder if I could make a fortune as well.
And it seems as the victim's age increases, so does the average loss. So Martin James, he's a consumer rights expert who works on the BBC's Rip Off Britain.
These are also people who, as you say, have more valuable assets: maybe a house without a mortgage, a car without a loan.
And as you say, maybe have time and money to invest, right? Cruises to plan if they're very rich. Who knows? So the City of London Police, they agree.
They say investment fraud destroys lives and is of particular concern to the older demographic of the UK public.
Victims who are targeted are those with a healthy amount of savings who've put their hard-earned money away for a rainy day or to help support family and have been robbed of those opportunities.
So common scams that have been cited to have a particular penchant for the perhaps older individual, can you name a few, Graham?
This is fake investment opportunities promising high returns with little risk.
And it's just the professional-looking websites and the social media ads with, you know, a celeb and with a quote, and it lures you in to invest with them.
There's pension scams because think of our demographic, right? This is where scammers target individuals looking to access their pension savings or perhaps change it.
A lot of people do that, right? They're, I want to just make my nest egg or start a pension plan or move it over to a different investment firm.
A lot of people offer free pension reviews, high-return investment opportunities, or early access to pension funds.
And there's all these high-pressure tactics to convince victims to transfer all the pensions into this fraudulent scheme. It's just outrageous.
You work your whole life to build this little fund, and it vanishes when you need it most.
They will use the legit name, legit address, and legit firm reference number, FRN, for real companies authorized by the Financial Conduct Authority in the UK.
And then they create these very realistic-looking websites and documents that closely resemble those of the genuine companies, use the genuine names of investment managers and financial advisors, all in order to get you to sign the money over to them.
You just interact with it online. It just happens you're with the wrong website.
And it's the same thing— glossy brochures, professional websites, convincing sales pitches that lure in the victims.
And then, of course, once they part with cash, no further contact is made. They're gone.
Perhaps they love the amazing returns they were promised if they invested in a barrel or three.
Hundreds of people were duped into plowing their life savings and pensions into casks that were overpriced or didn't exist, while some individual casks or barrels were sold multiple times to different investors.
They could drink some of it and then just top it up with something else over time.
So the whisky market's popularity has grown rapidly because people talk about making huge returns on rare whisky.
So typically investors, these are legit investors, will buy a cask of whisky when it's first produced and then hope that it rises in value as the spirit ages in the barrel.
And it takes 3 years for a spirit to become Scotch whisky in a cask. I didn't know that.
And investors are encouraged to keep barrels for up to 10 years or more to maximize the returns.
So legit traders are there doing this as a legit business for people that invest early, but a lack of regulation has enabled fraudsters to exploit the market.
So there's no central authority regulating or tracking ownership of casks, which makes it very difficult to verify claims as to whether the whisky is legit or not.
You don't understand because you don't drink. I can hear the way you're going, hmm.
But I mean, certainly wine connoisseurs, they get very poncy about it all, don't they? Oh, lovely. And I imagine the whisky people do as well.
That's horrible.
My advice though is to watch out for unexpected investment offers. And especially if you're from the older generation, it can be quite exciting to get an email.
You know, I get a lot of emails and hide from them at every opportunity.
And I see people of maybe, you know, a few decades older than me, because they're not in that same system, if they get one message, it's quite exciting.
And of course, the final question is, if you happen to fall prey to one of these investment scams, can you get your money back?
And you might remember we talked about this a few years ago, but investment scams can fall into that category of authorized push payments.
This means that a scammer tricks the victim, or let's say me, into sending funds by bank transfer into an account that the scammer controls.
And these are referred to as authorized because the victim voluntarily sent the money.
And this means that any victim of a UK-based APP fraud may be able to claim their money back from their bank so long as they weren't grossly negligent. Now, grossly negligent.
But if you're in the UK and you have been scammed, you should certainly report it quickly as you might be able to get your money back if the payment was within the UK and less than £85,000.
There's a few contingencies. It's like small print.
They give security teams total visibility into how AI is being used across their orgs while making sure sensitive data never leaks into GenAI or AI-powered SaaS.
No complicated regex, no training on customer data, just instant, accurate protection.
Help your workforce embrace GenAI securely. Visit harmonic.security to learn more. That's harmonic.security.
Head to vanta.com/smashing to learn more. That's Vanta, V-A-N-T-A.com/smashing. And thanks to Vanta, for sponsoring Smashing Security.
Smashing Security is sponsored this week by the Acronis Threat Research Unit.
They're a dedicated team of cybersecurity experts inside Acronis specializing in threat intelligence, AI, and risk management.
Acronis's Threat Research Unit stays ahead of cyber risks to keep MSPs and their clients safe from attack, releasing security updates, threat intelligence, and monitoring the global threat landscape around the clock.
That's smashingsecurity.com/acronis. And thanks to Acronis for sponsoring the show.
It doesn't have to be security-related necessarily.
Have you heard of something called Adolescence, Carole, as a TV series rather than actually the word adolescence? Have you heard of Adolescence on Netflix?
It looks into what happens to this young man and the impact on his family and how unpoliced social media and the indoctrinating voices of — well, you know, online voices like Andrew Tate, I suppose, may have contributed to this horror.
Now, as I said, there are 4 parts to this.
Part 1, where Jamie is arrested in a dawn raid and questioned at the police station, and episode 3, where he's interviewed by a child psychologist, really made a big impact on me.
Those were my favorite episodes. I'd be fascinated to hear what you think about it.
Jamie, this 13-year-old, is played by an extraordinary youngster called Owen Cooper, who is probably gonna win every acting award going.
And the other really compelling thing about this, other than the acting, which throughout the entire programme is unbelievable, is the extraordinary camera trickery, because each episode is recorded in a single take.
Suddenly, you realize you're flying up in the air, and they take you down somewhere again, all in one take. It's just extraordinary.
So, from the technical point of view as well, it's unbelievable. The acting is really, really very, very good as well.
But you are left thinking, I think, at the end of the program, "What are we gonna take from all this?" And it's certainly thought-provoking, I think.
I mean, I was conscious that there was a huge focus on the family of the accused young boy and the impact on them.
But there's nothing really about the victim or her family and what they're dealing with. I thought at one point they were going to switch to them for an episode.
What you do get is a sense of this violent rage that can be bubbling inside people, and sometimes it doesn't take much for it to come out. And—
And he said, "This is really good, Dad." And apparently, there have been pushes for classrooms up and down the country to watch this drama for what it's gonna say about knife crime.
And I think it puts a— Towards the end, there's a lot of onus on parents, how they should be doing a better job of bringing up their kids.
But you also end up thinking there's been a lack of investment over the years, I think, by the state as well. So, I think maybe kids have been left with nothing to do.
There aren't enough community centers. Kids are left hanging around street corners or on their computers rather than engaging with each other.
And that's a sort of thing which it doesn't really address. But it's probably one of the most impressive pieces of TV which I've seen this year.
And I think it will win many, many awards. Anyway, Adolescence, extraordinary television. It's on Netflix. And I would recommend it.
And it's responsible for the water supply and the wastewater treatment in most of Greater London, Luton, the Thames Valley, Surrey, Gloucestershire, et cetera, et cetera.
It's the UK's largest water and wastewater service company servicing 16 million humans.
Is there any particular reason, any sort of crisis which has happened at Thames Water recently?
Or its brand reputation. Would you agree with that, Cluley?
And yeah, it's being pushed harder and harder because there are ever more people in the area to service. So anyway, what does Thames Water do?
Well, the UK's largest water company has lifted the lid on what goes on behind the scenes in a two-part observational documentary which aired on BBC earlier this month.
Oh, the episodes, filmed over six months, followed Thames Water colleagues as they navigate the financial position of the business, work to improve company performance, and, Graham, reveal the day-to-day challenges of working on the front line and in the public eye.
Yeah, oh my God, this is gripping TV.
Because it was the brainchild, they announced this early on, of the corp comms director of Thames Water.
It's kind of heartbreaking when you see it. And I think most of the staff seem decent folk trying to do a good job. The CEO comes off as a bit of a posh twonk though.
The New Statesman said of the Thames Water CEO Chris Weston, quote, Chris Weston, a cherry-faced former army man with bog brush hair who wears polo shirts and cocky jerkins and a leather band on his wrist à la Prince Harry.
And what a prize chump he is.
I mean, Thames Water have been quite rightly criticised because, as you said, they have made millions and millions and millions in profits which have been doled out to the people at the top and the major investors, but they haven't invested.
How does a company like a waterworks, which we are all dependent upon, make money when the people it services hate it?
So Smashing Security listeners, we are pleased to be speaking with Alastair Paterson.
He's the CEO and co-founder of Harmonic Security, and we're going to be talking about how one can maintain a security posture amid the influx of GenAI tools.
Alastair, welcome to the show. It's so great to have you here.
So I started out life back in the UK and I actually set up my first company there, Digital Shadows, which was in the threat intel space.
So I did that, you know, that was kitchen table, just two of us initially all the way back in 2011.
And then in 2015, I got on the plane to Silicon Valley, as you do when you're in tech and you hit a certain size. And we were fortunate enough to raise a Series A round there.
And I moved out in 2015, and I've been in the Bay Area ever since. So just coming up on 10 years actually here. Digital Shadows had a really great journey.
I mean, we grew up to about 500 customers globally. We were about 160, 170 employees when we got acquired back in July of '22. And so that was a great journey.
I learned a ton along the way and I thought I was going to take a bit of time out after that. But yeah, then, you know, July '22, we get acquired.
November '22, ChatGPT came out and the world changed.
So yeah, I couldn't sit on the sidelines for long and my brain just couldn't help but start ticking into, you know, number one, all the risks, of course, that are going to come from this GenAI revolution.
But secondly, what can we use it for, right? How can we apply this magic technology to make companies more secure?
So this time, instead of the kitchen table in London, it was a fast start off to the races here in the Valley for Harmonic.
And employees from the finance department all the way to development and marketing, they all want to use an AI tool for something to make their lives more efficient, effective, make their jobs faster.
And correct me if I'm wrong, I think this is happening across all industries, right?
It's really that tension between the employees and the business wanting to adopt GenAI as fast as possible, but see, you've got privacy, security, compliance, legal that are all just saying, well, hey, you know, let's not lose our shit here while we're leaning in.
So yeah, that's the tension exactly where we sit is in that pivot point between the two.
So how is the infosecurity guy keeping the network and the perimeter and the ecosystem secure?
Even pretty mature large organizations we work with typically don't have a great handle on what's being used where and where their sensitive data is going, more importantly.
And so part one is just understanding that picture. There's obviously a whole lot of risks around GenAI adoption, and you can go and look at any number of frameworks.
It feels like there are hundreds of them out there now that will tell you about AI risk.
But I think when you really boil it down, the number one risk comes back to worries about sensitive data leaking out of the business and going somewhere they shouldn't go.
So whether that's your IP becoming part of someone else's training dataset, or frankly just being stored insecurely somewhere, or it's employee data and the compliance challenge.
They usually don't have a great handle on what's happening.
Part one, when they're worrying about their data leaking out, they turn to sometimes their existing tools if they have DLP.
That sort of three-letter word's usually a four-letter word in our industry, of course. So yeah, I mean, it's false positives everywhere.
It's a nightmare for the security team, it never really works, and we've been doing it for 20 years.
This idea that you can use regex and rules to spot sensitive data kind of works for PII, but not really, and credit cards and not much else. So that's one nightmare.
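The regex-and-rules approach that "kind of works" for credit cards usually means a digit pattern plus a Luhn checksum. A minimal sketch of that classic DLP rule — purely illustrative, not Harmonic's code — also shows why it's noisy: any digit run that happens to pass the checksum (roughly 1 in 10 do) gets flagged.

```python
import re

# Candidate card numbers: 13-19 digits, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")


def luhn_valid(candidate: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from results over 9, and check the total mod 10 is zero."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_cards(text: str) -> list[str]:
    """Flag digit runs that look like card numbers and pass Luhn."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]


print(find_cards("invoice ref 1234, card 4111 1111 1111 1111"))
```

Structured identifiers like this are the easy case; unstructured IP — source code, legal drafts, deal terms — has no regex at all, which is the gap the model-based approach discussed below is aimed at.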
The other nightmare is to try to label all the data in the business, which again has been around as a concept for a long time, but I think it's just never worked, right?
We keep trying it, but finding all the data is hard enough, let alone labeling it accurately.
So what we typically see there is probably two-year programs that get spun up that never complete and cause a lot of pain and friction for everybody.
The other option we see is there's a new brand of AI firewalls that have come out, and there's quite a few of those that are pretty good at the visibility piece.
So they'll show you what AI tools are in use and give you some level of governance.
But where they struggle is on the data protection again, because they're using the same DLP that we're using in the gateways that we're using on endpoint, and we know it just never really works.
And so the final option that we see probably most often, we call sit, block, and wait.
Whether it's Microsoft, it's OpenAI, it's Google, and they've got a safe AI that they try to point all the employees at and then they try to block everything else.
Maybe it's the categorization that they get from their gateway or something like that. But of course, that, you know, I think it's a reasonable intermediate step.
But it's obviously a short-term measure here because employees don't just want to use the sanctioned AI, right?
They're at home getting used to using all kinds of tools and they want to do the same in the workplace.
And there's this, as you mentioned, it's not just about the big, you know, 5 or 6 foundational models out there.
There's tons of other great AI tools now, whether you're building a slide deck or doing your finances or legal.
I was talking to a CISO recently who had employees emailing things to themselves, to their personal email, to run them through generative AI apps to then fire them back into the business over email, just getting around all the controls that way.
And we, you know, it's tough. We shouldn't be forcing employees that are just trying to get the job done to deal with it that way around.
But the job's difficult because you've got to balance that against security, right? So, it's a really tricky job, especially with a new technology.
So, how do you, Harmonic, help them? What can you give them?
And the reason for that is we're trying to be the easy button in this space where there isn't really an easy button in data protection.
And so we start off by, yes, giving you the visibility into all of the GenAI adoption and where the sensitive data is going.
The real differentiator for us is we've actually built our own small language models for data protection. So there's more than 20 of them.
They're not going to do the kids' homework or write Shakespeare.
But what they are really good at is identifying sensitive data exceptionally accurately and with all the business context around it as well.
You can think of it as a bit like having a smart human looking at all the data leaving the business and figuring out where the sensitive stuff is.
Across our client base, which is made up of a lot of large enterprises now across the US and Europe, we've analyzed all the sensitive data that we see, and actually 8% of all prompt data has some sort of sensitive business content in it.
And whether that's customer information, IP, legal documentation, anything like that. And it really is a significant problem.
The beauty of Harmonic's small language models is that we can not only stop things like PII and identify credit cards and things like that, which you could do to some extent before, but with the model-based approach, we can detect all kinds of unstructured IP as well.
So we've recently been rolled out to one of the global chip designers, for example, looking after their IP and sensitive data.
We're doing the same for a large automobile company and a lot of tech businesses and financials.
And because we have these models and we're so accurate, instead of making the security team's life a nightmare by firing over a ton of false positives, we can actually resolve these issues directly with end users at the point of data loss.
So at the moment, they're about to expose the company to some sort of sensitive data incident.
We jump in the middle, we stop the data leaking out, and we coach and nudge them towards safe alternatives and behaviors.
So instead of having to just try and block everything, you put Harmonic in the middle, we'll save the company from any issues without having to just block everyone and force them around existing security controls.
But also the end user is not just being caged. Yeah, I like the idea of leaning them in the right direction. It's almost like cybersecurity training, really.
We didn't even realize. That's cool, right?
Let's enable them to do that with an enterprise agreement around one of the tools that they're using that's safe then, and we don't mind our company data going into, or whatever it might be, right?
Same for coding assistants for the engineering team, the same for sales and go-to-market tooling and all that kind of stuff.
But your real pain point can be on occasion the C-level, the C-suite, because they may not be as au fait with all the things that you might require to secure, and they might be very, very excited about GenAI tools.
So is there any kind of reporting within the tool or anything that you do to help that bridge between the C-suite and the infosecurity kind of be able to prove the point of the importance of it?
And then most of them have some sort of steering committee for AI and security is always represented on there.
Sometimes it even chairs that committee, which is slightly surprising to me, but they're very involved.
And I think AI is the opportunity to make the security leaders the kind of the superheroes in the business by making them the enablers.
And so typically what we see is the security team gets given the controls responsibility for the policy. And historically, there's no great controls.
But if you roll out Harmonic, we try to make the CISO and the security team the superheroes here in this movie because we can show them, we can give that visibility to the whole business in terms of which AI is being used by which teams, you know, where are the risks, who's using personal accounts and free editions of tools instead of the corporate editions that we'd like them to use.
But it's that kind of strategic view of AI adoption.
And, you know, maybe we've sanctioned a couple of tools, but are people actually using them or are they using some other third parties that we didn't even know about? Right.
And so that picture is not interesting just to security. It's interesting to the rest of the business and the rest of that AI steering committee.
So we give that reporting, that visibility and that assurance to the security team that they can then bring to that AI committee and have something to talk about and kind of show the rest of the company that they're on top of this and acting as an enabler for the business.
And that's our primary goal.
And I just feel really bad for these security teams and leaders because they have so much on their plate on any given day anyway.
And then the idea that they've now got to wrap their arms around this, you know, one of these 200 different frameworks for AI security on top of everything else is pretty wild.
I think the cool thing with Harmonic is you can roll this out in 30 minutes.
And you don't take the heat from the business that wants to adopt these tools. You know, be the hero: enable them and give them that visibility, with the right controls around it. Bring that security policy to life for once, instead of it just sitting there on the shelf.
So yeah, zero-touch data protection is the way that we go about that.
I totally feel the pain for them and the relief.
And to ensure you never miss another episode, follow Smashing Security in your favorite podcast app, such as Apple Podcasts, Spotify, and Pocket Casts.
It's their support that helps us give you this show for free.
For episode show notes, sponsorship info, guest lists, and the entire back catalog of more than 410 episodes, check out smashingsecurity.com.
Hosts:
- Graham Cluley
- Carole Theriault
Episode links:
- A Sneaky Phish Just Grabbed my Mailchimp Mailing List – Troy Hunt.
- Thunderbird breach notice.
- Opération Cactus – Le Groupement d’Intérêt Public Action contre la Cybermalveillance.
- Cancer patient lost life savings to whisky barrel scammers – BBC.
- How to spot an investment scam – Saga Money.
- More than £612 million was lost to investment fraud in the UK last year – City of London Police.
- Adolescence – Netflix.
- Behind the scenes of Adolescence – YouTube.
- Thames Water: Inside the Crisis – BBC iPlayer.
- Who let the BBC inside Thames Water? – The New Statesman.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
Sponsored by:
- Harmonic – Let your teams adopt AI tools safely by protecting sensitive data in real time with minimal effort. Harmonic Security gives you full control and stops leaks so your teams can innovate confidently.
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
- Acronis Threat Research Unit – Your secret weapon against cyber attacks. Access the reports now.
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Become a Patreon supporter for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky, or join us on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.


