
Welcome to the largest educational data breach in history – affecting nearly 9,000 institutions, every Ivy League university, and 30 million students mid-finals. When Canvas’s parent company refused to pay and announced they had deployed “security patches” instead, the hackers were less than impressed. So they came back through the cat flap.
Meanwhile, a famous finance expert’s face has been showing up on Facebook adverts promising hot stock tips and exclusive WhatsApp investment groups. Spoiler: it isn’t him, the tips aren’t real, and you’re about to be scammed.
Plus we chat to Mike Nichols of Elastic about how the SOC isn't dying, attackers and defenders are both deploying AI agents, and how the real security crisis is no longer human users – it's the bots acting on their behalf.
All this and more in episode 467 of the “Smashing Security” podcast with cybersecurity expert and keynote speaker Graham Cluley, and special guest Danny Palmer.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
How Shiny Hunters hack the world's biggest universities, with Graham Cluley and special guest Danny Palmer. Hello, hello, and welcome to Smashing Security, episode 467.
My name's Graham Cluley.
So right now things are ramping up for Infosecurity Europe, which is in about a month's time. And yeah, it's getting really, really busy.
Turns out putting on a conference is a very hefty task.
There are loads of people to meet, loads of talks to see, networking, that sort of thing. And yeah, interesting keynotes this year from various people.
I'll be seeing it from the other side of the fence this time, as it were.
So I'll be there at the Infosecurity Magazine stand rather than just pottering around and doing what I want to do myself.
So at the ExCeL in the first week of June next month, I think currently the sign-up is still free. You don't have to pay anything.
I think if you sign up after about the middle of May, you have to pay the grand total of about £49 to sign up, I think it is these days.
So if you're intrigued about that, come along and find out more. Well, before we kick off, let's thank this week's wonderful sponsors: Elastic, CoreView, and Vanta.
We'll be hearing more about them later on the show.
This week on Smashing Security, we won't be talking about the water company that failed to notice for almost two years that it had been hit by the Clop ransomware gang and how it's now been fined almost £1 million.
You'll hear no discussion of how a US bank has reported itself to regulators after uploading large amounts of nonpublic information about its customers to an unauthorized AI application.
And we won't even mention how hackers are abusing Google Ads and Claude AI to push malware onto Macs. So Danny, what are you going to be talking about this week?
Plus, don't miss our featured interview with Mike Nichols of Elastic Security on why the SOC isn't dying, attackers and defenders are both deploying AI agents, and how the real security crisis is no longer human users, it's the bots acting on their behalf.
All this and much more coming up in this episode of Smashing Security. This week's episode is supported by Vanta. Joe, what's your 2 AM security worry?
Well, enter Vanta. Vanta automates the manual misery so you can stop sweating over spreadsheets, chasing audit evidence, and filling in endless questionnaires.
That's vanta.com/smashing. And listeners, you can get $1,000 off.
And you've not slept properly for about 11 days, which frankly is a bit like being a cybersecurity journalist, I think.
30 million users. There's 8,000 institutions relying on this service. But Harvard, Princeton, Columbia, Georgetown, Duke, Virginia Tech, they all rely on Canvas.
And you log in to grab your study notes or to check your grades or to submit the assignments you finally started at 3 o'clock this morning.
And instead of your normal dashboard, what you see is a black screen rimmed in ominous red.
For them, it's all emojis. It's all rhubarbs or aubergines or—
I was at university at that point where it was just on the cusp of becoming digital in sort of the mid-noughties. But from what it sounds like, a lot of it is now online.
With what it sounds like a bit of a monopoly on this platform of how universities do things, which seem to have turned out not very good, it seems.
I mean, this is by some margin, apparently, the largest educational data breach in the history of educational data breaches. And there've been a few.
So Shiny Hunters, we always talk about Shiny Hunters.
Apparently the shiny Pokémon are the rarer Pokémon.
That is the name of a shady information sharing network in the sci-fi RPG Mass Effect. So yeah, a lot of them seem to get names from these sort of things as well.
It's almost as if there's a certain type of person that is engaged in this sort of activity.
Around 275 million records from nearly 9,000 institutions, not only across the United States, but the UK, Canada, Australia, New Zealand, et cetera, et cetera, including allegedly every single Ivy League university.
And it's not just student IDs and email addresses, but there are also apparently several billion private messages between students and teachers, which were sent via the system.
Now, I was wondering, well, what kind of messages might students have been sending their teachers and professors?
And remembering back to when I would communicate during university times, you know, I imagine there's a fair percentage of them which are "my dog ate my homework."
And so my assignment hasn't been finished.
So they revoked the access, they called in forensics, digital forensics, and on May 1st, they put out one of those carefully worded statements.
If you went to the pub and said to your friends, "I was hacked by a threat actor," they wouldn't know what you're talking about.
Oh, good. And two days later, they let the affected schools know about it, and they confirmed, yeah, names, emails, student IDs, messages got out.
Shiny Hunters demanded a ransom, they gave a deadline of May 6th, basically the usual story, which is pay up or we're gonna leak it.
They don't just ransom your stuff, they'll blackmail you as well, you know, because they are efficient, I guess, if you can say that.
And instead, what they did was they announced that they had deployed what they call— this is a technical term, Danny.
I know you're a technical cybersecurity journalist, just to brace yourself for this one. They deployed what they call security patches, apparently. Have you heard of such things?
Apparently this is what they did.
But I'm not sure if that's the response to a ransomware incident.
Because it seems to have riled them somewhat, because at lunchtime Pacific on May 7th, right in the middle of the finals, when impact was going to be at its worst, they struck again.
Oh, instead of contacting us to resolve it, they ignored us and did some, and then they put in quotes, "security patches", rather mockingly. Clearly they were not impressed.
So this is the cybercrime equivalent of breaking into someone's house, getting kicked out, you watch someone put a little Yale lock on the back door, and then you come in through the cat flap, piss all over the floor.
Well, I suppose in one way, the company hasn't tried to, they haven't negotiated with the attackers to pay the ransom, which I suppose is to be applauded, but.
And now we know how they got back in because Instructure has had to admit that the vector for this second attack— oh gosh— was an issue related to their free-for-teacher accounts.
So these are accounts which are handed out by Canvas free to any educator who wants to mess about with the platform.
So you don't need to be affiliated with any institution, there's no verification.
So it's free as in beer, free as in puppies, free as in Nelson Mandela, free as in free access for any cybercrime gang who fancies a poke about.
In short, the backdoor was held open with a little wedge labeled Teachers Welcome. So how did Instructure fix this problem with the free-for-teachers account?
So on the Friday, they issued a statement saying, we've made the difficult decision to temporarily shut down our free-for-teachers account.
This gives us confidence to restore access to Canvas. So, I mean, obviously a very difficult decision for them.
Difficult as in not very difficult at all, because they decided to close the window that the burglar kept on coming through.
There's a student called Brianna Bush. And she'd actually been filing her own article. I dunno if it was for a student newspaper or something about the Canvas breach the week before.
So she filed the article, she opened her laptop. Oh no.
To submit her work for her finals, instantly saw the ransom note, thought, crikey, you know, she says, my jaw literally dropped.
Clicked refresh, and then she saw it said, currently experiencing maintenance. So down for maintenance, which of course is one way to hide, I guess, the ransom note.
Arizona State just stopped everything basically as a consequence. Gizmodo said students were experiencing a waking educational nightmare.
And of course, all of this was perpetrated by Shiny Hunters, the Pokémon fans, who, it's generally accepted, are a loose affiliation of teenagers based in the United States and the UK.
And they've been causing huge problems everywhere.
An obvious question now is, well, has Instructure, the parent company of Canvas, now actually paid up or not? Have any of the schools paid a ransom? That's an interesting one.
Is, if an individual school pays, do they get their access back or is it just the parent company? I wonder.
Would we potentially be liable if some of this information turns out to be sensitive? I mean, large part of this is happening in America. They are rather legalistic, aren't they?
Yeah. First thing they do is call the lawyers. God.
And obviously you feel quite bad, you know, for the students who are hit by this, because if they are preparing for an exam which is actually happening on this day, and suddenly, at very short notice, it's not, that's an issue.
Students have had quite a hard time the last few years really, because you had this. Oh yeah. Then you've had the whole COVID thing.
I couldn't imagine going to university and just doing it all from behind a laptop screen.
The May 12th deadline is, by the time you're listening to this, it's either looming or it's just whooshed past.
What's interesting is that Canvas has been removed from the Shiny Hunters extortion page. Hello folks, Graham from the future here interrupting Graham from the past.
And the reason why I'm doing this is because since I recorded the show with Danny, there has been a development in this story which I'm able to insert just before publication.
Instructure, the company behind Canvas, has now issued a statement confirming that it has reached an agreement with the Shiny Hunters gang that was extorting it.
They say that the hackers have returned the stolen data to them. They say that they have received digital confirmation that copies of the data were destroyed.
Because of course you can trust those. And they've also been reassured by the hackers that none of its customers will be extorted as a part of the incident. Hmm.
Well, let's just hope we can trust criminals that they're fine, upstanding individuals whose words can be trusted, eh?
There's no word on how much Instructure has paid for this assurance, and there's also no mention as to whether the data won't be sold on to others who might use it for the purposes of identity theft and fraud, which could be a little bit of a loophole in the agreement, perhaps.
Anyway, sorry for interrupting. Then let's travel back to the past. I'll just give the old time rotor a kick and here we go. But there are obviously lessons here, right?
So one lesson is saying we've contained the incident. That's a very brave statement to make, isn't it?
If you give anyone in the world an account on your production system with no verification, this free for teachers thing, just ticking a box and yeah, I'm a teacher.
What you actually had was a free for anyone with a web browser, free for anyone, which includes that small proportion of people who might be interested in scurrying off with terabytes of your data.
Well, yeah, because it's been abused by a very tiny percentage of people, it's now closed to everyone. This is why we can't have nice things.
That's what people say, isn't it, about this sort of thing? That's right.
And of course, if a ransomware gang gives you a deadline and you respond with security patches, make sure those patches are really doing all of the job necessary to make sure that those hackers can't get back in, because in this case they kept on coming back.
Now, time for a quick word from our friends at CoreView. Joe, quick question for you. How confident are you in your Microsoft 365 security posture?
You've got your coffee, you're wearing your second best hoodie.
You're feeling pretty good about your Microsoft 365 setup because you checked Purview, you tightened conditional access, and frankly, you deserve a biscuit.
Turns out some quiet little permission crept wider over three years. A policy exception that nobody had reviewed. The kind of thing that's invisible until it isn't.
It's the drift, the exceptions, the little permissions you stopped looking at because, well, you assumed they were fine. And the spoiler is that they're often not.
And if you like a hand setting it up, their team will happily walk you through it.
So all you've got to do is visit smashingsecurity.com/coreview to download your free copy of the tool, and even you will be able to answer the question, how secure is your Microsoft 365 tenant?
And thanks to CoreView for supporting the show.
It is literally in the job title.
Conveniently, one of them who is regularly seen in the media, on television, online, in newspaper articles, was promoting themselves in an advertising campaign on Facebook, on social media, offering you expert insights on how to make money on the stock market.
It all sounds rather good. I've never invested in stocks, but if you wanted advice on how to do it, I imagine, yeah, the place you'd go to would be a financial expert.
Because of course, why would you take advice through secondhand television spots or their articles in the newspaper when you can get direct tips from the experts themselves? Yes.
I mean, they're there in your WhatsApp. They've asked you to join their exclusive WhatsApp channel to receive these updates.
Well, yeah, I think you might have twigged here that this isn't all quite what it seems. It's a big old scam.
Which has been detailed by researchers and fraud analysts at cybersecurity company Group-IB. For starters, this financial expert isn't even involved in the scheme at all.
I mean, we're all shocked. I know. So this is a well-known legitimate financial expert.
So the researchers don't name who it is, but as part of this scam, there are deepfakes being used in promotional videos saying, hi, I am so-and-so, and I have these great financial tips for you.
I mean, if they've been on TV and radio and things a lot, you can quite easily create a deepfake these days. So it's drawn people in.
Right now, Graham, some ne'er-do-well listening to this could be thinking about creating scams based on the voices or likenesses of you or me.
They'd probably claim to be offering some sort of cybersecurity advice in exchange for bitcoin or something.
Just imagine being the person who has to piece together our voices and our faces to make us look as though we're not stumbling over our words, that we're actually able to communicate effectively.
Americans would always comment on my teeth. Way to stereotype us, guys.
Anyway, these adverts— which often remained active for only a few hours on social media platforms like Facebook— promised high-quality stock recommendations to anyone who went to click through to this advert, and those who did were encouraged to join a private WhatsApp group, which they were told was run by this financial expert.
I'm sure that's all financial experts want to do: give away their advice in their personal time to randoms on the internet they've given their phone number to.
They weren't gonna be helping the people joining this group. They just wanted to make money themselves.
So once part of the group, users received instructions on what stocks to buy, and this was all on a legitimate trading platform, which is not named.
So they were in the WhatsApp group, they were told to use—
But they're using this legitimate trading platform, they're using these social media posts, two separate platforms, add in WhatsApp as a third separate platform, and say buy the stock and wait for instructions on when to sell it.
With the idea that when you sell it, it'll be worth more and you can go, okay, great, I've made some money on the stock market.
On first look, this appears to be great financial advice. The group was full of people posting about how they'd made money from this because their stock prices went up.
These people didn't exist.
They were fake profiles run by the ringleaders of the campaign who were in this WhatsApp group just to generate trust in the system and perhaps maybe helpfully drown out comments of anyone who might be suspicious that this might be a big old fake.
They tell people to buy these stocks of these companies. That drives the stock price up.
But then when it has reached a much higher value, the attackers sell their stock at the peak for a significant profit and crash the entire stock.
So the investors essentially lose everything they put in while the scammers can walk off with thousands or potentially even millions, depending on how many people they've roped into these scams.
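What's being described here is a classic pump-and-dump. As a rough illustration of why the arithmetic always favours the ringleaders (this is our own toy sketch, not anything from the Group-IB report, and every number in it is invented):

```python
# Toy model of a pump-and-dump: coordinated buying inflates the price,
# the scammers sell their pre-bought shares at the peak, and the price
# collapses on the remaining "investors". All figures are made up.

def simulate_pump_and_dump(start_price: float, victims: int,
                           buy_per_victim: float,
                           scammer_shares: float) -> dict:
    """Very rough model of the scam's economics."""
    price = start_price
    # Pump: each coordinated buy nudges the price up (assumed 2% impact).
    for _ in range(victims):
        price *= 1.02
    peak = price

    # Dump: the scammers bought at the start and sell everything at the peak.
    scammer_profit = scammer_shares * (peak - start_price)

    # The crash: assume the price collapses to half its starting value,
    # so victims who bought near the peak lose most of their stake.
    crashed = start_price * 0.5
    victim_loss_each = buy_per_victim * (1 - crashed / peak)

    return {"peak": round(peak, 2),
            "scammer_profit": round(scammer_profit, 2),
            "victim_loss_each": round(victim_loss_each, 2)}

result = simulate_pump_and_dump(10.0, victims=100,
                                buy_per_victim=1000.0,
                                scammer_shares=10_000)
```

Even with modest per-buy price impact, a hundred coordinated buys multiply the price several times over, which is why the WhatsApp group's fake "I made money!" posts are so important: they keep the pump going until the dump.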
Yeah, of course, it's a financial scam and you can't have financial scams without involving cryptocurrency and bitcoin these days.
But no, the platform only accepts payments being made by cryptocurrency. So I presume Bitcoin. Are people not using the Melania coin?
It looks like major online financial platforms, complete with live feeds of financial information, but they're ultimately fake investment platforms.
So the users who've been redirected to these, they are first of all invited to enter their details to pass a compliance check to verify their identity.
Which I guess if you're scammers, you're going to take that and store that away for a rainy day.
So yeah, on top of the cryptocurrency scam, there is this element as well.
So they are told to make their deposit into the platform, a platform which suggests that any investment they make has very quick returns.
So it was just, you know, you'll make a return on this, you know, every single day almost.
And it even allows the users to make small withdrawals of the cryptocurrency they've put in, in order to, I guess, have that legitimate air about it.
But if they say, oh, okay, wow, okay, I put in 10 Bitcoin, it now says it's worth 15 Bitcoin, I want to take that out.
Ooh, the site suggests, no, we can't do that right now, I'm afraid.
It claims that the users need to do things like fill out these forms to pay tax, or there's additional charges you need to pay, or there's the classic technical error, which means you can't do anything right now.
Doing some maintenance at the moment, so you can't withdraw your cash right now.
"Come back tomorrow." Yeah, unfortunately the crypto scammers have been ransomwared and they can't do anything about it.
But ultimately it keeps going around in circles and doesn't allow the user to withdraw their money.
Despite all this effort put into making this legitimate-looking site, it's a short-term thing, and the investment platform simply disappears.
You go to log in one day and it just isn't there.
Essentially, the attackers have come in, taken the cryptocurrency they've been paid, and they run off to start the whole process again.
Of course, being scammers, this isn't the only thing they do.
As well as stealing their cryptocurrency, as well as stealing personal details, they have also been seen to pose as a recovery firm, inverted commas, to help people get their money back.
Oh no. And this just involved scamming them for more money before disappearing again.
So you could have been scammed 3 times over at this point, which is, again, it's cybercriminals just preying on people who are unaware about things.
And then bing, up pops someone who says, we can help you. And in fact, they are just the scammers in a different guise.
Why aren't Facebook and Instagram and the others, why aren't they doing more to prevent these scammy ads from appearing?
These ones are taking other people's images, other people's profiles, and using them to trick people into making dangerous investments.
But you know, yeah, as you point out, a large part of this is through the same ecosystem. You know, Meta controls Facebook and WhatsApp.
I'm sure they probably do take down some of these pages that get identified.
Why can't they spend some of their billions protecting the social media spaces which they own as well?
So whenever you saw the blue tick, you knew this was likely to be absolute nonsense, which was being posted up there.
Originally, it was meant to show verified users, of course, but after a while you learned, oh no, those are the people I shouldn't pay any attention to.
But this is the future we've apparently chosen. I'm afraid it is.
So your team logs into tool 1 and then maybe tool 2, then into the thing that doesn't quite talk to either of them. By which point, whatever was happening has happened.
Pick of the Week is the part of the show where everyone chooses something they like.
Could be a funny story, a book that they've read, a TV show, a record, a podcast, a website, or an app, whatever they wish. Doesn't have to be security related necessarily.
Well, my Pick of the Week this week is not security related. My Pick of the Week this week is about a French academic called Florian Montaglier.
And he is seemingly one of the world's most ambitious self-promoters. Because in 2016, Florian won the gold medal of philology. Are you familiar with philology?
So, winning the gold medal of philology is quite a big deal.
And the ceremony where he was invested with this award was held at the French National Assembly, and government ministers turned up, Nobel laureates showed up, local papers reported he was in the running for the linguistics equivalent of the Nobel Prize, and he got it.
Got the gold medal for philology. Now, there's only one tiny problem with this.
The University of Philology and Education in Lewes, Delaware, where he claimed to have got his PhD, doesn't exist.
Yeah, he's probably just forgotten about it, dropped it down the back of the sofa.
Florian Montaglier is now accused of suspected forgery, use of forged documents, impersonation, and fraud generally. He denies any criminality.
Apparently, his view is that the medal isn't a forgery because he says a forgery implies that there is a genuine medal.
But as the genuine Medal of Philology doesn't exist, his medal can't be a forgery.
So anyone basically can go and order online a Best Podcast in the Universe Award, give it to yourself, and hold your own little ceremony quietly at home, or invite people from the aristocracy or the world of politics and journalists, give out a few drinks, few vol-au-vents, and off you go.
Anyway, that story is my pick of the week because it rather tickled me. But seriously, it is a great piece of research that the Romanian journalist did to uncover all of that.
So well worth checking it all out.
So there's at least a few others in the running. Danny, what's your pick of the week? So my pick of the week is a book called A Very Short History of Life on Earth by Henry Gee.
That's G-E-E. It's not just a single letter surname like some sort of cool person.
But yeah, he is a paleontologist and a science writer, and it does exactly what it says on the tin, really.
In about 220 pages, it's a history of the Earth of life on it from when it was first formed until today. And until, well, even posits on a future scenario, which I'll end this on.
I mean, it starts off with essentially the formation of the Earth, which is obviously billions of years ago, and life only really started, as I've learned in this, with cell life forms about a billion years ago.
But there's a lot of sort of coincidence in it as well.
It's like, no, we're only here because the Earth formed in the place it did in the solar system, survived a collision with another planet, which became our moon, and the gravity of that was involved in things happening to create life.
Something I hadn't thought about really is at one point evolution decided, okay, this is the front end of a cell and this is the back end of a cell. Right.
Which was apparently was a massive turning point for life.
I'll put it this way, an entry and an exit for those sorts of things, which then sort of literally made us move forward because we had a direction of travel now. Yes.
It's not all mouth-based. It goes through to the invention of the jaw. There's a few mass extinctions along the way.
It's only about, yeah, two-thirds of the way through the book that you actually get to the dinosaurs, which just shows the scale of time it's covering.
I mean, it's not something I learned from this book, but something I enjoy is in terms of the scales of time, we as humans are closer to Tyrannosaurus rex than Stegosaurus ever was, because Stegosaurus existed 150-odd million years ago.
Oh wow. And there's more time between that and Tyrannosaurus rex, 65 million years ago, than there is between Tyrannosaurus rex and us.
You go through to the end of the dinosaurs and how mammals evolved through to basically how apes and Homo erectus, Neanderthals, all evolved and that sort of thing.
And you basically end up with us: the end of the book is us evolving and creating civilization, which sounds all very good, and probably is for us in the short term, but the book then goes on to posit how humans are all going to be extinct in about a million years.
So I guess we might enjoy the time as we've got it. It's an interesting book.
It's quite existential as well, because you got that thing about humanity probably ceasing to exist at some point, and be it due to climate concerns, some sort of ice age, or another catastrophic extinction event.
Quite an existential read. Some of that might come from the fact that I'm turning 40 this month, so I'm thinking about, thinking a lot about age and that sort of thing.
We worry about all the problems in the world. This book basically suggests that in the end, none of it will actually matter.
So you are seeing what's actually happening inside organizations, not just what people are talking about.
And one of the loudest things being said at the moment is that security operations centers are on the way out and that AI's gonna replace them.
Oh geez, and this is gonna be out of a job.
I think it'll actually make us finally successful 'cause we've been battling the same problem for at least 25 years that I've been doing this.
You know, the buzzwords you've all heard before, right? Alert fatigue, retention challenges, skills gap, all the problems that we talk about. We've kind of dabbled in technologies.
Maybe machine learning will help. All that did was create more alerts.
You know, maybe automation and these playbooks will help and they were brittle and broke and created more work there.
I think AI finally is a capability that will allow us to accelerate and still not surpass, but at least catch up a bit to where the adversaries already are.
Are we just speeding up the alert overload problem instead of fixing it? Is more AI creating more alerts?
And I think sometimes that's a bit of bringing a sort of a sledgehammer to a thumbtack, right?
Sometimes you don't need to use an LLM to find some things, but AI can, of course, create more detections. I mean, it's creating things in all other industries.
Funny kind of anecdote is, you know, for recruiting, right?
I try to open a rec and my recruiters now actually have to slow how fast they open recs because they get flooded with bots that are applying now, right?
They're not even real humans and we have to sift through that noise as well. So I think AI is flooding for sure and it can create more alerts.
But I think what's important about a security operations center is, I wouldn't say it's dying.
I think I would call it reshaping or restructuring because what we push a lot for is this to not think of alerts as each one being individually actionable anymore.
Think of them more like interesting events, and then you want to run a secondary analysis on those with these autonomous agents to then surface what matters to the human.
So if you actually look at the classic model of a security operations center, it's always been structured like a pyramid or like a triangle.
We had this massive base of these Tier 1 analysts, you know, junior analysts.
They were new into the business, tasked with very little responsibility, pretty much just triage the alerts all day.
When you find a challenge, you build a package and then you would elevate that up to the next tier. Well, now a lot of that boring work can be automated.
That's where we believe AI really has a huge play, is taking a lot of that work out and simplifying it.
And then you can elevate those Tier 1 analysts to do more of that Tier 2, Tier 3 work where they can actually look at that evidence package, look at the developed kind of summary from an AI agent and make their analytical decision based on what they know about the business.
And so it actually, I think, allows our teams to get much more focused on what's the business or the mission of the company, what's the threat adversaries that are targeting that company, and less about sort of the vendor experts of a malware triage all day long.
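The reshaped pyramid Mike describes can be sketched in a few lines. To be clear, this is purely our illustration of the idea, not Elastic's actual pipeline, and the scoring rule is invented: raw alerts become correlated "interesting events", a secondary pass scores each group, and only the groups that matter get surfaced to a human along with their evidence package.

```python
# Sketch of "alerts as interesting events, not individually actionable":
# correlate raw alerts, score each group, escalate only what crosses a
# threshold. The grouping key and scoring rule here are assumptions.

from collections import defaultdict

def triage(alerts, severity_floor=7):
    """First pass: correlate raw alerts by host.
    Second pass: score each group and escalate only those worth
    a human analyst's attention, with the evidence attached."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["host"]].append(alert)

    escalations = []
    for host, events in groups.items():
        # Invented rule: worst single alert, plus a point per extra
        # correlated event - volume of related low-severity events can
        # matter as much as one severe alert.
        score = max(e["severity"] for e in events) + len(events) - 1
        if score >= severity_floor:
            escalations.append({"host": host, "score": score,
                                "evidence": events})
    return escalations

alerts = [
    {"host": "web-1", "severity": 3}, {"host": "web-1", "severity": 4},
    {"host": "web-1", "severity": 3}, {"host": "web-1", "severity": 4},
    {"host": "db-1", "severity": 9},
    {"host": "mail-1", "severity": 2},
]
surfaced = triage(alerts)  # web-1 escalates on volume, db-1 on severity
```

The point of the sketch is the shape, not the numbers: the Tier 1 "triage every alert" work happens in code, and the human only sees the pre-built evidence packages.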
Because they've got their hands on the same tools as us. Have you seen any particular clever uses of AI by attackers?
Endgame was really focused on sort of nation-state, government-focused attacks based in Washington, DC. We had a strong kind of focus in the US government.
These very, very targeted sophisticated attacks.
You know, cubicle farms of adversaries would spend millions of dollars and hundreds of people-hours to build that one extremely important exploit that would take advantage of a system for compromise.
And so what happened is CISOs around the world kind of understood that they typically weren't going to be patient zero of those types of sophisticated attacks.
They had to worry about the commodities, you know, phishing, ransomware, but these very, very targeted attacks that we'd see in the news, usually they'd see somebody else get hit by that.
And then, you know, in their ISACs they're part of, or, you know, they could actually learn about it or the products would add detections for it and they'd be secure.
Smashing security, right? Unfortunately now, because adversaries don't have a legal regulation problem or a risk problem, they put AI in use instantly, right?
They didn't worry about PII. And so they said, hey, look, let's just turn it on.
And they really developed an amazing pipeline. I think it's 4.5 times better click-through rates of phishing-based attacks now that are built through LLMs, because all the hallmarks for spotting them don't exist anymore, right?
You don't see typos, you don't see weird grammatical errors. They're also very targeted. But scarier than that, the ramp in discovery of CVEs.
We've seen, you know, CVEs, common vulnerabilities and exposures, these software vulnerabilities that then lead to exploits.
Every month is a record-breaking Patch Tuesday month from Microsoft of, hey, here's a bunch more things that were discovered because it's so much easier now to weaponize an AI model to go and help find and discover these vulnerabilities.
And even scarier than that, to then convert them, the high cost of building the exploits is much, much lower now.
We actually see these vulnerabilities get turned into exploits almost automatically by these models as well.
So what that means is now the cost of developing an attack is extremely low and the sophistication of developing an attack is low, which means that now cybercriminals and other groups that typically didn't have that kind of sophistication of a nation state have that power now.
And that makes every CISO now have to worry about being patient zero.
So I'm scared of adversarial AI, but I do feel hopeful that defensive AI is our secret weapon to help battle what's coming from that.
The attackers, they don't care about that so much. They don't care if their AI goes rogue or if their bit of vibe coding goes wrong. They don't have compliance departments.
It feels a bit of an unfair fight. How's that gonna play out over the next couple of years? Are things gonna get even worse? Are we gonna be able to keep up?
So if you look at when machine learning became pretty rampant on the adversarial side, you know, maybe in the 2010s era, we saw this idea of polymorphic malware, where we used to have antiviruses that had signatures that could identify malicious files, and they were all pretty commoditized.
And then all of a sudden adversaries used machine learning to craft and change those signatures every time a file was downloaded to make it polymorphic.
And all of a sudden it was beating all those systems and we had to come out with a brand new technology.
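The polymorphism trick described here can be sketched in a few lines of Python. This is a purely illustrative toy (the "payload" is harmless bytes, and real antivirus signatures are more sophisticated than a plain file hash), but it shows why mutating a file on every download defeats static signatures:

```python
import hashlib

def signature(data: bytes) -> str:
    """A classic static 'signature': just a hash of the file bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical malicious payload (illustrative bytes only).
original = b"MALICIOUS_PAYLOAD_v1"

# A polymorphic engine mutates junk bytes on every download, so each
# copy hashes differently while behaving identically.
mutated = original + b"\x90"  # one extra padding byte

known_bad = {signature(original)}

print(signature(original) in known_bad)  # True: the original copy is caught
print(signature(mutated) in known_bad)   # False: the mutated copy slips past
```

That gap is what pushed defenders towards behavioural and machine-learning detections, which the conversation turns to next.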
We started implementing machine learning detections and preventions on the endpoint itself, but that took time, right?
So the adversary had an advantage for some period of time before we caught back up. And I think that's where we are right now.
We're in the world of where the adversary has an advantage.
I mean, you can see we have these massive supply chain attacks happening pretty much every couple of months, newsworthy attacks coming out.
And I think we're going to keep seeing that until we get better at things like AI red teaming, you know, using AI on the defensive side.
We've had some success there, giving our researchers AI access to empower them to find these problems before adversaries do.
And it helped us with things like the Axios supply chain attack. Again, I have faith because I see this, and I have hope that we will catch back up.
But to your point, I think it'll probably get a little bit worse before it gets better.
And the public sector, in the US especially, is usually a little bit slower to adopt newer technologies.
But the new White House cybersecurity policy speaks about AI everywhere, right? One of the core pillars is AI as a defender.
So if the government's there, I think that's a good sign that the rest of the industry is pushing forward and leveraging AI.
Now what we have to do is avoid the buzzword bingo and the vendor FUD of, you know, putting AI in front of everything and not knowing what it truly means to have transparent and trustworthy AI within an organization.
But at least I think we're getting better now at seeing more companies trying to go down the path of implementing it properly.
Yeah, AI which logs in and does work, running around inside your network with its own permissions, potentially. These sorts of non-human accounts. Yes.
It's a struggle enough dealing with humans, isn't it? I mean, if we've got AI helpers which are acting on their own as well, what happens if one of those gets hijacked?
I think we're still learning as an industry, right?
You had this idea of tracking malicious use of credentials, right? Malicious insiders, which really could be just compromised credentials.
Even there, we're still as an industry getting better at finding them without producing a huge amount of false positives, because humans are not typically predictable.
When you add to that predictions of these agents going from the thousands to millions to billions in the next 5 to 10 years, that's an exponentially higher volume than the humans that we were having to manage and control within the organization.
So I think that is definitely a concern and ensuring that we have secure code by design from the outset, ensuring that we are limiting controls at the beginning of the implementation of the agent and not thinking about it afterwards saying, oh, we'll buy a product that will protect us later.
We have to be in the development process of these AI systems, implementing guardrails, implementing controls as the agents themselves are being built, not trying to layer something on afterwards.
But you're right, there already is a brand new attack surface here, which is this idea of harnessing and leveraging these agents to enact attacks on your behalf.
If you have an AI helper which is assisting you during a security incident, it might make mistakes just like a human.
Explaining all the reasoning steps and allowing you to understand how it made a decision is crucial.
And even better, it has to have a human-on-the-loop type of activity, meaning a human is able to review before anything destructive happens in the organization.
If you were a SOC manager today and you hired a junior analyst, you wouldn't give them full control to go kill a process and delete a host off the network.
There are already checks and balances in place to ensure that the human is properly trained and properly following the processes and procedures.
In the same way we view that happening with agents, there's going to be a set of autonomy you're okay with, and there'll be a set of things that are too far beyond the fold of risk.
You know, they're not making expert decisions and they have to have a review cycle above them.
They're still saving you a phenomenal amount of time because they're doing a lot of that work.
I've been reading some of your content and one thing which stuck out to me was everyone seems to be obsessing at the moment over which AI model to use, but that's not really the bit that matters, is it?
It's not so much about the AI model, it's the data behind it. What does that actually mean in practice for a security team?
There's lots of companies that have been started up that are trying to build SLMs or small language models that do bespoke specific actions.
Then now there's also the conversation around token savings, because of course the more we use AI, the more we realize there's a cost to it.
And so then it's, oh, do we need to use a different type of model for the first analysis that's less expensive, and then a secondary model for the deeper analysis?
Those are all good questions, but to your point, those are maybe question 10 on the list. And question 1 is, what is the model going to do in the first place?
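The tiered-model idea mentioned here (a cheap model for first-pass triage, an expensive one only for deep analysis) can be sketched like this. The cost units, the score threshold, and the alert shape are all invented for illustration; the triage function stands in for a small language model.

```python
CHEAP_COST, EXPENSIVE_COST = 1, 20  # hypothetical cost units per model call

def cheap_triage(alert: dict) -> bool:
    # Stand-in for a small, cheap model: flag anything above a score threshold.
    return alert["score"] >= 0.5

def route(alerts: list[dict]) -> tuple[list[dict], int]:
    """Run every alert through the cheap model; escalate suspicious ones."""
    cost, escalated = 0, []
    for alert in alerts:
        cost += CHEAP_COST
        if cheap_triage(alert):
            cost += EXPENSIVE_COST  # deep analysis by the larger model
            escalated.append(alert)
    return escalated, cost

alerts = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.1}, {"id": 3, "score": 0.7}]
escalated, cost = route(alerts)
print(len(escalated), cost)  # 2 alerts escalated; 3*1 + 2*20 = 43 units
```

Versus sending all three alerts straight to the expensive model (63 units), the two-tier route spends 43 — which is the token-saving argument in a nutshell.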
One of the things I would always lead with to a CISO is, you shouldn't buy my product if your process isn't working.
No product will solve a broken process or a lack of a process within a security operations center.
You have to first ensure that you know your business, your mission, when you find a problem, how do you remediate? How do you triage? What are the steps to take?
Because without that information, the AI is not going to make it up. It has to start from somewhere.
It'll have global knowledge, of course, but having that bespoke knowledge to your organization is really critical because not every company triages the same way.
You need that context of what's in your organization. So that's what I think the first piece is, make sure you define those processes.
And of course, we help people get through these and help to elevate those and pull them into the system. And then you're right about the second one as well, which is visibility.
It's even more so now. We talked about exponential data growth with the SaaS explosion, when COVID was underway and people were migrating to the cloud quickly.
SaaS data became exponential. Well, now with LLMs, you have another massive new source of data we didn't expect, which is all the logs of those systems.
And as you mentioned earlier, what are those non-human entities doing? That's now a whole other corpus of things to track and monitor.
So we have to figure out the data problem: how can we create, manage, and store information at scale in an affordable way, where we aren't making risk-based decisions based on budget? That's unfortunately what many SIEM vendors have forced companies to do over the years: say, hey, ignore that data because you can't afford it.
Especially now, targeting and understanding an organization is so much easier with AI. Adversaries will know what is and is not properly being analyzed.
And that's what they'll hide, they'll hide within those gaps.
What's your company doing differently in this space?
But the company itself is born from Elasticsearch, a developer platform loved by people all over the world. And so there are sort of two pieces to it.
The first is what are we doing as a business to help people build, develop, monitor, and manage these apps.
The benefit I get is all the cool stuff we innovate there, I get to utilize.
So on that side, we launched an agent builder, which can be tailored around what it can and cannot access.
This thing's allowed to go to, for example, VirusTotal for information, but not to Reddit.
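That kind of per-agent access scoping could be sketched as a simple default-deny policy. The class and the domain lists here are hypothetical illustrations, not Elastic's agent builder API:

```python
class AgentPolicy:
    """Scopes which destinations an agent's tools may reach (default-deny)."""

    def __init__(self, allowed: set[str], denied: set[str]):
        self.allowed, self.denied = allowed, denied

    def can_access(self, domain: str) -> bool:
        if domain in self.denied:
            return False
        return domain in self.allowed  # anything unlisted is refused

policy = AgentPolicy(allowed={"virustotal.com"}, denied={"reddit.com"})
print(policy.can_access("virustotal.com"))  # True
print(policy.can_access("reddit.com"))      # False: explicitly denied
print(policy.can_access("example.com"))     # False: not explicitly allowed
```

The default-deny posture matters: an agent that can reach anything not on a blocklist has a far larger attack surface than one restricted to a named allowlist.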
And then the really cool benefit that we have within the agent builder, we have a very easy skills framework that allows these things to be automatically triggered by each other.
I'll give you an example of what I mean by that.
In security, we take advantage of this by doing things like running a false-positive skill that goes constantly over your alerts, identifies things that are most likely not real-world problems, removes them from the corpus, and then runs the remaining pieces through a secondary skill.
So it triggers automatically a secondary skill we call attack discovery.
I talk about it as almost a serendipity moment. It doesn't use atomic indicators like IOCs, hashes, and domains. Instead, it uses behaviors.
It follows things like the MITRE ATT&CK framework and looks for where behaviors are linked based on certain attack profiles.
So it's: hey, I saw an execution attempt and an exfil attempt, and both of those are related to this adversary.
So this is most likely an attack underway and it'll kind of bubble that up to the analyst. And then that can trigger another skill to do threat hunting and on and on.
And the idea is we want the analyst to just get a final product.
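The chained-skills pipeline described above (a false-positive filter feeding into behavior-based attack discovery) might be sketched like this. The alert data, scores, and grouping rule are invented for illustration; real attack discovery is far richer than grouping by adversary label.

```python
from collections import defaultdict

def false_positive_skill(alerts):
    # Stand-in skill: drop alerts a model scored as likely benign.
    return [a for a in alerts if a["fp_score"] < 0.5]

def attack_discovery_skill(alerts):
    # Link remaining alerts behaviorally (by adversary profile, not IOCs).
    by_adversary = defaultdict(list)
    for a in alerts:
        by_adversary[a["adversary"]].append(a["technique"])
    # Two or more linked behaviors suggest an attack underway.
    return {adv: techs for adv, techs in by_adversary.items() if len(techs) >= 2}

alerts = [
    {"technique": "T1059 Execution", "adversary": "GROUP-X", "fp_score": 0.1},
    {"technique": "T1041 Exfiltration", "adversary": "GROUP-X", "fp_score": 0.2},
    {"technique": "T1110 Brute Force", "adversary": "GROUP-Y", "fp_score": 0.9},
]
discovered = attack_discovery_skill(false_positive_skill(alerts))
print(discovered)  # {'GROUP-X': ['T1059 Execution', 'T1041 Exfiltration']}
```

One skill's output automatically becomes the next skill's input, which is the "analyst gets a final product" idea: the chain runs end to end before a human ever looks at it.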
It's queued up and ready to go, because we have workflows where you can say, hey, you know, hit a button and now we'll fix it.
You, of course, could choose to let it fix it, but again, we're very strong believers in human on the loop, so that you can say, hey, pause here, let a human analyze.
That I think is critical.
And we can only do that because of agent builder; it's very easy for an organization to go into the agent builder and continue to tune and develop their own areas around that.
And the other key piece is just the nature of our business. You know, Elastic is a community open source-based company. We knew we had to meet our customers where they are.
And a big part of that for me, one of my largest verticals is the global public sector. And many of them can't connect to cloud.
Either they're unable to because they're on a ship in the ocean somewhere, or they're not able to based on risk, because they're in highly secured environments. Or it's performance.
And so we had to build AI in a way that was able to be used even if it was a disconnected model on-premise, an agnostic approach.
And so the path we chose was choose-your-own-model. Of course, we deliver one if you want it, but it's an agnostic approach where you can literally choose any model. Right.
You know, the belief for us is cloud-first, not cloud-only, right? How can we make sure that our customers are supported no matter where they go?
Say a listener has heard everything that you've been talking about, and maybe they're feeling a bit panicked, a bit daunted by it. What's the first thing that Elastic would help them tackle?
No offense to anybody out there in sales, but a lot of the time, if you're a small company and you try to get enterprise support, you call in and they're like, oh, you know, you're not tall enough to ride this ride, and they don't even respond, right?
You can go to our cloud, cloud.elastic.co. You can deploy an entire product and you could pay with a credit card per month if you want.
We believe that we have to have enterprise-class software for everyone.
And I think secondly, the thing that I'm really excited about: we actually just launched the first MCP application for security. The difference between a typical MCP server and an MCP app is that an app can actually include, you know, visualization elements.
So when you're in, let's say, Claude, and you're typing, hey, help me, it gives you chat back, but it can also give you an interactive UI, which is what a lot of people are used to.
We've actually built that directly into the product as well.
If you haven't done security before, if you're scared of all the, you know, pages you've seen in other solutions that look a little bit heavy, go to chat and say, hey, I want to stop the recent Axios supply chain attack, and it'll turn on the rules for you and get you operational and running.
But the reason we think that's so critical is because a challenge of the industry is that we sort of forced English as the natural language of security everywhere.
Every product is sort of defaulted to English, and many people don't think in English. They think in their natural language and have to translate.
Well, with chat, we have multi-language models. You can go in there and type in your language and it'll go and actually solve the problem.
So this idea of removing that translation barrier, right, is so critical.
If anyone wants to try out for free, there is a free trial of Elastic Security, the Agentic Security Operations Platform.
All you've got to do is go to smashingsecurity.com/elastic to find out more. So all that remains for me is to say, thanks very much, Mike, for joining us on the show.
I'm sure lots of our listeners would love to find out what you're up to and follow you online. What's the best way for them to do that?
By the time this comes out, an article might have come out with the head of cybersecurity at a Formula 1 team, who was very interesting to speak to.
But aside from that, all the places you usually do expect, Blue Sky, LinkedIn, search my name and journalist at the end and you will find me.
Not the stand-up comedian in New York, not the professional wrestler, not the South London murderer.
And don't forget to ensure you never miss another episode.
Follow Smashing Security in your favorite podcast apps such as Apple Podcasts, Spotify, and Pocket Casts for episode show notes, sponsorship info, guest lists, and the entire back catalog of 467 episodes.
Check out smashingsecurity.com. Until next time, cheerio, bye-bye.
You've been listening to Smashing Security with me, Graham Cluley, and I'm ever so grateful to Danny Palmer for joining us this week and to this episode's sponsors, Elastic, Vanta, and CoreView.
And also to the following fine folks who've been supporting us via Smashing Security Plus. They include Henry Waldman-Walshaw.
Sounds like he's the captain of a village cricket team. Henry, you're clearly a gent. Govindacharya, Scotia. Joining us from somewhere that may or may not rhyme with Nova.
Alex Tasker, Corrie, Geoff Ambler. That's Geoff with a G, which is the correct and superior spelling. Mark Norman, John Morris. That's John with no H.
John, clearly a man who likes to save ink. I can respect that. Stijn, giving us proof that vowels are optional, 'cause it's Stijn with a J.
And clearly he's really good at the Dutch version of Scrabble. Stepatronic as well, name that sounds like a 1980s Casio keyboard preset. Well, whatever it is, we love it.
And those are just a few members of Smashing Security Plus, which is our Patreon platform.
It means that those people get their episodes ad-free earlier than the general public, and they can have their names pulled out at random to be mercilessly mocked at the end of the show.
If you would like to join Smashing Security Plus, all you've got to do is head over to smashingsecurity.com/plus.
Thanks to all of you who do that and help support the production of this show.
You can become a patron, but you can also support the show in plenty of other ways that won't cost you a penny.
For instance, you can like and subscribe, you can leave 5-star reviews wherever you listen, and you can tell your friends about the show.
Go on, go and spread the word because every little bit helps and it really does make all the effort worthwhile.
Well, thanks very much and I hope to speak to you again this time next week. Until then, cheerio, bye-bye.
Host: Graham Cluley
Guest: Danny Palmer
Episode links:
- ICO fines South Staffordshire £963K over 2022 breach – The Register.
- US bank reports itself after AI customer data mishap – The Register.
- Hackers abuse Google ads, Claude.ai chats to push Mac malware – Bleeping Computer.
- Canvas hack: What we know about apparent cyberattack that impacted thousands of schools – CNN.
- Canvas hack: Company pays criminals to delete students’ stolen data – BBC News.
- Post by @amosmagliocco.bsky.social – Bluesky.
- Post by @sethcotlar.bsky.social – Bluesky.
- The Architecture of Deception: How a $187 Million Fraud Ecosystem Exploits Trust Across Australia and the United States – Group IB.
- The Fake Nobel that Duped the Romanian Academy – Scena9.
- A (Very) Short History of Life On Earth by Henry Gee – Waterstones.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
Sponsored by:
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get $1000 off!
- Elastic – AI is transforming security operations, but security is still a data problem. Learn how context-rich data drives faster, more reliable defence.
- CoreView – How secure is your Microsoft 365 tenant? Find out with CoreView’s free Microsoft 365 Tenant Security Scanner.
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Join Smashing Security PLUS for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky, or join us on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.
