
Fears are raised about cyber bioterrorists, there’s a widespread blackout for IoT devices caused by a cloud cock-up, and what role do strippers play in a revamp of the United States’ computer crime laws?
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by Mark Stockley.
And don’t miss our featured interview with Steve Salinas of Deep Instinct, discussing ransomware.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
We have had quite the year, so Graham Cluley and I have decided that any monies we receive via Patreon during the month of December 2020 will go directly to our local food bank.
We're doing this because there are a lot of people that are hungry and it's getting cold out there and it's Christmas.
If you're not a Patreon supporter, which is totally fine, I do urge you to look at your communities to see how you might be able to help bring a little bit more joy this season to those that are having a hard time.
And lastly, just a huge thank you for all your support this year. It has meant the world to us. Now let's get this show on the road.
Hello, hello, and welcome to Smashing Security, Episode 207. My name's Graham Cluley.
At least, I felt that way until you piped up and said, actually, he's not the poshest-sounding guest we've ever had, because we've had Dr. Jessica Barker on.
And Carole, we've got some news, haven't we, on the livestream front?
So Graham Cluley, talent and friends, maybe Mark, you would like to join us as a friend, but you might have to perform.
We're talking to people who are thinking about doing songs or street dances. So just saying, high caliber.
And as Carole said, it will be on December the 17th, Thursday, December 17th at 8:00 PM UK time. And what other times around the world, Carole?
Now coming up on today's show, Graham is going to scare us with research on cyberbiological attacks.
Mark Stockley laments a broken smart vacuum, and I find out why the US Supreme Court is talking about the Computer Fraud and Abuse Act.
And we also have a featured interview with Deep Instinct's Steve Salinas. We do a deep dive into ransomware and how it's impacting us all in 2020.
All this and much more coming up on this episode of Smashing Security.
They produce some of the most fascinating and wacky, crazy bonkers security research that's going.
In the past, they've described how data could be leaked by your computer monitor's brightness, how your headphones could be reprogrammed to record your conversations, how data could be stolen from air-gapped PCs through the fan or ultrasonic emissions— not that kind of emission, Mark— through your built-in speaker, and much more besides.
Really crazy, bonkers stuff. And now those researchers claim to have discovered a new end-to-end cyberbiological attack.
Today, you can literally log into a website, upload the DNA sequence you want, which is, you know, G-A-T-A-A-A-T-C-T-G-G-T-T. You know, DNA sequences, all those characters.
So they're worried about terrorists, for instance, going to these websites, uploading the DNA sequences of bioweapons or dangerous viruses, or changing a harmless bacteria into one that could be a deadly toxin.
So what they've done, the US Department of Health and Human Services have put together guidelines for scientists and for these websites for how to screen requests for synthetic DNA to stop naughtiness happening.
That would— imagine the transcribing of that. That could go terribly wrong as well, wouldn't it? What's so wrong with using the web?
If you answer these kind of questions incorrectly, it rings a few alarm bells. Like, would you rather pay in cash or cryptocurrency?
Do you want the DNA shipped to you in an unusual way? Or, you know— What does that mean? Do you have any special requests? Like, don't tell the feds what I'm doing, or, you know—
So they do, they first of all say, who the hell are you to be asking for this stuff?
So similarly, they have that kind of test. So you have to get through that hurdle first. And of course, a web form always, 100%, will weed out any ne'er-do-wells at that point.
But these are the official ones which they are encouraging these websites— because of course, anyone can set up a website if they want to.
And you may be some bizarre foreign state which is doing this as well. But let's not even get there. I hate this.
The next thing is, what on earth do you want?
So what they do is they look at this DNA sequence, which you are ordering, and they check whether it contains any sequences which have the potential to pose a severe threat to human life, animals, plant health, and other things like that.
Life, basically. Yes. So they've got, they've created some kind of database of common bad stuff and say, if we spot any of this, I imagine they do something like a grep.
Who knows? But if you did—
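The kind of naive "grep-style" screening Graham imagines here can be sketched in a few lines of Python. Everything below is illustrative: the blocked sequences are made-up placeholders, not anything from a real screening database.

```python
# Naive "grep-style" screening: flag an order if it contains any
# subsequence from a blocklist. The sequences below are made-up
# placeholders, not real pathogen DNA.

BLOCKLIST = {
    "GATTACACCGGTTA",  # hypothetical flagged subsequence
    "TTAACCGGAATTGG",  # hypothetical flagged subsequence
}

def screen_order(dna: str) -> bool:
    """Return True if the order passes screening, False if it's flagged."""
    dna = dna.upper().replace(" ", "").replace("\n", "")
    return not any(bad in dna for bad in BLOCKLIST)

print(screen_order("ATGC" * 5))                     # True: nothing flagged
print(screen_order("ATGC" * 5 + "GATTACACCGGTTA"))  # False: contains a flagged subsequence
```

As the researchers point out, this only works as well as the database behind it, which is exactly the weakness discussed below.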
Can I order a ladybug, but one that's 9 feet wide?
So I'm not a geneticist, but my understanding of genetic code is that there are very large parts of the human genetic code that we don't know what it does and are quite possibly useless, just kind of remnants from previous iterations of humans.
And who's to say if they're harmful or harmless if they were activated? I'm very concerned. Mostly concerned about the giant ladybug.
Specifically, the Weapons of Mass Destruction Coordinator.
They say that there's a few problems. One is there's no comprehensive database of pathogenic sequences. So all the bad stuff.
So they said the guidelines are fundamentally outdated already.
But more than that, they created a proof of concept cyber attack which could obfuscate a nasty toxic DNA sequence in such a way that it wasn't picked up by the screening.
So it would muddle it up, but you'd get the same result.
What they're talking about is they could actually infect legitimate laboratories who are asking for synthetic DNA.
And with a browser plugin, so if they managed to install a browser plugin, when the scientists cut and paste their DNA sequence, they could actually intercept that and inject some of their own nasty DNA in there as well.
And in their tests, 50 obfuscated DNA samples, 16 of them were not detected.
So around a third of their attempts to sneak effectively malicious, toxic DNA past the screening were successful, and that DNA could then end up in people's hands.
And they went through and they passed the tests.
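One way to picture why substring matching can be evaded (this is an illustration, not the researchers' actual obfuscation technique, and the sequences are made up): split a flagged sequence across pieces so that no individual order matches, even though the pieces reassemble into the full thing.

```python
# Illustration of evading naive substring screening: split a flagged
# sequence across two orders so neither order matches on its own,
# even though the pieces reassemble into the full flagged sequence.

FLAGGED = "GATTACACCGGTTA"  # hypothetical flagged subsequence

def contains_flagged(dna: str) -> bool:
    return FLAGGED in dna

# Split the payload across two innocuous-looking orders.
order_a = "ATGCATGC" + FLAGGED[:7]
order_b = FLAGGED[7:] + "ATGCATGC"

print(contains_flagged(order_a))  # False: order A passes screening
print(contains_flagged(order_b))  # False: order B passes screening

# ...but joined back together, the full flagged sequence reappears.
reassembled = order_a[-7:] + order_b[:7]
print(contains_flagged(reassembled))  # True
```

The real attack was more sophisticated, but the principle is the same: screening that only looks for exact known sequences misses payloads that have been scrambled and can later be reconstructed.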
As with much of the stuff done by this particular university when it comes to cybersecurity threats, it's not necessarily something you should lose too much sleep over, despite Mark's nightmare vision of a giant ladybug at night.
But clearly, better screening is required.
So just relying on a computer to look for particular sequences of characters is something which has to be maintained, and you have to make sure that someone isn't trying to slip something past you.
So there you go, we've all learned about cyber biosecurity today.
And I just have to fish the ball of fluff out. But you know, another way of getting rid of the ball of fluff is obviously to throw the whole thing out, buy a new one.
So naturally, it just stopped working on the three occasions during the year that we used it.
So what about your doorbell? What would you do if your doorbell stopped working, do you think?
Oh yeah, you've got to turn it off and on again. That's the first thing you learn, isn't it? What's the first thing IT tell you to do, people in IT support?
Now, the reason I'm asking the question is because on November 25th, so just last week, a bunch of people in the eastern United States suddenly found themselves with exactly these problems.
So, vacuum cleaners that didn't do what they were supposed to do, doorbells and thermostats that stopped working, podcasts that wouldn't upload.
Don't know if that affected you guys.
And of course, as Carole has guessed, what all of these things had in common was that they are all modern, internet-connected smart devices, or what we like to call part of the Internet of Things.
So in other words, they are part computer. Now unfortunately, what was affecting these devices is that the computer part was on the fritz.
And so computers, when they break, as we just discussed, what you need to do when it breaks is you just need to turn it off and on again. But there was a problem.
Because these things were all part of the Internet of Things.
So they're internet-connected devices, and what that means is the computer part, or at least a very important part of the computer part, isn't in the device.
In fact, it's not even in the same house. It's actually out there somewhere in the cloud. And so the question is, how do you turn the cloud off and on again?
This is a whole bunch of people in the eastern USA, suddenly all these vacuum cleaners stop and all these light switches stop and all these thermostats stop working.
Because in this case, the part of the cloud that failed was actually an Amazon service called Amazon Kinesis, which is one of thousands of Amazon cloud services that you've probably never heard of.
That it turns out your entire life depends on. It's amazing, isn't it?
So, I mean, it's a very interesting story actually, because you think about Amazon as, you know, back in 2000, Amazon was just— it was basically a shop, wasn't it?
It was a bookshop. But it was a very, very big bookshop, and it was on the internet.
And what they wanted to do was they wanted to start letting other people use their shop, sort of create websites for other people. Who could use their shopping technology.
And they realized that all their stuff was a total mess.
And so as they kind of cleaned it up and worked out how to get all the bits of their own company working smoothly together, they realized in about 2003 that they had this fantastic cloud infrastructure.
Yeah. And they could start selling that infrastructure to other people.
And I first heard about this in about 2007, and it was mind-blowing even back then, but it was just the whole idea of these sort of servers that weren't servers, they were just computing blobs.
They actually looked at their houses and how much of their services rely on Amazon.
Which in its own words, it's there to collect and process and analyze real-time streaming data.
So it's things like video and audio, but it's also telemetry from Internet of Things devices.
So what that means is you have apps and devices and websites that use Kinesis, and Amazon uses it as well. So its own services make use of Kinesis.
So there's a thing called Cognito, which is used for authentication that relies on Kinesis.
There's something called CloudWatch, which is used for monitoring, that relies on Kinesis.
And what happened was Amazon decided, because Kinesis is so popular, it needed to increase the capacity of Kinesis in its US-East-1 data center.
I can sense something bad is about to happen. They've produced this fantastically impenetrable very, very serious document to explain what happened.
But I am going to attempt to translate this into short words that, you know, simple people and Carole will understand. My attempt at an explanation for people like you.
And so what you want to imagine is that this isn't one thing. This is actually thousands and thousands of servers, right? Yeah. That are all receiving these requests.
And all of these thousands and thousands of servers are all aware of each other. So they all keep count of each other.
So you imagine that each of these servers has got two hands, okay? And they're counting on their hands all the other servers. And what happened was Amazon added too many new servers.
And so the existing servers, which have to keep count of all the other servers, they no longer had enough fingers on which to count all of the servers.
So what happens is you have this pool of computers, and they're all kind of watching each other going, "Oh, I see more computers are being added. I need to keep track of that one.
I need to keep track of that one." And it exceeded each computer's capacity to keep track of other computers.
So they ran out of fingers to count the number of computers on, and then they all stopped working. So how do you fix a problem like that?
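Mark's finger-counting analogy maps neatly onto a toy simulation. The per-server cap and fleet sizes below are invented for illustration; in the real incident the cap was an operating-system thread limit on each front-end server.

```python
# Toy model of the mechanism described in Amazon's write-up: every
# front-end server keeps one tracking slot per peer (in reality an
# operating-system thread), and there is a hard per-server cap.
# The cap and fleet sizes here are made up for illustration.

MAX_SLOTS = 8  # each server's "fingers to count on"

class Server:
    def __init__(self, name: str):
        self.name = name
        self.peers = set()

    def track(self, other: "Server") -> None:
        if len(self.peers) >= MAX_SLOTS:
            raise RuntimeError(f"{self.name} ran out of tracking slots")
        self.peers.add(other.name)

def mesh(fleet):
    """Make every server track every other server."""
    for s in fleet:
        for other in fleet:
            if other is not s and other.name not in s.peers:
                s.track(other)

fleet = [Server(f"srv{i}") for i in range(8)]
mesh(fleet)  # 7 peers each: fine, still under the cap

# Capacity is added: existing servers would now need 9 slots each.
fleet += [Server("srv8"), Server("srv9")]
try:
    mesh(fleet)
except RuntimeError as err:
    print("fleet-wide failure:", err)
```

The fix in the toy model, as in real life, is not to add more fingers on the fly: you have to raise the limit and restart everything, slowly, which is why the outage took most of a day to resolve.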
It reminds me of my time on Air Canada, every single flight I've ever taken where they have to stop and restart the television or whatever entertainment system due to some glitch somewhere.
I mean, limited sympathy, because obviously he did decide to buy lights that depended on the existence of a supercomputer. And I don't understand—
Now, back in the '60s, when they invented the internet, they had this idea of a communications network that was so resilient, it could withstand a nuclear attack.
The whole point of the internet is that it routes requests through the undamaged part of the network.
And then our generation has taken this amazing blueprint and gone, what we're going to do is we're going to put this massive centralized system smack bang in the middle of it, and all of your devices are going to be dependent on this centralized system, which occasionally will fall over and someone will have to kick, and then it will restart.
Maybe. What's clear from the write-up of it is that there was an awful lot of sweating. And, you know, I imagine there was a lot of whiteboarding.
Or what will happen if in an emergency we take 1,000 out?
I mean, if they had a fire or something, maybe you'd understand. But this was of their own volition. They added more computers and caused their own problem.
If there are simulations, the simulations are obviously incomplete.
But I think that the worrying aspect of this is this is just where we are now. Like the speed with which we are acquiring devices that are dependent on an internet connection. Yeah.
Not just that have one, but don't work without one. It's rather alarming.
Okay, if your Hoover doesn't work, well, Graham managed to get through three years of being a student without a Hoover that worked.
So, you know, it's not that bad, but there are all sorts of things connected to the internet now.
So if you own a Tesla car, for example, your Tesla car is effectively part of the Internet of Things.
Tesla can turn off your car whenever they want to, because it's dependent on that umbilical. And who knows what else?
This is the first time the Supreme Court has ever heard arguments for or against how the Computer Fraud and Abuse Act is currently designed.
This is America's main anti-hacking statute. Okay. And the Supreme Court are looking at the scope of the CFAA law and how it is and can be interpreted.
And so you have the court's nine justices have a range of views on the question. And I'm inviting you all to don your Supreme Court judge hats or robes, rather. A wig?
Now, is it just me, or do we feel like, as you've just talked about, Mark, we're going through this incredibly huge technical revolution, and one of the richest countries in the world is depending on a 35-year-old law?
So it turns out that many, many Americans and organizations in the US have inadvertently broken this federal law repeatedly because inside the 1986 law, there is a broad definition of what's considered hacking.
Okay, so quote, the law considers any intentional access to a computer without authorization to be a federal crime.
Okay, now as CNET point out, this is broad enough that sharing a Netflix password could be considered a CFAA violation. So that's a—
But it does mean that Americans are extremely reliant on how individual prosecutors and individual judges understand and decide to enforce this law.
And sharing your Netflix password is a naughty thing to do.
Would that mean that a 12-year-old who starts a Facebook page breaks the rules because she's basically not authorized to have an account? Does that mean it's a federal crime?
What if someone shares their logins with a third party in order to get IT support from someone?
And I think, as you'll see very quickly, there's a loophole here that is kind of scary for our industry, Mr. Cluley.
So the core issue that they're discussing is: should violating something like the terms of use on a website or a computer system lead to legal trouble at a federal level?
Using the CFAA as your umbrella. So that's one. Now there's a second angle to consider.
There's a group of people who are not happy about this 1986 law and its potentially incredibly broad remit.
And this is our cybersecurity researchers, because many cybersecurity researchers' work involves finding vulnerabilities in software and gadgets without a company's authorization.
So election security researchers at MIT uncovered issues with voting machines without the approval of the manufacturers.
So they wrote it up and presented at the USENIX Security Conference earlier this year.
Okay, and they called it 'The Ballot is Busted Before the Blockchain.' So this is a blockchain e-voting company known as Voatz, V-O-A-T-Z.
Oh yeah, these guys, piggybacking on an unrelated CFAA case, argued to the Supreme Court— Right? This was back in September.
That security research conducted by MIT on their machines that found several security flaws breaks the CFAA and should not be allowed, not because the research was wrong, but because they were not authorized to conduct said research.
Right. And that's what's being discussed right now.
It's a long time after that.
Do you know, it reminds me of if you ever read about people that try hacking into cars, because obviously, you know, cars are part computer now, just everything else.
The trouble that they have to go to, to avoid poking any of the bits that they're not allowed to poke, is really quite excruciating.
And when you read it, my reaction on reading that stuff is certainly, there's something wrong here, that they are assuming that they are going to be ethical with what they discover.
It seems there's something wrong with them not being able to look. And in whose interest?
But whose interests are being— I guess maybe there's concerns about intellectual property or something like that, that, OK, here's a black box and it's full of proprietary stuff and you're not allowed to look inside it.
But it seems to me the greater good is normally served by people being able to poke around.
And right, and soon after its tap dance in front of the Supreme Court, these guys responded publicly in an open letter saying, you know, security research is vital to the public interest.
And they say a broad interpretation of the CFAA, which is what we currently have, it risks undoing many of these positive advancements, being able to discover security vulnerabilities in election machines, for example, which is a big deal.
Voatz's actions threaten good-faith security research and are indicative of what may come should the courts decide that a breach of contractual terms constitutes a criminal CFAA violation.
They urge the courts to adopt a narrow interpretation of the CFAA.
As you plug your brain into your IoT device so it looks at your brain activity to make sure— maybe they should actually insert inside their terms and conditions some random words about unicorns or—
So, I mean, this sounds very interesting, Carole, but surely there's a concern that this could go too far the other way as well.
But they couldn't for whatever reason, a side reason, and they couldn't get you. They could get you on this federally.
And the way they'll get you is by sharing your Doctor Who password with someone.
So there is a silver lining if they do. I think they totally should. I think it's insane that we're relying on something that is almost 35 years old. It's insanity.
This guy says he's keen on this stripper, right? But he's nervous that she might be an undercover agent.
So he goes to a Georgia police officer, and he pays the police officer to look up her license plate in a confidential database.
You know, the way we see every single cop do in every single show that we've ever seen, right? Just check her out online.
Anyway, in a nasty twist, the unnamed guy, the one who was keen on the stripper and paid for the intel, was in fact working with the FBI.
So therefore, it's not that he had unauthorized access to this database, but he did misuse that database for other purposes.
So everyone's discussing this, and this is the case that opened up all these floodgates and why it's in the Supreme Court. So it all came down to strippers.
The Supreme Court has never ruled on this law. And then a story about strippers came along, and now it's like, yep, that's the one. We'll have that one.
Deep Instinct strives to prevent all known and unknown threats using deep learning, making detection and response automated, fast, and effective for any threat that cannot be prevented.
Check out a report by the Ponemon Institute which studied the cost savings of adopting an efficient prevention model. Go grab it at smashingsecurity.com/deepinstinct.
And thanks to Deep Instinct for sponsoring the podcast. This episode of Smashing Security is sponsored by LastPass.
Now, everyone knows about LastPass's password manager for end users, but it's also a great solution for businesses.
In fact, tens of thousands of companies rely upon LastPass to protect themselves.
LastPass Enterprise simplifies password management for companies of all sizes and helps you secure your workforce. So whatever the size of your business, go and check it out.
Go and visit lastpass.com/smashing to find out more. And thanks to LastPass for supporting the show.
Security training sucks: it's boring, users hate it, they aren't paying attention, and it doesn't work.
For security training to actually work, you'd have to find out what each person in the company is doing that's risky, send them phishing emails, monitor logs, check for passwords in Have I Been Pwned, and then you'd have to train them in a way that doesn't send them to sleep, and track what they're doing to see if it worked.
CultureAI make this amazing software that plugs into your company, runs your phishing campaigns, integrates with Slack, and tests whether your users accept phony MFA requests.
Yes, that's a biggie. And pulls in tons of other behavioral metrics from your existing apps.
It basically figures out what everyone needs to know and then creates personalized training that is not boring. And it even checks that it's working. And it's all done automagically.
And they've got a deal just for our listeners. Sign up at culture.ai/smashing and your first 50 employees are free. For life. Cool. More information, culture.ai/smashing.
Stop your whining, Graham.
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app, whatever they wish.
It doesn't have to be security-related necessarily. Better not be. Well, my pick of the week this week is something that I'm sure many people have already checked out.
I'm a few weeks late to this.
I did have a Smashing Security listener, a couple of them actually, contact me and saying, Graham, are you going to talk about this TV show on the podcast?
It seems right up your alley, mate. And eventually I got around to watching it. And I've nearly finished it.
I haven't quite— I've got a couple more episodes to go, but I can already tell—
And it's drama, she didn't really exist. And it's a really good show, it's quite enjoyable.
And the amazing thing is, of course, that normally when chess is presented on screen, it's all a load of old nonsense and it's not actually anything like chess.
But The Queen's Gambit, I'm sitting there saying, yeah, that is the Caro-Kann Defense. Yes, that is what, you know, the Queen's Gambit Declined or whatever.
And they're referring to things in chess and they're absolutely on the nail.
And maybe the reason for this is that Garry Kasparov, of course, is famous for being the former world chess champion and even more famous for being a past guest on Smashing Security, acted as a consultant on the show.
So well done, Garry. I know he's listening. Well done, Garry, for doing that. And coming up with a great TV series, The Queen's Gambit on Netflix. I enjoyed it.
Have either of you watched it?
I saw on Twitter in the last couple of days that security researcher Sarah Jamie Lewis discovered that they were able to exploit the popular Stockfish chess engine by feeding it malformed chess positions, which could cause it to crash and do naughty things when trying to find a best move, or even trick it into believing there were no valid moves when it appeared that there were.
So if anyone's interested in that, it's a bit nerdy.
And she does endless amounts of really, really interesting stuff, whatever she gets into. She does very interesting things with it.
So I'm not at all surprised to hear her name attached to this.
And on that point, I pass over to Mark Stockley for his pick of the week.
So it's a beautiful book. I listened to it as an audiobook over the summer. And it's by an unassuming Japanese research scientist called Masanobu Fukuoka.
And he basically turned away from science and back to quiet life on the farm after what he describes as a profound spiritual experience in 1937.
So he trained as a microbiologist and an agricultural scientist, and he became very disillusioned with what he saw as westernized ideas about agriculture and farming.
And this is back in 1937, remember?
And our capacity to produce chemical nitrogen for bombs ramped up massively during the war.
There was this huge chemical industry that existed after the war that wasn't there before, which is where the sort of chemical fertilizer industry comes from.
Anyway, he predates that, but his alarm bells were going off. So long story short, he basically pioneered all sorts of techniques for growing food that were way ahead of their time.
So he called it his do-nothing technique, but we would call it things like no-till, organic sustainable agriculture, things like that.
But it's a fascinating read because it's not just an instruction manual, it's kind of part manual, part memoir, so it's his life story as well.
But also there's this kind of dose of Eastern mysticism in there as well. It's like one person described it as Zen and the Art of Farming.
And so there's a very kind of Japanese quality to it where you can see that he's trying to be a bit kind of mysterious and a little bit mystical with it as well, as well as being very practical.
So it's a great little book.
I'm not a Telegraph reader, but I happened upon this podcast and I decided to give it a whirl. It's called A Bed of Lies, or Bed of Lies.
Now, it's kind of true crime, kind of investigationy. And so in the intro, Cara says of the show that it looks at one of the biggest scandals in recent British history.
So I'm holding back on agreeing with that or not, because I haven't finished it yet.
So you gotta just trust me because that's part of the way she kind of does it, is she holds off on telling us what the biggest scandal is until episode 3 or 4.
They were all part of a lively activist community a few decades back, and they found partners who shared their passions for activism and seemed perfect until it totally wasn't.
So big 180 happens.
I think they should have given it away at the beginning and then just gone with the story personally, but you know.
The first episode is hesitant, but I don't know, it's almost like the host and the producer are finding their feet or something, but it gets a bit pacier and the story's pretty juicy.
And it's, I don't know, the stories from the women are actually pretty honestly told. It's quite good. So if it sounds like your thing, check it out wherever you get your podcasts.
It's called Bed of Lies, hosted by Cara McGoogan.
And, well, I don't want to give anything away.
So hi, Steve Salinas, product marketing manager at Deep Instinct. Thanks for having me.
So for our listeners, there's this woman who looks very lovely, and then you look at her and her eyes are neon phthalo blue, and then you zoom into her eye and it's all sci-fi.
So I just want to know what the thought process was behind that.
And it kind of is an interesting way to get someone's attention.
So the idea of, you know, kind of the way a human brain works, the way that our brain makes decisions, we're using technology that works in the same way to solve cybersecurity problems.
So totally, it works.
So tell me first, tell me a bit about Deep Instinct and what you do?
So what that's kind of a long way to say is that we're applying artificial intelligence, which we interact with every day, all day.
We're applying a form of it, the most advanced form, which is known as deep learning, to identify threats as early as possible, which is known as pre-execution, using a deep learning neural network so that we can identify these threats and prevent them from ever having the chance to run in an environment.
So it's really our company, what we call it, we are a prevention-first company.
So our whole idea and philosophy around security is that the best way to protect yourself is to stop a threat from having the chance to run in your environment.
We offer a lot of different solutions that extend that, but that's where we start about preventing.
There are tons of things going on in there we don't have any idea. So if you think about when you're a child or anything, when you learned how to, let's say, ride a bike, right?
As a kid, you know, the first time you rode a bike, you probably fell off a lot. I did.
The phrase that we're all used to is "it's like riding a bike." That's not a mistake.
So even though you probably haven't been on a bike in years, if you saw a bike and you got on it, your brain would remember how to ride that bike. It just would.
Yeah, so our brains are very complex and they're very advanced.
So very smart people, a lot smarter than me, they looked at that, the way that the brain works, and they said, all right, we're gonna— we can take the same approach to solving lots of different problems.
So a very brief history about artificial intelligence: it's been around since the '50s.
If you're familiar with Alan Turing, you know, the whole idea of the Turing test was whether you could distinguish between a machine answering questions and a person.
So that was a form of artificial intelligence, very initial forms.
And then, in the '80s or so, this concept of machine learning came out where you could train a machine based on a set of data to come up with some sort of decision or to take some sort of action.
And machine learning has been around for a while. The latest version of artificial intelligence is known as deep learning. So this is where— this is what we do.
So what we do is we take vast amounts of data. Machine learning, deep learning, artificial intelligence, all about data.
So we take a lot of data around threats, and then we take a lot of information about good files, you know, files we use all the time, applications we use all the time.
Not just a few, millions.
And we have data scientists, the founders of the company, they created algorithms, very proprietary set of algorithms that can take this data, we feed it into the model, which is called a neural network.
And the only thing that we tell the model about this data is this set of files and data is known as malicious. And this set over here is good.
So we feed it in and we look at the results.
And after training (and we train in the cloud; it takes a lot of horsepower to train), it develops an innate ability, getting back to my bicycle analogy, to identify a threat.
It's really astounding in that once it's developed this ability, it doesn't need to know anything else about the files at all. When we point it at a file that it has never seen before, it's going to be able to come to a decision on whether it's malicious or what we call benign, or good, in milliseconds, and it's extremely accurate.
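The workflow Steve describes (label one pile of samples malicious and one benign, train on them, then score a file the model has never seen) can be sketched with a toy stand-in. To be clear, this is nothing like Deep Instinct's actual model: real systems train deep neural networks on millions of real files, whereas this toy uses synthetic byte blobs, crude byte-histogram features, and a single logistic unit.

```python
# Toy version of "train on labelled malicious/benign samples, then
# classify unseen files". Synthetic data and a single logistic unit
# stand in for real files and a deep neural network.
import numpy as np

rng = np.random.default_rng(0)

def features(blob: bytes) -> np.ndarray:
    """Crude stand-in for feature extraction: a normalised 256-bin byte histogram."""
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(len(blob), 1)

# Synthetic corpora: "malicious" blobs skew to high byte values, "benign" to low.
malicious = [rng.integers(128, 256, 64, dtype=np.uint8).tobytes() for _ in range(50)]
benign = [rng.integers(0, 128, 64, dtype=np.uint8).tobytes() for _ in range(50)]

X = np.array([features(b) for b in malicious + benign])
y = np.array([1] * 50 + [0] * 50)  # the only labels the model is given

# Train a single logistic unit by gradient descent (a real system
# would train a deep neural network here).
w, b = np.zeros(256), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * (p - y).mean()

# Score a blob the model has never seen: no further training needed.
unseen = rng.integers(128, 256, 64, dtype=np.uint8).tobytes()
score = 1 / (1 + np.exp(-(features(unseen) @ w + b)))
print("malicious" if score > 0.5 else "benign")
```

Even this toy shows the shape of the idea: the model is never told *why* a sample is malicious, only which pile it came from, yet it generalises to samples it was never trained on.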
So talk to me about ransomware. What is deep learning telling you about ransomware?
And the attackers are pretty ruthless in the way they deploy these threats: they can be targeted, or they can use automation to hit vast numbers of IP addresses and whatnot.
But the fact of the matter is, if an organization gets hit by ransomware, they could be crippled, right?
For anyone listening who isn't familiar, essentially what ransomware does is get into an organization; once someone initiates it and the ransomware starts, it will go through and encrypt all the data on your computer.
So think about all that data that's on your machine, stuff you rely on, use all the time. It encrypts it and it holds it hostage.
And what the attackers do then is they display on the machines a ransom note. It's very— I mean, it's called ransomware for a reason.
It's just like when a person gets held hostage: they get a ransom note that says, we have your data, you need to pay us X amount.
It's usually some sort of cryptocurrency. Bitcoin is their favorite.
Or we're either going to destroy your data, or some of the newer forms of ransomware, they say, we're not going to destroy it, but we're going to leak it, which is obviously very concerning to an organization.
You know, it's all sensitive data.
There are several you can use.
So I mean, there are definitely lots of stories that you hear. A few months ago, I believe it was a university (I'm not going to say the name because I don't want to get it wrong) doing some COVID research, and they got hit by ransomware and had no choice but to pay the ransom.
And it was, you know, in the millions of dollars because this was extremely valuable data that they needed for their research.
A university is going to have a lot more money to pay than, you know, just Joe Public.
But at any rate, there was an executive from some large financial institution that would regularly use his home computer to access his work email and things like that.
Well, he also had a son that would use that machine to go play online games.
And somehow the attackers were looking and realized, oh, this kid is using this machine; they did some sort of social engineering, got access, and figured out, oh, this is an entryway into this bank.
Before, it was all right: everyone in the company was behind the firewall, in buildings.
Now the firewall has kind of virtually disappeared and everyone's at home.
So now the smart attackers are identifying, oh, okay, all of these machines are now all over the place.
So if I can penetrate Steve's machine, I can get into the company. Let's face it, people's protections at home are a lot less sophisticated than an organization's would be, right?
The old school, which doesn't work very well, is I guess I'd call it legacy antivirus.
So this is where there's a piece of software that's installed on all the laptops just for simplicity, and it pulls down a list of known ransomware and its signatures.
So the software, if it identifies there's a file that matches one of these signatures, it doesn't let it run. But guess what?
The attackers know that, so they have gone long past that.
So it's easy for them to make small modifications to their ransomware, and it will completely evade that type of protection.
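To see why a trivial modification defeats that kind of protection, here's a toy sketch: a legacy "signature" is often little more than a hash of a known-bad file, so changing a single byte yields a file the blocklist has never seen. The payload bytes below are obviously invented.

```python
import hashlib

# A legacy AV "signature", at its simplest: a hash of a known-bad file.
known_bad = {hashlib.sha256(b"FAKE-RANSOMWARE payload v1").hexdigest()}

def signature_match(file_bytes: bytes) -> bool:
    """Block a file only if its hash exactly matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in known_bad

original = b"FAKE-RANSOMWARE payload v1"
tweaked = b"FAKE-RANSOMWARE payload v2"  # a one-byte change by the attacker

caught_original = signature_match(original)  # True: the known sample is blocked
caught_tweaked = signature_match(tweaked)    # False: the variant sails past
```

Real AV signatures are more sophisticated than a single hash, but the brittleness is the same in kind: any match on exact patterns can be evaded by mutating the pattern.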
So then there's the machine learning approach, which is the category of what I call the next-gen players.
They look at the features of ransomware, so it's a little bit better and they train a machine learning model to identify threats.
But then the attackers, who are really smart, have identified the features that these machine learning models use to identify the threat, and then they simply don't use them, or they change them.
They bypass two different types of protection. Now, our protection seems to be the best.
You know, one of the things that I do all the time is look at the latest ransomware and I run it against our model. And it's very, very effective because we're not using features.
Again, we're training against this vast amount of data and the deep learning neural network is making a decision. It's taking any sort of route to get to that decision.
It's hard to say impossible, but it's virtually impossible for an attacker to figure out what decisions it's making to avoid that.
So one good example: there's a really bad strain of ransomware that's been hitting a lot of healthcare organizations. I don't know how you pronounce it; I call it Ryuk.
It's R-Y-U-K. It's been causing major issues.
So the other day I pulled down, and there's a video on our website, I pulled down almost 100 samples of this ransomware, and I ran it against our neural network.
And one thing I want to be clear is once we've trained the model, no additional training happens. So the one that I was using was trained in November of 2016. So two years old.
And I said, okay, analyze all this ransomware. And it identified every single file as malicious. Wow. Yeah. So it's really powerful.
And that's why we're finding a lot of— to get back to the whole point of, all right, all my employees are all over the place, what do I do?
What you need is a protection that is what we call resilient, right? That doesn't need daily updates.
So I could provide this two-year-old model to most organizations in the country, the world, and they're going to get better protection probably than what they have today, even if I don't ever update it again.
We do also do other things, like behavioral analysis, to look for things that behave like ransomware, but it's very rare that ransomware would be able to get past that first phase.
But that's why we're seeing a lot of interest in our approach because it's resilient. It doesn't even require being connected to the internet.
All the decisions are taken on the machine that it's protecting and it doesn't need updates.
So I'm guessing rule number one, please don't lend your computers to your kids if you can avoid it. But what other advice do you have? What do you— what do you do at home?
What do you tell, you know, your family to do?
Or if you get an email that says, oh, I'm from a company you work with a lot, someone you buy stuff from, but it looks a bit weird, you know, oh, we need you to update your information here, or we're going to terminate your account: be very suspicious.
You can also look at the URLs.
If it has a really weird URL, if it says it's from a big box store, I don't know, a Target or something, but the URL is completely different, it's probably not them.
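That hostname check can be made concrete. A minimal sketch, with made-up example links: what matters is the registered domain at the end of the hostname, not whether a brand's name appears somewhere inside the URL.

```python
from urllib.parse import urlparse

def looks_like_brand(url: str, brand_domain: str) -> bool:
    """True only if the link's real hostname is the brand's domain or a
    subdomain of it; the brand name appearing elsewhere proves nothing."""
    host = (urlparse(url).hostname or "").lower()
    return host == brand_domain or host.endswith("." + brand_domain)

# Hypothetical example links:
legit = "https://www.target.com/account/update"
shady = "https://target.com.account-verify.example.ru/login"

legit_ok = looks_like_brand(legit, "target.com")  # True
shady_ok = looks_like_brand(shady, "target.com")  # False
```

The shady link starts with "target.com", which is exactly the trick phishers rely on: browsers (and this check) read the registered domain from the right-hand end of the hostname, so that link really belongs to `example.ru`.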
We're looking for information on coronavirus and what's going on there. We're getting ready for the holidays.
We're buying loads of stuff online because a lot of us are not allowed out. So we've kind of told the bad guys exactly how to get us.
Some attackers, they said because people were looking for information desperately, they actually had embedded malware in an actual map, an interactive map of COVID. No way.
And fortunately, we supported that file type, so if someone downloaded it, we would identify it as malicious. But not everyone would.
So, you know, the attackers, they don't have any qualms about doing things like that.
You know, just maybe just take a second. It's okay to think for a minute or two and ask, you know, a trusted friend who knows more about this stuff if you're not sure.
There are lots of online solutions or ways to back up your data, you know, they're not very expensive.
So if you do happen to get hit by ransomware, well, I always say don't pay the ransom.
You know, they're only doing it because they get paid.
So if you have other mitigation strategies, have backups, and, obviously if you're an organization, use a solution like ours, the lower we can make the incentive. If the attackers are not getting paid, you know, the attacks will go down in the long term.
And don't forget, if you want to be sure never to miss another episode, subscribe in your favorite podcast app such as Apple Podcasts, Spotify, or Pocket Casts.
If our last session was anything to go by, where hundreds of you joined us, asked questions, made friends with other Smashing Security listeners, it was just awesome.
And if our plan for this one on December 17th comes together, it's going to be a YouTube sesh to remember.
We really hope to see you there, guys, because we need to see this shitshow of a year out in style.
And remember, Patreon supporters, any support we receive via Patreon during the month of December 2020 will go directly to our local food bank.
And can I urge you all to look at your communities to see how you can help bring a little joy this season to those who are having a hard time? There's some awful stories out there.
Lastly, a huge shout out to this week's Smashing Security sponsors: Deep Instinct, Sophos, and Bitdefender.
For details on past episodes, sponsorship details, or how to join our Patreon community, check out SmashingSecurity.com.
Plus, you'll find all the details for how you can get in touch with us.
Hosts: Graham Cluley and Carole Theriault.
Guest: Mark Stockley.
Show notes:
- Smashing Security's Christmas 2020 live stream — Join us on YouTube on Thursday 17 December.
- Increased cyber-biosecurity for DNA synthesis — Nature Biotechnology.
- New cyber-biological attack can trick biologists into generating dangerous toxins — News Medical Life Sciences.
- Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA — Department of Health and Human Services (PDF).
- AWS: Amazon web outage breaks vacuums and doorbells — BBC News.
- The Supreme Court will finally rule on controversial US hacking law — Ars Technica.
- 18 U.S. Code § 1030 – Fraud and related activity in connection with computers — Legal Information Institute, Cornell University.
- Online-voting company pushes to make it harder for researchers to find security flaws — CNET.
- The Supreme Court will hear its first big CFAA case — TechCrunch.
- Response to Voatz’s Supreme Court Amicus Brief — An open letter from the security community.
- The Queen's Gambit Netflix series — Wikipedia.
- Twitter thread by Sarah Jamie Lewis.
- Win by Segfault and other notes on Exploiting Chess Engines — Sarah Jamie Lewis.
- One-Straw Revolution — A book by Masanobu Fukuoka.
- Bed of Lies podcast — The Telegraph.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
- Support us on Patreon!
LastPass Enterprise makes password security effortless for your organization.
LastPass Enterprise simplifies password management for companies of every size, with the right tools to secure your business with centralized control of employee passwords and apps.
But, LastPass isn’t just for enterprises, it’s an equally great solution for business teams, families and single users.
Go to lastpass.com/smashing to see why LastPass is the trusted enterprise password manager of over 33 thousand businesses.
CultureAI isn’t just another security awareness training provider. It helps you measure and improve every end-user’s cyber security behaviour, providing a management system for IT, Security and Awareness teams.
Learn more and try it for yourself at culture.ai/smashing
Most people agree that the most effective way to reduce the cost of an attack is to prevent it from happening in the first place!
Deep Instinct strives to prevent all known and unknown threats using deep learning, making detection and response automated, fast and effective for any threat that cannot be prevented.
Check out a report by the Ponemon Institute, which studied the cost savings of adopting an efficient prevention model. Go grab it at smashingsecurity.com/deepinstinct
Follow the show:
Follow the show on Bluesky at @smashingsecurity.com, on the Smashing Security subreddit, or visit our website for more episodes.
Remember: Subscribe on Apple Podcasts, Spotify, or your favourite podcast app, to catch all of the episodes as they go live. Thanks for listening!
Warning: This podcast may contain nuts, adult themes, and rude language.


