
Author and broadcaster Tim Harford joins us as we discuss the merits of robotic canine security guards, deepfakes, and the curious tale of an art forgery.
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault.
And don’t miss our special featured interview with James Moore from CultureAI.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Shout out this week goes to Mansui Dejean, Jacob Lofgren, Alexander Hoogerhuis, Donald Wilson, David Warren, Shelter, Herman A., Emily Lau, and special mention goes to Jan Torkinton, Ask Your Husband Why, also Heartful Dodger.
Thank you very much. If you want to join this amazing group of Patreon supporters, go to smashingsecurity.com/patreon. Now let's get this show on the road.
Did you ever watch the children's television program Willo the Wisp?
Hello, hello, and welcome to Smashing Security, Episode 206. My name's Graham Cluley.
It's Financial Times columnist Tim Harford.
It's all about things going wrong, mishaps, catastrophes, fiascos, some of them hilarious, some of them very, very not hilarious.
But in each case, the idea is there's some geeky lesson. There's something to be learned from the stories of disaster.
And some of the disasters are, I think, security adjacent, so conmen and forgers and that sort of thing that I think are potentially of interest.
It is a guide to thinking clearly about the world, and my argument is that one of the things we need in order to think clearly about the world is numbers, good solid data.
But another thing that we need is to get a handle on our own filters and biases and mental shortcuts.
So that's what the book's about, and there is a story in it that I think is relevant to my pick this week, and there's also a story in it that's relevant to your pick, Carole.
So exciting! Yeah, that'll give us an excuse to talk about the book every 3 minutes or so.
Now coming up on today's show, Graham turns his interest to an Air Force base in Florida with an unusual security system. Tim will tell us of a notorious forger.
And I have a tricky misinformation dilemma for us all to contemplate. And we have a featured interview with James Moore, the CEO of CultureAI.
All this and much more coming up on this episode of Smashing Security.
Well, if you were to do that in Florida today, then you might find yourself in something of a sticky pickle because there is an Air Force base in Florida which has added some new security guards to patrol its facility.
And they're not using humans. They're not even using geese. They are using robotic dogs.
Because if you were going to try and protect something with an animal, I'm not sure a dog is the first thing I would think of when making a robot.
I would think of maybe something like an alligator or a rhinoceros. Much more terrifying, I would say, than a dog.
Well, you guys may not know this. Did you ever watch the children's television program Willo the Wisp?
He just had a sort of a face on the end of his body, if I remember rightly. And these creatures are just like robo-Moogs but substantially less cute than the Moog.
But when I think of a dog, I think of something like a Rottweiler or one of those bull terrier things, you know, which is basically a chainsaw controlled by something which has a brain the size of a walnut.
And I think that's kind of terrifying, isn't it? That kind of dog. Anyway, let's get back to the point. Tyndall Air Force Base in Florida.
They are one of the first bases to incorporate these semi-autonomous robot dogs into their arsenal. These mechanical pooches have been developed by a couple of companies.
Ghost Robotics are doing the hardware. Another company is doing the augmented reality. And that company's called Immersive Wisdom. Because what happens is you put your pooch out—
But these dogs, if we're gonna call them dogs, have 360-degree cameras on them, and they can be monitored remotely by people wearing those sort of VR headsets.
So they can see everything that the dog can see. And they can look around. It's almost like they've dressed up as the dog and are going around on all fours.
So the dog's driver, the real human, can use the speaker built into this robot dog to talk to any intruders. I'd say, "What on earth are you doing here?
Should you really be in here?
But it seems to me that that's somewhat inefficient, because if you see on your camera and identify that someone shouldn't be there, rather than send humans in to deal with this person, who may well run away or put themselves somewhere which the dog or the alligator will find difficult to get to, wouldn't it be better, and I can see this happening in the future, to equip these dogs with tasers or something like that, which the security guards could operate?
And as such, you wonder, you know, why can't they just put it in a drone? Can't you just have several cameras and put them on posts? I mean, it's a bit odd.
Robot dogs, provided they're charged. Apparently they have a range of about 7 miles before they have to return to their kennel to be recharged.
But maybe over time they're thinking this actually will be a money saver.
And the cat can snoop around and spy and chomp through wires or pee on electricals, whatever I want it to do, or take photographs of the secret plans or the plane that they don't want photographed.
Surely, especially it being a military base, they're going to at some point sell a tape on some kind of missile or something.
They've even got pictures of them sort of rolling on their backs and being tickled on their tummies, and some of the army officers are sort of patting them like they're a dog.
So I think there was one in Japan, it was a seal, and they would give it to the people in the home and they loved the seal, you know, and they would share it around amongst the residents.
So I think a face helps you understand it as a being, and I think it confuses the brain a bit, you know, when it has big eyes and it's looking at you.
So in a way, maybe it's better that it doesn't have a face. It's not pretending to be anything other than a machine, a CCTV camera on four legs.
It's kind of like, you know, it's a sort of box with— it's very utilitarian and it's very eerie indeed, the way that it moves. Very unsettling. I would run.
Why?
I think it's more about if they're somewhere dangerous, where they don't necessarily want humans working.
But if they had a device popping around, visiting different things and seeing if anything bad was happening, then that maybe is a better idea.
In Japan, they've been really worried about wild bears. So I heard a story earlier this week about—
There's been a real dearth of those lately. So they've been venturing closer to humanity and into farms. And so there is now a robotic monster wolf, which is scaring away the bears.
And I've put in a little link. I'll put it in the show notes so people can check it out.
And I don't know, it doesn't— on this show, we do tend to be— well, I tend to be a bit of an old fogey. I don't really like technology. And this sort of scares me a bit.
And I'm gonna make a Cautionary Tales episode about it, for those people who want to subscribe.
The story begins in the 1930s in Monaco, where a charming Dutch lawyer called Gerard Boon shows a painting to the world's leading art critic, a gentleman called Abraham Bredius, who is in his 80s and is nobody's fool.
He has debunked many a forged artwork. He is an expert on Rembrandt and an expert on Vermeer, and Gerard Boon shows him this painting and says, 'We think it might be a Vermeer.
Can I have your opinion?' And Bredius is completely spellbound by this painting.
He writes a piece for Burlington Magazine, the art magazine, saying, 'When I first saw this work, I had difficulty controlling my emotion.
It is not only a Vermeer, it is Vermeer's greatest work.' And anyway, well, you can see where this is going. It wasn't a Vermeer. It was a rotten fake. It doesn't look like a Vermeer.
That's the weird thing. You look at it and you look at a Vermeer and you go, well, I don't know much about art, but those two paintings don't look anything like each other.
What has fascinated me about this story, and what I think is so instructive, is how did Bredius, this incredibly well-respected, incredibly expert guy, how was he fooled by a forgery that wouldn't have fooled me and wouldn't have fooled you?
But no, what happened was, Bredius had a theory, he had a pet theory about Vermeer, who's quite a mysterious figure, amazing painter, not that much known about his life.
And he had a theory about Vermeer, and there's a gap in Vermeer's work where he didn't— he painted some early paintings, he painted some late paintings.
What was he doing in the middle of his life? Where are those paintings? Who influenced those paintings?
And he'd written about this, and the forger, who was a very clever little man called Han van Meegeren, the forger basically painted a painting that fit Bredius's preconceived ideas of what Vermeer might have been doing, who he might have been imitating.
And it contained all kinds of very subtle clues that I would not notice, you would not notice, but Bredius noticed because Bredius is the world expert.
It uses Vermeer's color palette, the pigments, the dyes, all perfect.
And because he was able to identify all of these little pointers, and because this was confirmation that he had been right all along, he fell for it.
And then once he fell for it, everybody else fell for it, because he's Abraham Bredius.
And this links into the sort of social science that I talk about in the book that basically says if you are motivated to reach a particular conclusion, if you want to believe it, being more expert, having more knowledge, more intelligence, more information doesn't help you because you simply deploy all of that intellectual armory to reach the conclusion you want to reach.
No, but it's self-fulfilling as well based on your education because then you can go through and you can go, "Oh, but you see, I knew that he's using the Zorn palette," or, "I knew that they were using this and I was aware of all these points, therefore it must be right." And if someone plays you at your own game, you're screwed.
There is a sort of social science literature on this which I describe in the book that gives people the task of evaluating certain political arguments and on hot-button issues like abortion or same-sex marriage, gun control, things that Americans have very, very strong views about.
And basically, people who have more knowledge about politics are more subject to biases in their reasoning.
They find it easier to generate ideas that support their own conclusions, harder to generate ideas that support opposing conclusions because the whole kind of cognitive arsenal is being focused on reaching the conclusion you want to reach.
So it's not just about technical expertise. Thinking clearly is about noticing your own emotional reaction.
And Bredius even said, "Oh, I had difficulty overcoming my emotions." He also basically said, "It doesn't look anything like a Vermeer, but it's as great as Vermeer. But I know it must be.
It must be." It was incredible.
'Cause once you've done one, you can produce all these others that look similar.
"We have found this treasure trove of stolen Nazi art, and it includes a Vermeer." And it's the art collection of Hermann Göring, Hitler's right-hand man.
And the Germans, being Germans, kept the receipts, and they say they bought it from you. And so Van Meegeren was up for treason. He could have been hanged for that.
And so he had to prove that in fact he had forged it rather than simply obtained it in some other way, stolen it and sold it to the Nazis.
The Dutch were sick of the war, they were sick of collaborators, they were ashamed. Anne Frank wasn't the only Jew who was shipped out of the Netherlands to the extermination camps.
People just wanted a hero. And here's Van Meegeren, and he's kind of done one over on Hermann Göring.
Actually, when you look at the evidence, he was probably a Nazi, and he was certainly very friendly with Nazis and producing all kinds of antisemitic work and just a really nasty character.
But when he died, he was the most popular man in the Netherlands, other than the prime minister, who bizarrely was extremely popular as well.
He was a folk hero, because not only did he sell all these fake Vermeers, but he then sold the story to the Dutch people of this guy who poked Hermann Göring in the eye.
And people would rather have believed that than the truth, which is that he was a really nasty piece of work.
Whenever we see a claim on social media, we see a newspaper headline, very often we'll have an emotional reaction.
We'll be like, oh, that can't be true, or oh, this proves I was right.
And what I'm saying, you can't overcome that reaction, and you shouldn't be trying to suppress your emotions, but you should notice them.
And if Bredius had been a little bit more aware of his own state of excitement, and noticed that and thought, hang on a minute, maybe I need to calm down, then maybe he wouldn't have been fooled.
And of course, we all know that some security exploits, the— I'm not sure what you call it— the human factors hacking, you know, where you're— what do you call that?
That is all about understanding people's emotions and getting people to feel they need to make a decision in a hurry or getting people to feel really comfortable.
Manipulating people's emotions is a great way to get them to do something that they will later regret.
And Tim, you're obviously a very, very smart guy. And I know that just from listening to More or Less and being a diehard fan. So, I have a dilemma for us all to noodle on.
As this butthole of a year nears a close, we are all looking at 2021 with, I don't know, I'd say, for me, incredible hope.
I don't know if you guys have some diehard wishes for the next year that you're kind of praying come true.
And so Carole has very kindly lent me her car.
So these are people that have never existed in real life. And so question number one, are these deepfakes?
Because it's not of a real person, and it's not duping people into believing that they've said something that they haven't, but it is the image of a person.
So, I mean, I just think whether it's trying to pretend to be Jeff Goldblum or it's just a pretty face selling bitcoin or perfume, the idea is that you identify with that person, right?
You kind of— that person's helping you believe something or buy something or do something. They're often used by organizations to help us to, you know, get things done.
And so one question is, you know, how is that really different from hiring an actor to, you know, sell your chocolate bar or sell your newspapers?
Is it any worse to use these non-people people?
So just the fact that you can mass-produce these images that seem to be people is, I'm sure, something that can be worked out.
So, quote: on the website Generated Photos, you can buy a unique, worry-free fake person for $2.99, or 1,000 people for $1,000.
If you just need a couple of fake people for a character in a video game or to make your company website appear more diverse, you can get their photos for free on thispersondoesnotexist.com.
Hey, and if you want your fake person animated, a company called Rosebud AI can do that and even make them talk.
But we don't actually have them, so we're just going to fake them." So this is—
So if you click on that link—
So you can change genders, you can change race and ethnicity, you can change a person's perspective, where they're looking in the picture, their mood, their age, their eyes.
It's shocking.
So for example, there's a guy I'm looking at who looks very convincing, except that one hinge on his spectacles is different from the hinge on the other side of his spectacles.
And there's a lady with two odd earrings. And it's that sort of thing that—
It might be quite fun to make my eyebrows slightly less bushy. And oh my goodness, I can change my gender. Look at that.
So where it's unclear what's real and what's fake, the fact that people are simply aware that there's misinformation floating around actually benefits those who create and spread fake information.
So let's say I was talking to someone and they were saying that the royal family were blood-drinking, flesh-eating, shape-shifting extraterrestrial reptilian things in human form.
The liar's dividend says that I'm actually lending it more credibility simply by being aware of the concept of this lizard elite conspiracy theory.
We can't believe our eyes. We don't know if those are real people or not real people.
But once it's been said, then it becomes a little bit more believable, or similar conspiracy theories might be believable.
That's the sort of principle of what you're saying, the liar's dividend.
So you think about the Access Hollywood tape that came out just before the 2016 presidential election and nearly doomed Donald Trump's chances of getting the presidency, remember?
But if that came out now, Trump would just be able to say, 'That's not my voice on the tape, it's fake news.' At the time, you know, there wasn't enough currency around the idea that you could fake an audio recording.
I mean, you can fake an audio recording, you can fake photos.
But it's not so much, "Oh, people will be fooled by these deepfakes." It's the idea that people won't believe things that they should believe. Exactly.
Because the deepfakes create deniability.
And there is, even before we get to Van Meegeren in my book, the introduction of the book talks about a very famous statistical book called How to Lie with Statistics, probably the most famous book about statistics ever written.
And it's a very witty kind of debunking of all kinds of statistical misinformation and all the different ways that people will fool you.
The argument I make is that actually this might not be that helpful, even though everything this guy Darrell Huff, the author of the book, is saying is correct.
The fact that all the emphasis is on misinformation and there's no acknowledgement that you might use statistics to actually figure something out or tell something true about the world, that's corrosive.
And in fact, Darrell Huff ended up using stories from his book to shill for big tobacco and to try to attack the epidemiologists who were arguing that smoking is quite likely to give you lung cancer.
And he deployed the same ideas in his book to say, well, you know, you can't really believe all this kind of— all these medical statisticians. We've had enough of experts.
It took us to a very, very dark place. And I think the deepfakes are a similar thing. It's not that we'll believe stuff we shouldn't. It's that we'll refuse to believe stuff that we should.
So there are experts like you, Graham, and you, Tim, and academics and technologists and journalists all around the world who have been advocating that the general public learn about misinformation and deepfakes, to make sure that they're forearmed, or better armed, against malicious use of these types of communications.
But have we all been duped, right? Could it be that the more that we talk about it, the more validity we give to nonsense because we're basically saying it exists?
Whoever believes more is the winner of whatever the argument is.
They have to go, hang on, what's going on behind this? And ask a few extra questions, get a second opinion.
And if we're not really interested enough in the world to do that, then we've got problems.
I have seen people that I would say categorically are of very sound mind, sound reasoning. And when they're caught by the bug, it is really hard.
Like, I mean, they don't even, you know... when they show me something and I go and just do a tiny bit of Googling, I can find a debunking immediately.
And these are smart people that I think under normal circumstances would go and double-check.
But somehow there's been some pre— like maybe the person who said it has been pre-vetted by them as someone worthy of trust or something.
There's something weird that happens, but it's very frightening. I've seen it in my own circles and it's shocking. And it may be happening to me. That's the other thing.
Like, how do I know? I'm an emotional being.
Well, I just think I just need to believe more.
For security training to actually work, you'd have to find out what each person in the company is doing that's risky: send them phishing emails, monitor logs, check for passwords on Have I Been Pwned, and then you'd have to train them in a way that doesn't send them to sleep, and try to track what they're doing to see if it worked.
Who's got time for any of that?
They make this amazing software that plugs into your company, runs your phishing campaigns, integrates with Slack, tests if your users accept phony MFA requests, that's a biggie, and pulls in tons of other behavioral metrics from your existing apps.
It basically figures out what everyone needs to know and then creates personalized training that is not boring. And it even checks that it's working and it's all done automagically.
And they've got a deal just for our listeners. Sign up at culture.ai/smashing and your first 50 employees are free for life.
In fact, tens of thousands of companies rely upon LastPass to protect themselves.
LastPass Enterprise simplifies password management for companies of all sizes and helps you secure your workforce. So whatever the size of your business, go and check it out.
Go and visit lastpass.com/smashing to find out more. And thanks to LastPass for supporting the show. And welcome back. You join us for our favorite part of the show.
The part of the show that we like to call Pick of the Week.
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app. Whatever they wish.
It doesn't have to be security-related necessarily.
I discovered that Ravensbourne University London, who I think are based in Greenwich, have done something rather remarkable in coordination with the BBC: they have created something called the BBC Motion Graphics Archive.
And you may be wondering, well, what is the BBC Motion Graphics Archive?
The thing which seems to connect all of these title sequences is that they largely involve some sort of graphical element. So I've done a quick perusal.
And there's some marvelous old things, things which you won't find up on YouTube, but you're able to download.
So I looked up all kinds of shows. I found Emu's Broadcasting Company from back in the '70s. I enjoyed that. I, Claudius, one of my favorites.
Discovering Portuguese wasn't a show I watched regularly, but I was interested in things I read about the thinking behind it.
Who cares what the viewership is? Let's just try, you know, more anti-bloomers, for example, is another one.
And I love the fact that these have been preserved and now they're available digitally to everybody.
And I have to say, although I don't think it's up on this archive, the original 1963 Doctor Who title sequence, which was done in a remarkable way through a howl-around technique of having a camera pointing at its own monitor and basically picking up the feedback and the weird distortion, I think that was a remarkable title sequence for way, way back then.
So I have to say Doctor Who, but there are some other crackers. I'll tell you what I found wasn't in there, though: Willo the Wisp. I just did a quick look, and it's not to be found.
Oh dear. Very disappointing.
I also like David Allen's productivity bible Getting Things Done, and so I was interested to see Cal Newport reflecting on GTD in The New Yorker, of all places.
And so there's lots to enjoy about the piece, but what he really gets you thinking about is at what point did it become your problem, my problem to be productive versus a sort of systemic problem.
In manufacturing, being productive was regarded as a system thing. Like, a factory has to be productive, a production line, an assembly line has to be productive.
We need to get our processes all sorted. And Cal says the same is true for programming.
But for a lot of knowledge work, it's all just, well, you know, everything goes to email and we'll figure it out. And it's all very ad hoc.
And if that feels very stressful and everyone feels overwhelmed, that's an individual problem to sort out rather than a system problem.
That's what he's questioning and getting us to try and rethink.
You know, it was always, what are you doing? What are you doing now? So I don't know if that was of the time in the '70s, '80s, and it's just kind of come through a generation.
But it's certainly true.
But the problem with email though is, you know, the answer to the question, what are you doing now, could always be, well, I'm just going to do some email.
There's always more email. And maybe that's not really a very good way of getting stuff done.
I have a sort of automatic posting set up, so stuff from my blog will go to a Facebook page, but I don't have anything to do with that. And Twitter, I'm on less than people might think.
So I will pop up and I will put some links to my articles and various other things, and I'll disappear again.
I don't find it— I mean, I've got about 160,000, 170,000 Twitter followers.
But do I want to? I think I probably want to do some real work.
That's how I'm going to unwind is watching those title sequences.
But as some of you know, I host other podcasts, one of them being the brand spanking new Sticky Pickles, a hilarious weekly podcast.
How many weeks have you promoted Sticky Pickles on Smashing Security? It's been 8 weeks, okay?
And the whole idea is that, you know, each host drops a tangle of a situation and we try to wiggle out and find the best course of action.
Anyway, so what do I do now, right? Do I stop and let it float away into nothingness, or do I scramble like a little bug and get a shit-hot replacement? And I got someone amazing.
So Smashing Security favorite Maria Varmazis is my pick of the week this week.
She's agreed, she has agreed to come in and be a co-host with me for some of the Sticky Pickles episodes.
And I just edited it and it sounds awesome, so check it out.
He's allergic to traditional awareness training and has a passion for finding new ways to empower people and keep their organizations secure.
Now, I've been living security awareness for donkey's years, and I am so thrilled that you're here because you might be able to give me a fresh perspective on things.
So let's go back, let's talk about you first. So what led you to actually start CultureAI?
I started life as a pentester, right?
And I think every pentester goes through this journey of realization that, you know, you start out testing web apps and then mobile apps, and then you do a bit of social engineering, and then you land your first red team job and you get in and you think, oh, that's amazing, I've got in.
And then you do the next red team job and you get in again.
Okay, got you.
Yeah, exactly, exactly, exactly. And I think you do your 10th or your 20th or your 30th and you kind of go, well, I get in every time. This is insane.
And I think it doesn't matter how many blinky boxes are installed on the network.
I've had so many clients go, oh, we think you'll get captured or caught because we've got this blinky box. And it never quite works out like that.
Sometimes that slows us down a little bit.
You must have a good one you can share with us.
Well, so I've got a good one which I'm going to get killed for bringing up.
I stood up live in front of an audience and kind of said, look, every time I do a red team, it's human behavior that lets me into an organization, typically phishing.
And it's normally something that people do that let me move around that network.
So I'm that confident that people typically fall for things like email phishing that I'm going to stand up in front of everybody and phish my own mother, which, you know, I think is a little bit taboo.
I think doing it probably wasn't the best maneuver, and it certainly damaged our relationship for a little while.
But we made it look like it had come from something to do with her work, and she fell for it.
There was this really awkward moment actually when we launched the attack because I had the stats up live on screen behind me. And for the first minute and a half, nothing happened.
And we sent it to, I think, about 15 people inside her company, including her, and nothing happened for about a minute and a half.
So I stood there thinking, oh my God, panicking. What happens if nothing happens? Anyway, she fell for it. A few other people fell for it.
The worst part of that for me was actually not the fact she fell for it.
We captured her password as part of the attack, and we masked the password on screen so no one could see what it was. And everybody wanted to know what the password was.
And I'm just stood there thinking, I've got my mum's password. Do I really want to see this? It could be something terrible.
Anyway, we revealed it and it was worse than anything. I think it was either password exclamation mark or password one. It was one of the two.
I know it was so bad, but I remember the conversation with her off the back of it. And I said, I was talking to her about it.
She goes, well, what I don't think you understand, James, about this security thing is hackers would never guess that.
And I said, what do you mean they're never going to guess password exclamation mark? She's like, well, it had a capital P.
I was like, oh my God, this is not how— it's not how the world works, mother.
OK, so what led you to CultureAI then from that exciting life?
A lot of end users that aren't exposed to the security world think similarly, and rightly so. So I said, well, I want to try and fix that.
So I said, well, why don't we start doing simulated phishing attacks against people? So I founded a company called Phished. I did that between 2014 and 2018.
And we saw a lot of success with what we were doing, right?
I think the biggest insight that we got from that was that where we were able to personalize the education that we were sending and the campaigns we sent to people, to those people as individuals, we got really good results at changing behavior.
And I've always said that people all behave differently for different reasons, right? The reason that somebody clicks on a phishing email will differ between people.
Some people, it'd be an awareness thing, or some people would be an attitude thing. And you can break that down further.
That really frustrated me with Phished, that we got good results, but we were only focused on email phishing and we didn't collect a huge amount of data around why people were behaving the way they were behaving.
So we couldn't really, we couldn't tailor things enough to users. So we sold Phished in 2018 to F-Secure, right?
And I said, well, we're at a time where there's a lot of companies out there that are investing quite heavily in cloud.
So there's lots of different apps that are being used as well as existing infrastructure. A lot more companies are open to this concept of doing attack simulations.
So I said, well, can we not just build something that aggregates all this data from lots of different sources and turns that into some kind of almost behavioral insight?
So what are the things that our employees are doing way beyond just email phishing that are putting the company at risk?
Can we use that data and that info to try and change behavior and deliver, you know, let's forget this generic security awareness training rubbish that everybody's been doing.
Can we actually start to personalize training and nudges and content and deliver it down different channels and things? And the answer was, yeah, we can.
And we looked at it and said, why has this not been done before? But I think the problem is everybody's just been wanting to push out easy, boring, generic awareness training.
And then everybody's frustrated when they don't get good results with it, and they go, well, this awareness training stuff's a load of rubbish, which, you know, it is.
So yeah, that's where we went with CultureAI. We tried to do something a little bit different, I guess.
And then we hear the opposite, which is people saying we're trying to turn humans into the human firewall.
I think somewhere between the two but further along to the human firewall side, right?
I think human firewall is a bit of a weird phrase and it puts an unrealistic expectation on users.
But I think what organizations need to do is say, well, they're people, let's treat them as people. Let's see how we can support them.
And just because they've clicked a link on a phishing email, it doesn't mean we should immediately fire them. We should look at, well, how do we support them and help them?
And you might have a user that's really good at spotting phishing emails, but they set weak passwords or they post stuff online that is quite sensitive or they allow tailgating.
There's lots of different behaviors that people struggle with.
And for me, it's about supporting and empowering those users rather than almost damaging their relationship with the security teams by shouting at them.
That's not what security should be about. It's always, you speak to a lot of CISOs and they always say they want to come across as enabling the business.
And I think that historically, a lot of security teams have come across as blockers.
And one of those reasons is people are scared of them, especially when they're doing simulated phishing campaigns and things like that.
But email phishing is definitely something that a company should be measuring and should be improving. SMS phishing is another one.
You know, they're two pretty easy ones to measure.
Another one though that's more recent, and we're actually about to do a white paper on this because some of the results we've got are very, very interesting, is multifactor authentication.
So we've recently put functionality into the platform to issue things like push notifications that imitate the real thing: you know, if somebody signs into an application, they get a push notification to say, did you sign into this?
And we found that over half of the people we've tested have accepted it. They've just gone, okay, I'm used to seeing this.
I'm going to hit Accept, which completely negates the use of multifactor authentication, because if a real attacker did it, the user would just go, okay, well, yeah, accept, and let the attacker in.
Which is really scary.
I'm the principal owner of the email account. I had my phone with me, so I assumed he was doing it and then pressed okay and let him through.
And then suddenly I thought, my God, what if it wasn't him?
Now, I called him and it turned out to be him, but I literally just went through it because I made it make sense in my head without double-checking.
System 2 is typically where you stop and think about something, and System 1 is kind of automatic.
And essentially, when somebody clicks on a link in a phishing email, that's normally System 1 behavior that's causing it.
And it's a very similar thing because you immediately get the notification and you're just so used to going, okay, accept. You don't stop and think.
And actually, when the team at CultureAI built this into the platform, the first person they targeted with it was me.
And I didn't know it was coming up, I'm not going to lie.
And the only reason I spotted it was I was actually coming out of the gym at the time, which is a small miracle because I'm very rarely near the gym.
So I spotted it coming out of the gym and I thought, that's really weird because it's for VPN and I'm not near my laptop. That's very strange.
And that's the only reason I spotted it. And I think when we started to test clients with this, we're seeing similar stories.
So that's the kind of stuff that we're setting out to measure. I think a lot of organizations should definitely focus on MFA. Because I just think there's some hidden stats there.
But a lot of companies are looking at MFA at the moment and going, oh, this could be not the silver bullet, but it will have a big impact in terms of reducing the effect of phishing.
And I suspect maybe it doesn't have quite as big an effect as a lot of places are hoping. So that's a big one.
So today, the day that we publish this show, is Thanksgiving in the United States, and Christmas is just around the corner for many of us. So any tips for us users?
So we see quite a lot of users will get targeted by attacks or emails that will say your shipping for such and such gift has been delayed, or your Amazon order requires you to update your payment details.
Attackers know people are expecting deliveries around this time of year, and they really look to exploit that. So if there's one big tip that we can give at this time of year,
it's to watch out for emails that you may even feel like you were expecting, and just double-check them.
Make sure that it is Amazon or it is the other website you've ordered off, and they're sending you that email.
Look at the link really carefully, and again, don't just click without thinking. I think that's really important.
And now your company name is CultureAI, and AI as a term, in our industry at least, is sometimes causing a little bit of confusion, because people are going, well, actually, there is no AI, AI doesn't exist, and it's really just algorithms.
And what do you think about that? What are your thoughts on actually using that name inside your company name?
Yeah, I think it's a really good one.
And to an extent, maybe we don't regret putting AI in our name, but I think there's a real risk that people just go, are they using it as a buzzword?
Because I think that happens so much. For us, the phrase AI is not about 100% replication of a human mind inside of a computer.
It's about the ability to make very, very good predictions based on data.
We use machine learning to basically try and make predictions around how and why people are behaving the way they're behaving so that we can work out what the best type of training and the best messages are to give to that individual user at scale.
The AI side for us is machine learning.
It's using machine learning to make predictions based on the data we're getting, and those predictions allow us to, to a reasonably high degree of accuracy, predict how a user's likely to behave based on data we've got about them and why they're doing it so that we can tailor training better than we could if we were just using a traditional kind of if-else statement.
I'm excited to see how this can change the landscape, because people often complain about security awareness training, and being able to tailor it might make it a heck of a lot more useful and interesting to people, because they feel that it's actually talking their language.
So, anyone who would like to learn more about CultureAI: they've actually created a whole page just for Smashing Security listeners. You can see that at culture.ai/smashing.
Plus, they have a deal just for Smashing Security listeners. Sign up at culture.ai/smashing and get your first 50 employees for free for life. Can't beat that.
James Moore, everybody, CEO and founder of CultureAI. Thanks so much for coming on the show.
Fantastic.
I'm sure lots of our listeners would love to follow you online or find out more about your new book.
And don't forget, if you want to be sure never to miss another episode, subscribe in your favorite podcast app such as Apple Podcasts, Spotify, or Pocket Casts.
Of course, high five to this week's Smashing Security sponsors, Culture AI and LastPass. And of course, huge thank yous to our Patreon supporters.
Your support makes Smashing Security free for all. Check out smashingsecurity.com for past episodes, sponsorship details, and information on how to get in touch with us.
Hosts:
Graham Cluley – @gcluley
Carole Theriault – @caroletheriault
Guest:
Tim Harford – @TimHarford
Show notes:
- How To Make The World Add Up — Tim Harford.
- Computerized canines to join Team Tyndall — Tyndall Air Force Base.
- Tyndall incorporates semi-autonomous robot dogs into their patrolling regimen — YouTube.
- Incredible Tyndall 'Robot Dogs' Demonstration — YouTube.
- Perimeter-patrolling 'robo-dogs' coming to Tyndall Air Force Base — YouTube.
- Revolutionizing Legged Robots — Ghost Robotics.
- Immersive Wisdom.
- Norwegian oil company employs robot dogs to patrol dangerous areas — Metro News.
- Japanese farm town deploys 'Monster Wolf' robots to scare off wild bears from neighborhoods — ABC7 San Francisco.
- Willo the Wisp — Wikipedia.
- Willo the Wisp: "The Thoughts of Moog" — YouTube.
- How Mediocre Dutch Artist Cast 'The Forger's Spell' — NPR.
- Do These A.I.-Created Fake People Look Real to You? — The New York Times.
- The Liar's Dividend — Definition from Macmillan Dictionary.
- BBC Motion Graphics archive — Ravensbourne University London.
- Emu's Broadcasting Company (1978) — BBC Motion Graphics archive.
- Discovering Portuguese (1987) — BBC Motion Graphics archive.
- I Claudius (1976) — BBC Motion Graphics archive.
- The Rise and Fall of Getting Things Done — The New Yorker.
- Sticky Pickles.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
- Support us on Patreon!
LastPass Enterprise makes password security effortless for your organization.
LastPass Enterprise simplifies password management for companies of every size, with the right tools to secure your business with centralized control of employee passwords and apps.
But, LastPass isn’t just for enterprises, it’s an equally great solution for business teams, families and single users.
Go to lastpass.com/smashing to see why LastPass is the trusted enterprise password manager of over 33 thousand businesses.
CultureAI isn’t just another security awareness training provider. It helps you measure and improve every end-user’s cyber security behaviour, providing a management system for IT, Security and Awareness teams.
Learn more and try it for yourself at culture.ai/smashing
Follow the show:
Follow the show on Bluesky at @smashingsecurity.com, on the Smashing Security subreddit, or visit our website for more episodes.
Remember: Subscribe on Apple Podcasts, Spotify, or your favourite podcast app, to catch all of the episodes as they go live. Thanks for listening!
Warning: This podcast may contain nuts, adult themes, and rude language.


