
The iPhone security setting that you should enable right now, the worrying way that AI is predicting what criminals look like, and we play a game of face fake or real…
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by Mark Stockley.
Warning: This podcast may contain nuts, adult themes, and rude language.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Hello, hello, and welcome to Smashing Security episode 357. My name's Graham Cluley.
Now, coming up in today's show, Graham, what do you got?
But take heed if you plan to take your smartphone out in London, because someone there has their iPhone stolen every 6 minutes.
And that is the topic of my discussion today: what thieves do to steal your iPhone, what they can do afterwards, and maybe how you can better protect yourself in the future as well.
So there was a video which came out by the Wall Street Journal just before Christmas, where they interviewed an iPhone thief in his prison cell.
He'd been sent away for something, I don't know, 7 or 8 years or something for his part in an iPhone theft gang.
He exploited a vulnerability in Apple's software, the same vulnerability I've been investigating for the last year.
Well, what he did was he went out with a couple of his mates, and they'd hang out in bars where there were sort of young, drunken people, you know, people who were partying, people having a good time, maybe slightly inebriated already.
It turns out he didn't actually have any drugs at all, but he's sort of pretending, right? Saying, oh yeah, I've got a bit of Shatner's bassoon.
He says, 'Why don't you take down my details?' And so the other person gets his phone out to write down his details, and he's got complicated details.
He says, 'Let me have your phone. Let me type it in for you,' right?
And of course, this inebriated person, this student who's having a great time, says, 'Sure,' maybe unlocks his phone, hands it over, right?
And after he's handed it over, the guy who's planning to steal it quickly locks the phone and says, 'Oh, it's locked.
What's your passcode?' And people desperate for a bit of Colombian blacktail. Actually, I think that's a type of free-range egg, isn't it? Rather than a—
And people might just hand it over, and they may take back the phone and type it. But at that point, he watches them enter it.
Or, of course, people just say, "Oh, it's 2264813," or something like that, right? Or they enter their passcode themselves. Either way, the villain now knows the passcode.
They're in a bar. The phone user's been drinking, already drunk. They're having a bit of a chitchat, and they're distracting him.
And at an opportune moment, the thief just passes the phone to one of his mates, and voomf, it's gone.
"I put it down here. Where's it gone?" You know, create a distraction. You make yourself scarce. It's a hubbling, bubbling kind of place.
Anyway, immediately after the phone is stolen, what these guys do is they reset the passcode and they turn off Find My iPhone.
So that means that the genuine owner of the phone can't remotely track it or erase the device. And all you need to do this is the passcode.
The real owner no longer has any access to his phone. The next thing the thieves do is remove the real owner's face from Face ID and replace it with their own.
It's really opening things that people thought were safe. Like savings, check-ins, cryptocurrency apps, Venmo, PayPal. Yeah, you don't need face for none of that.
That's kind of little money. I'm trying to take as much as I can.
What this thief says is there are some things where maybe the face isn't enough to unlock it, but quite typically codes and passwords would be stored unprotected inside users' Notes app.
So they would just keep in plain text, in their Notes app, the one which regularly comes with the iPhone, a password, a passcode, something to unlock some account would often be there.
The other thing which people do is sometimes they store those kind of passwords in their photos.
So they take a photograph of something and think, oh yeah, well, I'll put it in this folder and people won't think to look there.
But the thieves do look in those kind of places in order to find this stuff. But now, now they've unlocked your phone. Now they have control over your phone.
They can buy stuff, they can do stuff with Apple Pay. And ultimately, after they've caused their shenanigans, after they've logged into your bank account or done other things, which they're now able to do because they've set their biometrics up on your phone, they may wipe your iPhone and sell it to someone else, which makes them $900.
The point is that you unlocked the phone, and with that information, this guy, who you innocently thought was going to sell you drugs or maybe some lovely free-range eggs, has instead scarpered off with it and now has access to your online accounts.
Well, about a week or so ago, a new version of the iPhone operating system came out called iOS 17.3, and it has a new security feature that many people might benefit from.
And this is why, Carole, I've told you to get your iPhone out, because Apple has not turned this on by default, and I'm recommending that everyone who has an iPhone turns this on because I think this is a good security feature.
It is called Stolen Device Protection, and what it does is it requires you to use Face ID or Touch ID, you know, some form of biometrics, to change all kinds of sensitive settings on your phone, rather than just your passcode.
So your passcode won't be enough. And this is specifically for when your phone is away from your workplace or your home.
So your iPhone has a way of learning, this is where you go to work, this is where you are at home, by the regularity, I guess, of where you are.
The passcode won't be enough to access passwords or passkeys saved in Keychain. It won't let you look at payment methods stored in Safari Autofill.
The crooks won't be able to add their own face to Face ID. They won't be able to add their fingerprints as they don't have your existing biometrics.
So they would have to steal you and maybe chop off your finger as well or something like that in order to unlock your phone or take your eyeball, I suppose.
There's definitely something that's trying to work out if you're alive.
So unlike now, you or your thief won't be able to fall back to the passcode entry to make those changes unless you're at your home or you're at your workplace.
And of course, hopefully, Carole, you are not losing your phone to thieves inside your home.
So this isn't— right, so for everybody else.
You should hopefully have already updated by now, and you can turn on Stolen Device Protection in Settings.
So you go into Settings, you tap Face ID and Passcode, you go into that submenu, you'll have to enter your device passcode, and then simply toggle Stolen Device Protection on.
Somebody gets your iPhone, it's basically locked.
You know, they've got 10 chances to unlock it, which they're not going to manage to do, and they can't get stuff out of it because it's all encrypted.
So your iPhone's already pretty safe. It's only for that fairly narrow situation where somebody managed to get your passcode as well.
What's really interesting about this to me is that there are now settings where the biometrics are the gatekeepers and not the passcode.
So for the whole time that we've been dealing with biometrics, it's always been the case that the biometric is backed up by a passcode or some sort of entry of a code of some kind.
A passcode is a yes/no answer. You either get the passcode right or you get it wrong.
But a biometric is a kind of "Okay, we think it's you, there's a high chance it's you." So it's a very different kind of assessment.
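To make that distinction concrete, here's a minimal sketch of the two kinds of check. This is purely illustrative (the function names and the threshold value are made up, not Apple's actual implementation): a passcode comparison is exact and binary, while a biometric check accepts a similarity score above some confidence threshold.

```python
# Passcode check: exact, binary. Either the string matches or it doesn't.
def passcode_ok(entered: str, stored: str) -> bool:
    return entered == stored

# Biometric check: probabilistic. The sensor produces a similarity score,
# and the system accepts anything above a confidence threshold.
BIOMETRIC_THRESHOLD = 0.98  # hypothetical confidence level

def biometric_ok(similarity_score: float) -> bool:
    return similarity_score >= BIOMETRIC_THRESHOLD

print(passcode_ok("2264", "2264"))  # True: exact match
print(biometric_ok(0.995))          # True: high-confidence match
print(biometric_ok(0.90))           # False: not confident enough
```

The interesting design consequence is that the biometric check has tunable false-accept and false-reject rates, which is exactly the kind of thing a decade of real-world data would let Apple calibrate.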
And we just didn't know 10 years ago, 12 years ago, we just didn't know how effective biometrics were going to be or how reliable they were going to be.
And Apple have now had biometrics on phones for at least a decade, starting with Touch ID. And they sell billions of these things.
So there are millions and millions and millions, hundreds of millions of phones out there that people have been using with biometrics for a decade.
So Apple must have an enormous amount of data about how effective biometrics are.
And they've taken this step now, which is the first time I can remember anybody doing it, of saying, actually, the passcode is backed up by the biometric rather than the biometric being backed up by the passcode.
And that, I think, is the thin end of a wedge that leads to passcodeless authentication.
Because it's saying we trust the biometric even more than we trust the passcode, which they would have good reason to believe they can, it seems to me that this could easily be a prototype for saying, all right, okay, well, we're going to let the biometric be the way that you access other sensitive things.
And ultimately, that leads to not having a passcode on the phone at all. And I think we spent enough time with biometrics now to know that actually they're pretty reliable.
And that could be where this goes.
These things, there's so much more going on than it just looking at — I can't just hold up a photo of Graham and log into his phone with Face ID.
It's actually trying to work out that you're a real person. And the same with the finger.
But certainly for our iPhone-loving listeners out there, I think this may be a sensible setting for them to turn on, particularly that guy in London who keeps losing his phone every 6 minutes.
Mark, what have you got for us this week?
And perhaps more often than both of those, are we actually allowed to be in charge of this stuff? Who left us in charge?
I mean, when you think about modern technology, it's the sum of thousands and thousands of years of this accumulated science and engineering, all this human endeavor, and it's each generation learning from the previous generation.
Nobody's just inventing things from scratch. For most of our history, flint knapping was the absolute pinnacle of technology, you know, for millions of years.
And then it took a very, very long time for us to get to things like pencils and chairs.
So we had all this time to kind of get used to this newfangled technology and figure out how to use it and what was safe and all this kind of thing.
That seems a concern.
Yeah, so it used to be that we have plenty of time to keep up with these innovations.
But now these days, I'm often left with this kind of nagging sense that modern technology has far exceeded our individual competence.
I have a fairly rudimentary car, and every time I sit in my car, I think, I know that my level of driving is not up to the level of engineering of my car.
You know, I barely kind of scratch the surface of what my laptop can do. And I just kind of feel we're not designed for Facebook and nuclear weapons and things like that.
I just feel like we're kids that have been left at home by our parents with 200 cigarettes and some heavy machinery or something like that.
So anyway, you can see where I'm going with all of this. So today I want to play a game called, have we gone too far?
And I want to start by introducing you to a company called Parabon NanoLabs.
You know, anything that exists when you're born is oxygen and everything else is just terrifying and needs to be burned. So yeah, so Parabon—
And one of the things it does is it helps the police by linking DNA and genealogical data to help solve cold cases.
So, in 2019, an 82-year-old handyman was arrested for a rape and a double murder that he had committed 43 years earlier, thanks to Parabon NanoLabs.
So, the police were able to track him down after the labs uploaded some DNA from the crime scene to a public genealogy website called GEDmatch, which does genealogy and family trees.
And this established a family link to the Green Bay area in Wisconsin. And police zeroed in on the area, and they got a DNA sample. This is quite fun, this.
They got a DNA sample from the suspect by asking him to fill out a policing survey. And then he had to put the policing survey in an envelope, and lick on the—
And one of the things it can do is produce what it calls a Snapshot phenotype report, which tells you what you can learn about someone's appearance from their DNA.
And just to clarify, Parabon NanoLabs was helping them. It didn't help them solve the case. It was just helping them—
They've got fair skin, they don't have freckles, they've got brown eyes, they've got brown hair, they've got bushy eyebrows. That sounds pretty useful, right?
So the good thing is now the police have an idea about what this person probably looks like.
Or at least they've got an idea what an AI thinks this person looks like, and that might be useful, right?
You know, if it was beyond reasonable doubt and say, "Yes, we are absolutely certain this isn't someone who's blue-eyed, but someone who's brown-eyed," then that would be helpful, I suppose.
So, the AI doesn't just produce a description of the person. It also creates a 3D render of their face, so you can actually see what the AI is guessing the person looks like.
And then they added some bits too, because DNA can't tell you about things like hairstyles and things like that, so the lab had a forensic artist add in a haircut and a moustache.
And the company, to your earlier point, the company produced two versions — they produced one of the guy aged 25, and another one of the guy aged 55.
And the police actually published those faces in an attempt to jog the public's memory.
And I should mention as well, to be fair, that the police are well aware that these might not be accurate.
So I read a report in the local paper, the East Bay Times, and the police had actually told them that the composites were scientific approximations and not likely to be exact replicas.
And of course, environmental factors like smoking and drinking and diet and other things can't be predicted by DNA.
Although I think even that's an interesting question, because I can imagine that in future we might try to make predictions like that.
You know, based on your DNA, you have a propensity to addiction, and therefore you probably eat too many pizzas. Yeah, maybe you're overweight.
And I can see a future where we do actually try and make those predictions — I think that's where this slippery slope leads. So where are you at now?
What do you think now — good thing, bad thing? How are you feeling about this technology?
So, I mean, it sounds science-based, right? Parabon NanoLabs have tested this.
And they have tested this, and obviously they think that it works. And they're happy enough to sell this to the police. But it hasn't been peer-reviewed.
So maybe it's really accurate. And maybe it isn't.
Like sometimes these things have, depending on the training data, there can be kind of blind spots and weak spots and things like that. We just don't know.
And we shouldn't forget, again, to your earlier point, Graham, that this was all done by a machine learning model.
And machine learning models suffer from what we call the black box problem, which means that we don't actually know how they make decisions.
We know that if you feed in a certain type of input, you'll get a certain type of output, but we don't actually know what's going on under the hood to the point where we can say, okay, well, it decided that this person looks like this based on the DNA because it made these specific decisions.
So it's already looking pretty obscure, which doesn't mean it doesn't work, but it is a very opaque process.
Now, that said, I think it's important to remember that we trust witnesses and sketch artists.
And they come up with pictures of what suspects look like, and there's no science at work there at all.
All we can say is that in this instance, they've created these two renderings and they published them in the newspaper in an attempt to get people to come forward.
So presumably at that point then they would be using other evidence, perhaps like the DNA evidence that we were talking about earlier, which has got a little bit more rigor behind it.
But actually this story doesn't quite finish there because evidently the police didn't get any consequential leads from the renderings.
And in 2020, a detective contacted the Northern California Regional Intelligence Center, which is a place that facilitates collaboration between different law enforcement agencies, and I guess has access to some technology.
And they said, I've got a photo of a possible suspect, meaning this rendering, and we'd like to use facial recognition technology to identify a suspect or lead.
Now, we don't know what happened next, because we only know about this at all because it was part of a big data leak, and this request happened to be one of the pieces of leaked information. So we don't know what happened after this. We don't know if they ran the facial recognition, but we know they wanted to.
We don't know if it led to anything, but we also don't know how commonplace this is. Yeah, this is insane.
Of people taking an image generated by a computer based on someone's spit, and then someone else says, oh, well, I'll run that through the facial recognition database and see who we come up with.
Yeah. I don't know.
And if that, you know, so there's pros and cons to this method, I'm sure. But it is scary. Of course it's scary.
Remember, it's not just saying, oh, they've got brown hair. It's saying they look like this. Here is a picture of their face.
And then another AI takes that and makes a guess about who that resembles.
I mean, this at a time when we have governments all around the world, or sort of local municipalities and local governments and things, banning the use of facial recognition, which by itself has proved to be really problematic because of things like biases in the training data.
And there's an old saying, Graham will know this because he's even older than I am, but there's an old saying in computing, garbage in, garbage out.
And the concern is, I don't think— so it's actually against Parabon NanoLabs' terms of service to do this.
So the police, in the police report, were fully aware that the rendering that was produced is not likely to be an exact replica of the person; it's there to jog someone's memory.
Parabon NanoLabs understand the limits of what they're producing, and so it's not supposed to be used for things like facial recognition.
The problem seems to be, according to Wired, certainly, which is where I discovered the story, that there are no federal rules that limit the types of images that police can use with face recognition software.
So it can use fake AI pictures. Seemingly, yes. And it's not just fake AI pictures.
So law enforcement agencies have used blurry surveillance camera shots and manipulated photos of suspects. Even sketches made by artists have been run through face recognition.
And my favorite one, they've even used a picture of Woody Harrelson because in one case, the suspect looked like Woody Harrelson. So they ran that through.
So, and this is what I was talking about at the beginning. You know, are we really, you know, are we allowed to use this stuff?
Some of them may be AI-generated. It's the AI show today. And you guys are going to tell me what you think. Okay, so fake or real?
Is this a real person or is this a fake headshot of somebody, someone completely made up? And you guys are pretty bright, right? You're pretty bright in all stuff digital.
So let's put your expert eyes to the test and see if you can identify if someone is real or fake. So we've got number 1.
And researchers found that the higher confidence correlated with a higher chance of being wrong. Oh. In other words, misguided with confidence.
It's a phenomenon called hyperrealism. Okay, this is according to The New York Times. And this hyperrealistic face idea, these faces tend to be less distinctive, researchers say.
And so they closely average out the proportions.
And because of that, they fail to arouse suspicions amongst participants because we seem to fixate on features that drift away from average proportions.
So if someone has a big hook nose, you think: gotta be fake. Gotta be fake. Or a misshapen ear. Interesting, huh? Hmm. So takeaway one: don't believe anything you see online.
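The "averaging" effect the researchers describe can be sketched with toy numbers (purely illustrative, not from the study): blending measurements across many faces pulls every feature toward the mean, so the distinctive outliers we rely on to spot fakes never survive into the generated face.

```python
# Toy illustration: averaging a feature measurement across many faces
# removes distinctive extremes, producing an unremarkably "average" result.
faces_nose_width_mm = [28, 31, 45, 26, 30, 29, 33, 27]  # one outlier: 45

average = sum(faces_nose_width_mm) / len(faces_nose_width_mm)
print(round(average, 1))  # prints 31.1, close to the middle of the range

# The generated face inherits this average; the distinctive 45 mm nose
# that a viewer would flag as memorable (or as "gotta be fake") is gone.
```

That's the hyperrealism trap in miniature: the fake has no feature that drifts far enough from average proportions to arouse suspicion.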
But as we know, it's not just imagery, it's also AI-generated text.
So just last September, a study led by two experts in applied linguistics conducted some research to see if their counterparts could tell the difference between a research abstract written by a human student and one generated by a machine.
And this is not a bad idea, because if anybody is going to be able to identify human-produced writing, it should be an expert in linguistics, right? That's what they do.
They spend their careers studying patterns in language and other aspects of human communication. You'd think so.
And based on the larger findings (links in the show notes), the researchers concluded the professors would not be able to distinguish between a student's own writing and the writing generated by an AI-powered language model such as ChatGPT without the help of software that hasn't yet been developed.
And maybe that's the key, right? Some authentication or defensive tools, some anti-AI.
There's probably some tools out there, but without the tools, basically research is saying we have no hope in hell.
And our little experiment here with the two of you showed that as well.
And because there are patterns, just like anything else, there are patterns in the way that it— like you say with the photographs, it's producing an average of all the text that it's read.
Composite. Yes, it's a composite of everything that's read. And so there are things within that that another AI can spot.
But then you are in the hands of, well, let's hope our AI works better than their AI.
Just this past week, there was a report from the chief of GCHQ's National Cyber Security Centre, Lindy Cameron, and she's warning that AI is going to make the digital landscape much harder to protect.
I think all three of us would agree with that.
She says, quote, "The emergence, use of AI in cyberattacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term." What do you think about that?
I found a way to bypass these controls.
You would have to be a good enough programmer to write ransomware in order to put together the kind of fragments of nonsense that it spat out.
And then I repeated it 6 months later with ChatGPT-4. And let me tell you, ChatGPT-4 is really good at writing ransomware.
And there were some controls in place that were designed to stop you.
So you can't just rock up and say, write me some ransomware, because it'll go, hang on, I'm not allowed to do that.
But what I was able to do is just say, okay, well, what does ransomware do? Okay, it does X, Y, Z. So I said, right, write me a computer program that does X. And it said, fine.
And then I said, right, add Y. And then it, okay, fine, add Z. And I just added all of the common features that you find in ransomware. And then it made it.
And then I executed it on a virtual machine to make sure it worked. And it did.
So if you look at the actual ransomware executables, they haven't changed very much in several years because they do everything that the crooks need them to do.
So it's very unlikely that AI is going to come along and change that. And maybe this is why we haven't seen an explosion of AI in cybercrime, certainly not amongst cybercriminals: at the moment, they just don't seem to need it.
So ransomware does what it needs to do. So you're not going to get an AI come along and write a better ransomware because it's not going to get much better than it is.
But what it might do is lower the bar and allow people who couldn't otherwise get into the field to get in and actually write some piece of usable computer program.
And I can imagine that a gang could perhaps use AI to target many, many thousands more people at the same time, with their AIs producing the responses to the messages from each person who's fallen in love with them.
I can imagine that happening, maybe not so far down the track.
I don't even have good news at the end of this, other than to say maybe it's high time we take a page out of Socrates' book, according to Plato, and basically say the truly wise recognize that they know absolutely F-A.
And I'm becoming a serious wise-ass, guys.
Wouldn't it be great if a device which lacked compliance or lacked security was denied access to your organization's SaaS apps and other resources?
Because this would mean that the hackers who had nabbed the unlucky employee's credentials, for example, could not gain access to your assets. It would effectively lock them out.
Welcome to Kolide, a world where access is only given to approved secure devices. As the administrator, you can manage every operating system, even Linux, from a single dashboard.
Another bonus of Kolide: employees can often fix their own problems without involving IT support, meaning less resources are needed to effectively operate a more secure environment.
Kolide is the device trust solution for companies with Okta. Kolide ensures that if a device is not trusted or it's insecure, it is denied access to your cloud apps.
Learn more at kolide.com/smashing. That's k-o-l-i-d-e.com/smashing. And huge thank you to Kolide for sponsoring the show.
Expand the scope of your security program with Vanta's market-leading compliance automation, saving your business time and money.
Vanta has over 5,000 customers around the globe who are saving over 300 hours in manual work and up to 85% of their costs for SOC 2, ISO 27001, HIPAA, GDPR, custom frameworks, and more.
And with Vanta's 200+ integrations, you can easily monitor and secure the tools your business relies on.
From the most in-demand frameworks to third-party risk management and security questionnaires, Vanta gives SaaS businesses of all sizes one place to manage risk and prove security in real time.
And as a special bonus, Smashing Security listeners can get a stonking 20% off Vanta. Just go to vanta.com/smashing to claim your discount. That's vanta.com/smashing.
And thanks to Vanta for supporting the show. And welcome back. And you join us at our favorite part of the show, the part of the show that we like to call Pick of the Week.
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app, whatever they like.
It doesn't have to be security-related necessarily. Better not be. Well, my Pick of the Week this week is security-related, and I'm not ashamed to say it.
And my choice this week comes about because I was contacted by a loyal listener to the show, Alan Liska, and he told me about a radio drama series which used to be on from the 1940s all the way through to the early '60s, actually, called Yours Truly, Johnny Dollar.
Oh, never heard of it. It was the adventures, the adventures of a private insurance investigator with an action-packed expense account. That's how it was promoted.
That's what people did in the old days, folks, for their entertainment.
Well, as for the tales of this private insurance investigator, the character's now fallen into the public domain, which has meant that Alan and some of his buddies have been hard at work updating Johnny Dollar and bringing him into the present day.
And they have made a series of comic books where the private investigator is now a cybersecurity insurance investigator, still with an action-packed expense account.
I was surprised as I was reading it just how often he wrote down his expenses for taking cabs or buying a new hat or taking a receptionist out for lunch in order to get some information from her, that kind of thing.
So Alan has put this together. He is selling it on his website, and they're also about to launch a Kickstarter for their third issue.
And I checked it out and I thought, oh, you know, this is a bit of fun and it's cybersecurity related, which I know we love to have our pick of the weeks cybersecurity related.
So I thought I'd give it a mention. So you can find it at JohnnyDollar.io if your interest has been piqued. And if you're a fan of comic books, that is where you should go.
And over the years, I have learned how to make food tastier than I could when I was younger. And so I spend a lot of— I use a lot of recipes from the web.
And I normally don't care where they come from. Normally, I'm just like, I'm going to make a thing, like I want to do some Japanese fish or something like that.
And I'll just Google it and a recipe will come up. And sometimes they are really good, and sometimes they are not really good.
And they always have that great long life story at the beginning, the SEO blurb that everybody has to put in to pad it out. Surely everyone could just agree to scrap that?
Like, Google, what are you doing? Just like, if it's got recipe on the page, just ignore everything before that, please. I know, I agree, it's so painful.
And so over time, you kind of develop ideas of which websites are good and which websites are less good.
And so when a recipe comes up, when I'm looking for something and a recipe comes up on a site that I recognize, I go, oh, you know, maybe I'll pick that one because that's a good site.
But I don't have websites where I go specifically. I'll go to that website and I'll pick one of the recipes from that site because it's so good, with one exception.
And the exception is a website called I Heart Umami. And I stumbled upon it looking for a satay chicken recipe, and I liked it so much.
I was looking for a satay chicken I could do in an air fryer, and I liked it so much that I went back there and I started cooking other things that were on this website.
And I've done a number of dishes, like different salads and a number of dishes. All have been really, really good.
So if you want to come off as a better cook than you are, I cannot recommend this website highly enough. Graham's like, check. It will absolutely make your food taste like the best version of your food.
And it's also, it's all low carb, gluten free, which I don't care about that at all. I love gluten and I love carbs.
But, you know, if that's important to you, it's all that, and it's kind of pretty keto-friendly as well.
So they're really, really lovely, kind of protein-rich, flavorful dishes from the East.
I made the mistake— somebody bought me an Ottolenghi book for Christmas a couple of years ago, and I made the mistake— it was the simple book.
I made the mistake of cooking one thing from it, and it took me 2 hours and it had 20 ingredients, and I thought, this is not a simple recipe. This is not for me.
But yeah, no, it's really good, kind of hearty, soulful, fairly straightforward. But you can make it complicated. But yeah, it's great. Check it out. So I Heart Umami.
So this past weekend, you know, I had an audiobook humming away in the background 'cause I was super sick. I was stuck in bed.
And I had this audiobook playing, and then it's done, and I kind of heard it, but I didn't hear it completely. And, you know, I kind of want another one.
It just gets expensive or whatever. So I'm powering through these books, and I see the hit on the bank account. And that's what got me off my butt.
And I got down to the local library and joined the local library, which is kind of sad that I haven't done that before.
You guys are probably both members of libraries, probably because of kids.
And it's a beautiful building, you know, lovely knowledgeable people. But as part of your library access, you also have access to their audiobook selection for free.
I've been using the Libby app. I don't know if either of you've ever used it. No.
So it's tied in with libraries, and it seems to be an international library app, 'cause I have a friend in California who uses the same app for his public library.
And the app is stable, it's easy to use, it's not too flashy. So from a usability point of view, I've been using it for about 2 months now. I think it's pretty solid.
And you can put holds on books, you can renew if you need to, you can return early, all that stuff.
The only thing that's a bit shitty is the search function because there's a lot of stuff in libraries that may not be for everyone.
Our library seems to have a huge fantasy romance section, which is not my area or my bag, right?
But it's difficult to remove them from searches, so they kind of crop up everywhere, basically. But that said, I'm loving it.
And I've just finished Last Night in Montreal by Emily St. John Mandel, an amazing book. And I've started a new one, a classic, The Bell by Iris Murdoch.
All brilliant listens, all free, and I'm showing my support for my local library. So that is my pick of the week, libraries. Yeah, bravo.
For instance, if you go to your local library, you might find that they have special events for parents with young children.
So they might have a Lego club or, you know, rhyme time, you know, somewhere to dump your kids for an hour or so. You can get them involved and interested in library there.
And also, for people who struggle with technology, if you have relatives, or if listeners have trouble with some computer issues, libraries sometimes run digital help sessions as well, where they'll help you sort out your iPad or whatever it is that you're listening to the podcast on.
Make sure that you've got it all configured right.
And it's quite cool. So if you need a quiet space, think about it, libraries.
Well, that just about wraps up the show for this week. Mark, I'm sure lots of our listeners would love to follow you online, find out what you're up to.
What is the best way for folks to do that?
And don't forget to ensure you never miss another episode. Follow Smashing Security in your favorite podcast app, such as Apple Podcasts, Spotify, and Overcast.
For episode show notes, sponsorship info, guest list, and the entire back catalog of 356 episodes, check out smashingsecurity.com. Until next time, cheerio.
Hosts:
- Graham Cluley
- Carole Theriault
Guest:
- Mark Stockley
Episode links:
- Mobile phone stolen every six minutes in London, says Met Police – BBC News.
- iPhone Thief Explains How He Breaks Into Your Phone – YouTube.
- About Stolen Device Protection for iPhone – Apple.
- Cops Used DNA to Predict a Suspect’s Face—and Tried to Run Facial Recognition on It – Wired.
- Will ChatGPT write ransomware? Yes – Malwarebytes.
- AI chatbots are making scams more convincing than ever, warn spy chiefs – The Telegraph.
- Test yourself: which faces were made by AI? – New York Times.
- AI vs. Human Writing: Experts Fooled Almost 62% of the Time – Neuroscience News.
- I know that I know nothing – Wikipedia.
- Yours truly, Johnny Dollar – Comic book.
- I Heart Umami.
- Libby.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff).
Sponsored by:
- Kolide – Kolide ensures that if your device isn’t secure it can’t access your cloud apps. It’s Device Trust for Okta. Watch the demo today!
- Vanta – Expand the scope of your security program with market-leading compliance automation… while saving time and money. Smashing Security listeners get 10% off!
Support the show:
You can help the podcast by telling your friends and colleagues about “Smashing Security”, and leaving us a review on Apple Podcasts or Podchaser.
Become a supporter via Patreon or Apple Podcasts for ad-free episodes and our early-release feed!
Follow us:
Follow the show on Bluesky at @smashingsecurity.com, or on Mastodon, on the Smashing Security subreddit, or visit our website for more episodes.
Thanks:
Theme tune: “Vinyl Memories” by Mikael Manvelyan.
Assorted sound effects: AudioBlocks.
