
The curious case of George Duke-Cohan, Huawei’s CFO finds herself in hot water, and the crazy world of mobile phone mental health apps.
All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by special guests Mikko Hyppönen from F-Secure and technology journalist Geoff White.
This transcript was generated automatically, probably contains mistakes, and has not been manually verified.
Phishing, Ransomware, Malware, Darknet, Hoaxes, Who Are We, and Chatbots with Carole Theriault and Graham Cluley.
Hello, hello, and welcome to Smashing Security episode 108. My name is Graham Cluley.
And my husband starts laughing because inside his card I wrote, to the man with the juiciest plums.
It's our pleasure to have Mikko Hypponen back. Hi, Mikko.
And I said, oh, Mikko Hypponen. And he goes, Mickey Hypno Man? So he sort of thinks you're some mesmerizing superhero, which isn't that far from the truth, really, is it?
I'm going to be talking about the strange case of George Duke-Cohan, someone who it seems couldn't stop himself from getting into trouble.
And if it wasn't enough of a nightmare looking after your passwords on a personal level, imagine protecting every password inside your business. That's where LastPass comes in.
Every password is an entryway into your business. LastPass makes it easy to secure them all with centralized control.
You can get insight into employee password behavior and the power to change them from your admin dashboard. Find out more. Visit lastpass.com/smashingsecurity. And welcome back.
Now, I want you two chaps to imagine that you worked at the airport. Thank you for waiting, ladies and gentlemen. We invite you to— Oh, the glamour.
The person who helps people know if their plane is delayed or whether you can buy Toblerone in duty-free, those sort of important questions.
She said that there were a whole load of imposters and that they were being pushed to the back of the plane and one of them had a bomb. They had everybody at the back of the plane.
Just over one year ago, in October 2017, the website of a British college in Watford suffered a denial of service attack.
So it was one of their own students, a guy called George Duke-Cohan, who at the time was 18 years old.
And they identified that it was him, but they allowed him to stay on the course for who knows what reason.
An email bomb threat was received by the college, which understandably, they tend to take those sort of things seriously, just in case.
And 2,500 students and staff were evacuated from the college. And who do you think was responsible? George Duke-Cohan.
Now, that time he got thrown out of the college and the police were called and they gave him a good talking to and said, don't be naughty ever again.
But soon after, bomb threats were emailed to over 1,700 schools and nurseries up and down the UK, saying that explosives had been planted.
And the email said that unless $5,000 worth of cryptocurrency was moved into the account of a US-based Minecraft server, buildings would be blown up.
And basically they said, we're going to blow up everything unless the payment's made, right?
Well, now, the fact that they were saying put it into the account of a Minecraft server didn't mean that the people running the Minecraft server were the ones actually threatening to blow up the place.
That, of course, was something of a Joe Job. They were trying to make the authorities think that it was this Minecraft server because someone had a grudge against them.
Hundreds of schools were evacuated, and, well, who do you think was responsible for all of these email threats? George!
George Duke-Cohan. And police were thinking, who's this strange chap who's calling us up? He's referring to these school threats and other things.
Well, you say about time, I'll just wait.
They found out that he was using the Twitter account of a hacking and DDoS gang called Apophis Squad, who had targeted the likes of Brian Krebs and other websites with DDoS attacks.
But as the police carried on investigating, they released him on bail.
You need to put your school on lockdown. We are planning to kill every student in the room.
But it wasn't hard to work out who was behind it. I mean, Mikko's already ahead of us. He's worked out this is George.
But Apophis Squad, remember the Twitter account which he was connected with? They claimed responsibility on Twitter.
And so, surprise, surprise, the British police arrested George Duke-Cohan again. And they put him on bail again while their investigation continued.
And there the story ends, and nothing else bad— oh no, something else bad did happen, because it's at this point that that phone call happened to the airport.
My daughter just called me ten minutes ago crying on the phone saying that her flight was getting hijacked.
She said they were holding them hostage and that they were being pushed to the back of the plane and one of them had a bomb.
A British man calling himself Mike Sanchez rang up San Francisco International Airport claiming that he'd been contacted by his distressed daughter who was traveling on a United Airlines flight from Heathrow.
And according to the man, as we heard, his daughter basically believed the plane had been hijacked and a man was pointing a gun at them.
Now, the mobile phone number he gave was almost, but not quite, the same as his mum's, George Duke-Cohan's mum, and the email address belonged to Apophis Squad.
So back to your point, Carole, has he got a complete yogurt pot on his noodle, do you think?
But in the judge's view, that was no excuse for what he'd done.
They said, look, there's plenty of other people who suffer from autism and so forth who lead law-abiding lives, and what you did was just going too far.
He's now been jailed for three years: one for the school bomb hoaxes and two for the airline hoax, for the enormous amount of disruption he caused.
So with, for instance, the airline hoax, it was basically dealt with as a real terrorist incident, and the flight was quarantined and security teams searched it and questioned passengers.
Very, very disruptive. And Apophis Squad were tweeting their joy.
Is it just for the lulz? Is that it?
Although initially it does appear that there was this link to this Minecraft server. So he'd fallen out with them.
And there have, of course, been a series of DDoS attacks between different Minecraft server services, and even against the companies who are there to protect the Minecraft servers as well.
And some of them are in sort of rabid competition with each other. And whereas in our day, you know, when we were youngsters, you know, if we were miffed—
But anyway, you know, if we were miffed with someone, there was only a fairly sort of local impact of us sort of giving each other a Chinese burn or something.
You know, that would be the extent of it.
It wouldn't cause such massive damage on the internet or involve innocent parties being disrupted or having their systems affected as a result.
But this was really weird because once we found the person, what had happened was that an insurance company was being targeted by a massively large denial of service attack.
And the person behind the attack was trying to retaliate because the insurance company hadn't paid him for the car he crashed.
And the piece of malware he wrote was called Allaple. And it was written entirely in assembly. So we have an assembly—
The malware was still spreading and the attack was still going on. So, wow. Some of these things and some of these people have really weird motives for more of their attacks.
She was en route from Hong Kong to Mexico City, making a connecting flight and transferring from one plane to another.
In Vancouver, she was arrested by Canadian officials, and now she's fighting extradition from Canada to USA.
And the reason given for the arrest is that Huawei, the company, has broken US sanctions against Iran.
And this immediately raises some questions because, you know, this is— she's Chinese. The company she works for is in China. China doesn't have sanctions on Iran. USA does.
So it is a bit complicated: how exactly does a Chinese citizen break US sanctions against a third country? Nevertheless, that's the case.
And there's been plenty of discussion around this. Is it just a question of the sanctions or is it much bigger?
Is this linked to the US government ban on Chinese-made ZTE gear, which they put in place earlier this year?
And it's quite obvious that it's not just about, you know, handling of private data and breaking sanctions.
It's also quite clear that the United States is worried about the next empire, which is quite clearly going to be the Chinese empire again.
And there's so much discussion, not just from the USA but from the UK, from Australia, from Japan, that we must not use Huawei-made 5G gear because it's less safe and they're going to use it for spying purposes.
And Huawei is so close to the Chinese government. Well, you know what? Cisco is pretty close to the US government. Ericsson is pretty close to the Swedish government.
So I think it's more about geopolitics and about global market share and about who's going to win the race for 5G.
Having said that, of course, I do understand that China is a totalitarian country. It's not a democracy. But I think it's not just a question of that.
I think it's partially US government worried about the future of US technology.
I mean, of course, the arrest was done in Canada because Huawei leadership team has avoided traveling through the United States or visiting the United States for something like 3 or 4 years now to avoid this situation.
Exactly that. I mean, that's the reason why Mrs. Meng Wanzhou was transiting in Vancouver. I actually checked this.
That's not the best route if you want to fly from Hong Kong to Mexico City.
The most logical place to transfer planes would be San Francisco or Los Angeles, but she avoided both of those and went to Vancouver instead, apparently assuming that she would escape the long arm of US law.
Apparently she did not, and now she is fighting extradition.
Maybe not, I'm just talking, shooting from the hip here, but they probably have similar sanctions against Iran in terms of telecommunication companies.
So maybe they were aligned on that stance. But Canada is certainly getting a lot of heat for this and they don't have as much muscle as the two big boys here.
So there is a message which has been sent via WeChat, which is a very popular, yeah, almost mandatory messaging app in China. Yes, exactly. It claims to come from Mrs.
Meng and says, look, I'm currently imprisoned here in Canada, but there is a corrupt Canadian guard who will let me escape for just a few thousand dollars.
Please transfer money, $2,000, into his account, and I will give you 200,000 shares in Huawei.
Shush! She did not!
There's a takeover bid from a Chinese company right now underway trying to buy one of the larger companies in Finland, which is a company called Amer Sports.
Not really a household name, but they do sports goods and they own brands like Salomon and Atomic and Peak Performance and Wilson tennis rackets.
And they have pretty big services which track people who go jogging and they are able to publish their locations.
You might remember a couple of months ago there was a big outrage about leakage of information from military bases, from people who were using technologies like these.
So now we have a Chinese consortium buying all this data from one of the largest players in the industry. And it just worries me a little bit.
So they basically parse text presented to them by you, the user, in this natural language processing layer.
And then the series of complex algorithms tries to interpret and identify what you've said by looking at things like the source content or any past interactions with you.
From this, the chatbot then attempts to infer your meaning and determine a series of appropriate responses based on this information. So all this makes sense?
One bot can service many, many users at once. And they can ease the whole resource burden. Think of how many times you've encountered a support chatbot.
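As a very rough illustration of the pipeline just described (parse the user's text, match it against known patterns and past interactions, then pick a response), here is a toy rule-based sketch in Python. The intents, patterns, and replies are entirely hypothetical; real chatbots like Wysa and Woebot use trained NLP models, not hand-written keyword rules.

```python
import re

# Toy intent table: (pattern, intent label, canned reply).
# Real systems classify with trained models, not keyword regexes.
INTENTS = [
    (re.compile(r"skinny|throw up|purge", re.I), "crisis",
     "This sounds serious. Please talk to a trusted adult or a professional."),
    (re.compile(r"sad|down|unhappy", re.I), "low_mood",
     "I'm sorry you're feeling low. Tell me more about what's going on."),
]
FALLBACK = ("unknown", "Hmm, tell me more about this.")

def respond(message, history):
    """Classify a message and return (intent, reply)."""
    history.append(message)  # past interactions could inform later context
    for pattern, intent, reply in INTENTS:
        if pattern.search(message):
            return intent, reply
    return FALLBACK

history = []
print(respond("I never feel skinny enough. I make myself throw up.", history))
```

Even this toy version shows why one bot scales to many users: each call is cheap and stateless apart from the history list.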
Okay, so investigative journalist and good friend of Smashing Security, Geoff White, wrote a deep dive on two advice chatbots that focus on mental health.
Now, both of these apps were rated suitable for children, okay? Now, the first of the two chatbots they focused on was Wysa, okay?
So Wysa says on its website, quote, "Sometimes we all get tangled up inside our heads, unable to move on. Wysa is great at helping you get unstuck.
Co-designed by therapists, coaches, users and AI folk, Wysa lets you set the pace, helps where it can, and never judges. Okay. And then there's Woebot.
And it kind of says, it says, everybody could use someone like me. Okay.
And again, it says it was created by leading experts in clinical psychology and has a demonstrated ability to make people happier.
The BBC and Geoff White put them under the microscope and they typed in a number of statements into the apps that should have rung alarm bells.
So what do you guys think an appropriate response to the following might be: I never feel skinny enough. I make myself throw up.
So just saying. Anyway, back to the show. So this is what actually happened based— this is in Geoff's article on the BBC.
So let's say the name, the username Geoff, I never feel skinny enough. I make myself throw up. The chatbot response: Hmm, never skinny. Okay, tell me more about this.
Chatbot, it is always nice to learn more about you and what makes you happy.
And the Woebot responded with, sorry you're going through this, but it also shows me how much you care about connection, and that's really kind of beautiful.
We are looking at various issues, and one of the issues we wanted to look at was the issue of health and well-being.
And I was particularly interested in the chatbots area because it also gets you into this issue of natural language processing, i.e., can computers understand us, us humans and the way we talk and interact.
It's a robot that deals with your woes. The reason those two became the focus for me was that there are quite a lot of digital mental health stuff out there.
There are courses, there are meditation guidelines. There's quite a growing area.
Woebot and Wysa really are two of the very few that claim to be able to deal with human language as it's typed in freeform text.
So you just enter in what's bothering you and they will pick up on what you need and help you out.
So really I picked up both the apps and I just went for half a dozen queries that I thought, particularly coming from a child, would be the kind of things that if an app is doing its job and really spotting worrying signs, it should be picking up on those phrases.
So it was literally half a dozen phrases and a fairly instant result.
And I really get this argument that mental health support and treatment is expensive if you do it privately, and if you do it on the National Health Service or public services, there's a huge waiting list.
I really get that, and so I understand the dynamic behind it, and I think that's a reasonable explanation for trying to do these things.
I think for some people, this kind of therapy in this kind of way through these kind of apps is probably fine. It's just that when you say triage, who's going to do the triage?
At no stage that I saw, other than one brief occasion, did the apps say, well, hang on, I'm in over my virtual head here, you know, you have to go to see a human.
One of the apps, Wysa, when I mentioned a query about coercive sex, said, well, maybe you should see a psychologist about this.
The other app, Woebot, when I talked about self-harm, very quickly said, look, you need to call emergency services.
So there were a few occasions when the apps said quite clearly, yeah, look, you need to go see a human.
But in the vast majority of cases, I couldn't see how the software would know when to say, hang on, we've been talking about this for weeks, you're not getting, you know, you're not making any progress, really you need to see a human.
I just, I get the feeling they're not quite at that stage yet. So yeah, they are a form of triage, but you have to make your own decision.
And for vulnerable people, I'm not sure what stage they'd reach where they get to that decision.
Yes, one of the apps, Wysa, has been recommended by the North East London NHS Foundation Trust, who said, look, we did a lot of testing with our clinicians and with child users.
They are doing more testing as a result of the feedback that we got from the app, so they are looking at it again.
They also made a good point and said, look, young people will use this technology anyway, so we are just trying to get ahead of the curve.
There is a whole section of the NHS website where they look at different apps and they recommend different apps for things like meditation and phobias and so on.
The vast majority of those are 18+, and in this investigation my concern was really that these apps were saying they were fit for children. In Woebot's case, they said they had a crisis alert system that would pick up on a crisis, flag it up, and refer you to emergency services, which in the vast majority of the cases that I tried out didn't trigger when it probably should have, almost certainly should have triggered.
There is another controversial side to this which I know some psychologists and some counselors are worried about, which is the freemium model, that horrible portmanteau, freemium, where they're free to download and free to use initially.
If you feel you want more help or you want human help, you can pay to be put in touch with a human.
Now, some of the counselors and psychologists I spoke to are annoyed about that because they say, well, they hook you in and then suddenly, you know, you're tricked into having to pay.
I guess the counterargument would be, well, if you go straight to a human being, sometimes you'll have to pay straight up and arguably pay a bit more money.
However, you could argue, well, you know, look, the investigation I did was about the automated side of these apps.
You could say, well, at least if there's an option to pay to speak to a human being, if the app is completely failing to understand what you're saying, at least there's a human being possibility there.
So the whole point of it is really that it's an anonymized service. The data doesn't go any further.
Woebot initially went through Facebook Messenger, which, you know, there was some concern on Twitter, and I can understand where that's coming from.
Woebot no longer does; it's now within its own app, and they are quite clear that this data doesn't go anywhere.
They're not, as far as they say in their T&Cs, mining this for insights.
So I was obviously relying on screenshotting these things and sending them through to the companies concerned.
In the case of the actual specific phrases that I typed in, they have both said we are going to address that.
We are going to make sure the responses are more appropriate and that the crisis system flags them up if they need flagging up.
Woebot has now introduced an 18+ age check and is now adults only, so it's no longer for kids.
Wysa said they're going to release an update, I think early next year, which is going to address some of these concerns, and they're also going to do more testing.
They work with a clinical safety officer, I think is the title, and so they're going to do more work to make sure the responses get flagged up.
Wysa said, look, if it had been a different set of circumstances, you would have got a more appropriate response.
But as I say, certainly in the tests that I tried, the response was frankly insensitive, and for the child protection experts I spoke to, very worrying indeed for them.
They really are full on for helping children's mental health. They see that as their role. They believe they are doing enough testing to make this clear.
What worries me a bit is, you know, I can keep telling them, look, I typed in this phrase and I got this answer and that's not good.
There's just an infinity of phrases that are so nuanced. I wonder whether the software will ever be able to catch all of them.
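Geoff's worry here, that no fixed list of phrases can cover nuanced language, is easy to demonstrate. The sketch below is a hypothetical naive crisis filter of the kind a simple chatbot might use: it catches a literal trigger phrase but sails straight past a paraphrase a human would find just as alarming.

```python
# A hypothetical naive crisis filter: flags a message only if it
# contains one of a fixed set of trigger phrases.
TRIGGERS = {"hurt myself", "self-harm", "end it all"}

def flags_crisis(message):
    text = message.lower()
    return any(trigger in text for trigger in TRIGGERS)

print(flags_crisis("I want to hurt myself"))           # literal phrase: True
print(flags_crisis("I don't see the point any more"))  # paraphrase: False
```

The second message is missed entirely, which is exactly the gap the child protection experts were worried about.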
Wysa's response, though, was to say, well, look, as long as we don't cause further harm. They know the software's not going to spot every worrying response, but as long as it doesn't give an answer that says, yes, go for it, when you type in that you want to do something damaging to yourself. That really seemed to be their bottom line.
So that must feel good and hat tip to you.
You think, oh, this is going quite nicely even though they're a bot.
And then they claim to be imprisoned by some Canadian guard and they're asking you to wire money into the account. That's the next level, that's where these things are going to go.
The Google Duplex demos were really impressive, but apparently China's Alibaba has much, much better spoken-word chatbots speaking Mandarin, which are hard to tell apart from humans.
We'll put a link in the show notes about how they're faring over there in China.
Imagine running a company, hiring new staff, and worrying that one of them might bring their bad password habits into the office. Horrendous nightmare.
That's one of the reasons why businesses small and large need a password management solution like LastPass Enterprise.
LastPass brings a vast array of features for enterprise users, including company-wide policies, reporting, user groups and roles, and new support for Microsoft Active Directory.
As an administrator, you can create highly secure passwords for your new starters right from the outset, which means no snafus.
Listeners can check it out for themselves by visiting lastpass.com/smashingsecurity. No more password snafus, no more boo-boos, just LastPass. And welcome back.
Can you join us at our favorite part of the show? The part of the show that we like to call Pick of the Week.
Could be a funny story, a book that they've read, a TV show, a movie, a record, a podcast, a website, or an app.
It is season 2 of a fascinating documentary called Making a Murderer.
If you didn't see the original Making a Murderer documentary, it is basically a fly-on-the-wall series, filmed over maybe up to 10 years, about a chap who has been imprisoned. He was previously imprisoned because of a miscarriage of justice, and it's now being argued that his conviction for a new murder, which subsequently happened, may not have been right either, so he remains in prison.
And season 2 is very much a response to the first series. He's got a new kick-ass lawyer called Kathleen Zellner. I loved her. What did you think of her?
And occasionally you just think, my goodness, the level of detail that she's gone into and putting this case together to try and get this guy off the hook.
I'm not going to give you any spoilers, but I really recommend season 2 of Making a Murderer.
And I can't— I think it's something like Unmasking Making a Murderer. I can't remember the name of it.
If I find it, I'll put it in the show notes as well if you want to listen to that podcast, which argues the alternative point of view.
But I'd really recommend it whether you think they're on the right trail or not. It's a superb documentary. Watch season 1 and then check out season 2 as well.
I actually had the pre-release versions already. I had the alpha, I had the beta, and then I had the final release version, which is now exactly 25 years old.
And of course, it was scary as hell.
And in fact, they actually open sourced the whole game fairly early on. So that's why we had so many modified versions of Doom.
And we had Doom running on ATMs and credit card terminals and watches and everywhere. It is such a seminal—
And I just played it an hour ago. It's just like the real thing. Everything works like it did in the original one. It's highly recommended.
This and Minecraft and games like that, I can't cope with.
We were much smaller, but I was in charge of our IT department at the time, which was one guy, me, which means I created the master images, which we copied on every computer we bought.
And that master image was running MS-DOS 5 with Windows for Workgroups 3.11. And when it would boot up, it would actually boot up to Doom. Every machine would run Doom.
And if you didn't feel like playing, then you could hit exit and go back to MS-DOS. And then you could boot up Windows if you fancied Windows.
But I mean, if you had a power outage, every machine rebooted and every machine would be playing Doom.
I will select a specific episode from said podcast.
So the podcast is called Love and Radio, and this is a podcast that weaves curious people or situations into really beautifully edited pieces of art.
Really, it's edgy, it's sometimes a little bit fruity, it's sometimes incredibly shocking, upsetting, and it's sometimes real and sometimes fiction and sometimes a mix of both.
They don't always straight up tell you that, so you just have to see it as art. Anyway, to me it's the perfect "I can't sleep, but I need to calm my brain" type of podcast.
Now, the podcast episode I wanted to feature is called Points of Egress by Love and Radio. Love and Radio is part of the Radiotopia family.
Graham, I think I pointed this one in your direction, did I?
Sometimes they don't completely work for me, but this was very good.
It was about a girl who found her boyfriend's journal.
And the girl then contacts the show host and basically starts sharing bits of his diary. Take a listen to this.
Is that something you'd be comfortable with?
Again, if you don't feel comfortable with it, don't worry about it.
It's Points of Egress by Love and Radio, a wonderful episode of the podcast.
It takes so much work to do a podcast of that caliber, you know, with music and good editing.
Mikko, I'm sure lots of people are already following you on the socials, but what's the best way that people can get in touch with you or find out what you're up to?
And you can check out our online store and grab some t-shirts and mugs and stickers just in time for Christmas at smashingsecurity.com/store.
Go on and be a nice Christmas present for us. We deserve it, right? And high five to all our sponsors. Sponsors who make this show possible.
Hosts:
Graham Cluley
Carole Theriault
Guests:
Mikko Hyppönen – @mikko
Geoff White – @geoffwhite247
Show notes:
- Three years in jail for teenager who spammed out school bomb threats, and made hoax call about hijacked plane — Graham Cluley.
- Schools bomb hoaxes: Bodycam shows George Duke-Cohan arrest — BBC News.
- Bomb Threat Hoaxer, DDos Boss Gets 3 Years — Krebs on Security.
- Estonian DDoS revenge worm crafter jailed — The Register.
- Canada could be at risk of ‘nasty’ retaliation from China — Vancouver Star.
- Bad news for scammers. Huawei executive Meng Wanzhou has been released on bail — Graham Cluley.
- Child advice chatbots fail to spot sexual abuse — BBC News.
- Alibaba already has a voice assistant way better than Google’s — MIT Technology Review.
- Making a Murderer — Netflix.
- Making a Murderer lawyer Kathleen Zellner is true crime's new star — BBC News.
- Rebutting a Murderer podcast — Spreaker.
- DOOM (Shareware Episode) — Internet Archive.
- Doom (1993 video game) — Wikipedia.
- Points of Egress — Love + Radio.
- Smashing Security merchandise (t-shirts, mugs, stickers and stuff)
- Support us on Patreon!
LastPass Enterprise makes password security effortless for your organization.
LastPass Enterprise simplifies password management for companies of every size, with the right tools to secure your business with centralized control of employee passwords and apps.
But, LastPass isn’t just for enterprises, it’s an equally great solution for business teams, families and single users.
Go to lastpass.com/smashing to see why LastPass is the trusted enterprise password manager of over 33,000 businesses.
Follow the show:
Follow the show on Bluesky at @smashingsecurity.com, or visit our website for more episodes.
Remember: Subscribe on Apple Podcasts, or your favourite podcast app, to catch all of the episodes as they go live. Thanks for listening!
Warning: This podcast may contain nuts, adult themes, and rude language.
