Dr Ian Levy, technical director of the UK’s National Cyber Security Centre, has criticised security companies for “massively” exaggerating hackers’ abilities in order to scare businesses into making purchases.
As BBC News reports, Levy criticised the hyperbolic imagery and language used by security firms when describing threats:
Playing up the threats let security firms establish themselves as the only ones that could defeat hackers with hardware that he likened to a “magic amulet”.
“It’s medieval witchcraft – it’s genuinely medieval witchcraft,” said Dr Levy.
Often, he added, the attacks aimed at firms were not very sophisticated. As an example, he quoted an attack last year on a UK telecommunications firm that used a technique older than the teenager believed to be responsible for the incident.
The telecoms firm being referred to is TalkTalk which, despite its attempts to convince customers that it had been the victim of a highly sophisticated attack, had in fact fallen foul of a bog-standard SQL injection attack – the type that any decent web programmer learns about on the first day of their secure coding course.
You don’t need to be a state-sponsored hacker to perpetrate an SQL injection attack. Just about any teenager can manage it from their back bedroom with ease.
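To illustrate just how basic the flaw is (this sketch is mine, not from the article, and uses Python's standard sqlite3 module with a made-up `users` table), here is the textbook mistake alongside the textbook fix:

```python
import sqlite3

# A tiny in-memory database standing in for a real web app's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: attacker-controlled input is concatenated straight into SQL.
# Input like "' OR '1'='1" turns the WHERE clause into one that is always true.
user_input = "' OR '1'='1"
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(leaked)  # [('alice',)] -- every row comes back, not just a matching user

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- nobody is literally named "' OR '1'='1"
```

The fix is a one-character placeholder; that is why "highly sophisticated attack" rings so hollow as a description of SQL injection.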
Similarly, the high-profile hack of email accounts belonging to senior figures in the US Democratic party doesn't appear to have been that sophisticated either – relying instead on a fairly rudimentary phishing email, assisted by a victim's poor password hygiene and a lack of multi-factor authentication.
Cybercriminals are often not geniuses for a very good reason. They don’t need to be. We make it too easy for them to succeed.
And often it will be a human failing which gives the malicious hacker the opportunity they need to break in and infect computers or steal information.
By the way, Levy seems quite refreshing in his approach, as The Register‘s write-up of his speech makes clear:
In November, the agency published its National Cyber Security Strategy 2016 to 2021 detailing these plans, and Levy suggested people take a read because “for a government strategy review it’s not completely crap.” The NCSC wants to promote “active security” – not active as in attacking but active as in “getting off your arse and doing something.”
You can read the UK Government’s Cyber Security Strategy here.
Dr Levy knows that there have been sophisticated state-sponsored attacks against companies that are critical to UK defence. TalkTalk was not one of those, but it tried to use APT as an excuse. Many British companies (and probably government institutions) have poor IT security and do not adequately protect their information (including customer information).
You know it, I know it, and the focus should be on getting better – which is hopefully what the UK Gov CSS will achieve.
Yes, you're right of course. There are sophisticated attacks, and many of them may have the backing of foreign states (and sometimes even intelligence agencies closer to home).
Although they should be taken seriously, for most businesses they aren't the threat they are most likely to encounter. If we can deal better with the mainstream attacks, maybe we'll have a better chance of devoting decent resources to tackling these rarer, more sophisticated threats.
Meanwhile, let's hope that companies stop using the "highly sophisticated attack" description to disguise their own embarrassing failures.
I agree with this wholeheartedly, but laughed out loud when I read "the type that any decent web programmer learns about on the first day of their secure coding course." One of our biggest problems in security (in my opinion) is that developers are not trained in security when in school, and most companies are not yet convinced to spend the money on training their developers in secure code development. We security professionals must convince our executive teams to make the investment in secure code development training as a minimum requirement for web and mobile app developers. Otherwise, we'll just continue to see far more money spent on breach response and bug bounty payouts. This is an area close to my heart; I'm a champion of such training as a VP with NotSoSecure, a boutique pentesting and hacker training company. Feel free to reach out if you see an opportunity to collaborate: debra at notsosecure.com
I work in network security and have done for twenty years. I did not realise that telling a company that a (well configured and managed) web application firewall could prevent loss of confidential data by identifying and blocking common attacks such as SQL injections was exaggerating hackers' abilities and thus part of the problem.
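For readers unfamiliar with how a WAF spots such attacks: the core idea is signature matching against incoming request parameters. The sketch below is a deliberately toy illustration of that idea (the patterns and function are mine, hypothetical, and nowhere near as rich as a real rule set such as the OWASP Core Rule Set):

```python
import re

# Hypothetical, drastically simplified signatures of the kind a WAF
# checks request parameters against. Real rule sets are far larger
# and also normalise encodings before matching.
SQLI_PATTERNS = [
    re.compile(r"('|\")\s*or\s+('|\")?\d*('|\")?\s*=", re.IGNORECASE),  # ' OR '1'='1
    re.compile(r"union\s+select", re.IGNORECASE),                      # UNION-based extraction
    re.compile(r";\s*drop\s+table", re.IGNORECASE),                    # piggy-backed statements
]

def looks_like_sqli(param: str) -> bool:
    """Return True if a request parameter matches a known SQLi signature."""
    return any(p.search(param) for p in SQLI_PATTERNS)

print(looks_like_sqli("' OR '1'='1"))                       # True  -- blocked
print(looks_like_sqli("1 UNION SELECT password FROM users")) # True  -- blocked
print(looks_like_sqli("O'Brien"))                            # False -- a legitimate apostrophe passes
```

Signature matching is a useful safety net, but it is pattern-based and can be evaded, which is why it complements rather than replaces parameterized queries in the application itself.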
Clearly data breaches, such as the one observed at TalkTalk, have been caused by security companies using fear-uncertainty-doubt techniques and UK businesses buying into it – and not, as previously thought, by organisations failing to spend sufficient resource on protecting customer data because they have yet to fall victim to an attack, for example.
*tsk*
I have heard a lot about the incidence of FUD techniques from security companies – but is it really that common – or indeed successful? Where's the evidence? Almost all network and security decision-makers I have spoken to over twenty years have been more than capable of ignoring supplier hyperbole. They have more difficulty justifying spending a lot of money when the company has not felt financial pain from a successful attack. They cannot convince the board – or decide to run the risk themselves.
Data breaches seem to stem from a risk assessment and budget/resource allocation failure more often than a security firm selectively using stats to big-up their solution.
I think Dr. Levy's hyperbolic comments only serve to fuel the fear of dealing with sales people and that can only harm UK network security as it places an additional barrier between organisations and people who know something about how to address threats. Now who's using FUD?
The more sophisticated the attack, the easier it is to attribute to someone or something. There was a presentation on this at RSA last year. ( https://www.vircom.com/blog/6-things-i-learned-at-rsa-2016-in-san-francisco/ )
Raising awareness of cyber-security is no easy task. Security products are fundamentally 'disabling' technologies, in the sense that they 'stop or slow down bad stuff' and you don't really see them working until they actually stop working and something goes wrong.
Humans of course over-estimate their luck and assume security by obscurity will be adequate. We tend to be incredibly reactive, and only act when there's an actual failure, breach or intrusion. All the warnings and threats in the world won't make someone act; there have been several articles on 'security fatigue', where users are so blasé and inured to all the breach information out there that they aren't even listening any more. Ironically, many security experts are inured in another sense, considering that prevention is futile and the only hope is detection, containment and recovery.
So then what is the solution to the dilemma? Probably what it has always been in similar contexts: education, education, education. Cybersecurity knowledge is best built and achieved by informing customers and the public about the benefits and progressively training on tips, tricks, actions, habits and processes that keep people protected. It's the long game, scaring them simply doesn't work.
Just a couple of thoughts here.
When looking at recent successful attacks, I have to think of the annual DBIR from Verizon, which has been saying basically the same things for years now. After all, many incidents boil down to one stupid mistake (or a series of mistakes) that you shouldn't even have made in the first place – think SQLi, phishing, social engineering, the lot.
The term "sophisticated" is often thrown around too easily, which kind of invalidates the term. It now becomes an alternative term for "we were unable to detect it earlier through the means at our disposal". The fact that something might remain undetected for a period of time is by no means an indicator of its actual level of "sophistication" (or lack thereof).
Many organizations also tend to worry about threats they'll probably never have to face, like being targeted with 0day-based attacks. There is a quote which I think fits nicely here:
"The war you prepare for is rarely the war you get."