As BBC News reports, British Home Secretary Amber Rudd has announced a tool that is said to be able to detect and block extremist online content, such as Islamic State propaganda videos.
London-based ASI Data Science has received £600,000 of taxpayers’ money to develop the software, which was trained on thousands of hours’ worth of video content posted by IS.
ASI Data Science says that its software can detect 94% of IS videos.
Of course, you can’t just focus on what percentage of extremist videos are detected. That number is meaningless without also weighing up how many videos are mistakenly identified as extremist.
Let me put it this way. It’s easy to write a detection routine that picks up all past, present, and future extremist videos – 100%! The only problem is that it would also flag everything that wasn’t an extremist video.
So, let’s hear what the false alarm rate is:
ASI Data Science said the software can be configured to detect 94% of IS video uploads.
Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.
The company said it typically flagged 0.005% of non-IS video uploads. On a site with five million daily uploads, it would flag 250 non-IS videos for review.
So, the important question is just how many extremist IS videos are uploaded each day to that site with five million daily uploads. If it’s a relatively small number compared to the number of false alarms, then you’ve got a problem.
And that’s before you consider the 6% of extremist videos that slip past the detection routine, or that anyone wanting to share this kind of stuff might very well put it on the dark web instead (which really isn’t hard to access, despite what you may read in some reports).
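The arithmetic above is the classic base-rate problem, and it can be sketched in a few lines. The figures for uploads, detection rate, and false-alarm rate come straight from the article; the daily number of genuine IS uploads is not given anywhere, so it’s treated here as a hypothetical parameter that we vary.

```python
# Base-rate arithmetic behind the 94% detection claim, using the
# figures quoted in the article. The true number of IS uploads per
# day is unknown, so we treat it as an assumption and vary it.

DAILY_UPLOADS = 5_000_000   # site size quoted by ASI Data Science
DETECTION_RATE = 0.94       # share of IS videos the software catches
FALSE_ALARM_RATE = 0.00005  # 0.005% of non-IS uploads wrongly flagged

def flag_stats(is_uploads_per_day: int):
    """Return (true flags, false flags, precision) for a given base rate."""
    true_flags = DETECTION_RATE * is_uploads_per_day
    false_flags = FALSE_ALARM_RATE * (DAILY_UPLOADS - is_uploads_per_day)
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

for n in (10, 100, 1000):
    caught, false_alarms, precision = flag_stats(n)
    print(f"{n:>5} IS uploads/day: {caught:.0f} caught, "
          f"{false_alarms:.0f} false alarms, precision {precision:.0%}")
```

With 100 genuine IS uploads a day, roughly 94 are caught but about 250 innocent videos are flagged alongside them, so only around a quarter of flagged videos are actually IS material – and with 10 uploads a day, fewer than 4% are. That is the sense in which the false-alarm rate, not the headline 94%, decides whether the human reviewers drown.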
Hundreds of innocent journalists and others will have their videos mis-categorised every day, on that one video site alone. Unless enough people are thrown at the problem of verifying whether content is legitimate, there will inevitably be a long backlog of video content waiting to be approved, and accusations of censorship.
That’s manpower that is unlikely to be available to the smaller technology companies the UK Government appears to be targeting with the software, as it acknowledges that the likes of Google, Facebook and other internet giants are already trying to weed out extremist content.
Furthermore, I didn’t see any information shared about how long the software takes to determine whether a video might be extremist. That’s something that would definitely be of interest.
But hey, what do I know? I’m just a “sneering” and “patronising” technology expert.
Amber Rudd told BBC News that the government would not rule out using legislation to force companies to adopt technology like that produced by ASI Data Science.