Interviews

Terrorist online content detection

by Mark Rowe

The Home Office reports developing new technology to automatically detect terrorist content on any online platform. Working with ASI Data Science, it uses machine learning to analyse the audio and visuals of a video to determine whether it could be Daesh propaganda. The Home Office claims that in tests the new tool can automatically detect 94pc of Daesh propaganda with 99.995pc accuracy.
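The two headline figures measure different things and are easy to conflate: 94pc is a detection (true-positive) rate, while 99.995pc accuracy implies a very small share of innocent videos wrongly flagged. As a rough sketch of what those numbers mean in practice (the upload and propaganda volumes below are hypothetical, and reading 99.995pc as a 0.005pc false-positive rate is an interpretation, not a Home Office statement):

```python
# Illustrative arithmetic only: treating the reported figures as a
# 94% true-positive rate and a 0.005% false-positive rate.
detection_rate = 0.94               # share of Daesh videos correctly flagged
false_positive_rate = 1 - 0.99995   # share of innocent videos wrongly flagged

uploads = 1_000_000   # hypothetical daily uploads to a platform
propaganda = 100      # hypothetical Daesh videos among those uploads

caught = propaganda * detection_rate
false_alarms = (uploads - propaganda) * false_positive_rate

print(f"Flagged correctly: {caught:.0f} of {propaganda} propaganda videos")
print(f"False alarms: {false_alarms:.0f} of {uploads - propaganda:,} innocent videos")
```

On these assumed volumes, roughly 94 of 100 propaganda videos are caught while about 50 innocent videos in a million are flagged for human review, which is why a very high accuracy figure still matters at platform scale.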

The Home Office says that it and ASI will be sharing the methodology behind the new model with smaller companies. As the Government says, many of the major tech companies have developed technology specific to their own platforms. Smaller platforms, however, are increasingly targeted by Daesh and its supporters.

The model, which has been trained using over 1,000 Daesh videos, is not specific to one platform so can be used to support the detection of terrorist propaganda across a range of video-streaming and download sites in real-time.

Home Secretary Amber Rudd, pictured, said: “Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters. I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster. The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images. This Government has been taking the lead worldwide in making sure that vile terrorist content is stamped out.”

Meanwhile on a two-day visit to San Francisco, the Home Secretary will meet US Secretary of Homeland Security Kirstjen Nielsen.

Comments

Bill Evans, Senior Director at One Identity, an IAM (Identity and Access Management) product company, said: “While on the surface this seems like a wonderful idea – to protect the homeland from the insidious content being distributed via cyberspace by our enemies in the war on terror – it is indeed a slippery slope. To be sure, there is evidence that this technology has proven effective; the tool can reportedly detect 94pc of Daesh propaganda with 99.995pc accuracy. However, there are longer-term risks to using this type of technology that can “automagically” find and remove “terrorist” content.

“The most pressing risk relates to how this new technology could be used in the future. For example, most ordinary people today would likely agree on what constitutes “terrorist content.” However, if the government were to mandate the use of this technology, we would be at the mercy of what would be tantamount to censors, who would determine where that line actually lives. Ads from rival political parties? Information about views that are not congruent with a certain set of beliefs? These could all end up being removed from view by a government-mandated tool. Likely? Probably not, but possible.

“It’s best for the government to work with the private sector to enhance the efficacy of this type of technology and then encourage the various internet platform vendors to use it. In a political system which enshrines free speech as a basic right of the people, that’s a much better deployment scenario than invoking the “thought police.””


© 2024 Professional Security Magazine. All rights reserved.

Website by MSEC Marketing