AI misuse report

by Mark Rowe

Could a ‘service robot’ be hacked from afar to carry out an attack indoors – undeterred by smoke or darkness? Might a terrorist repurpose a commercial AI system, such as a cleaning robot, to carry out an assassination? Could cyber criminals use AI techniques to automate various tasks, such as dialogue with victims of ransomware? Might AI be used for social engineering – using victims’ online information to automatically generate custom malicious websites, emails or links they would be likely to click on, sent from addresses that impersonate their real contacts, using a writing style that mimics those contacts? How about fake news – highly realistic videos of state leaders appearing to make inflammatory comments they never actually made? These are among the scenarios in a report on how AI could be misused; and how we might counter such misuses.

Artificial intelligence (AI) could be used by rogue states, criminals, and terrorists, says a report by academics and researchers in the field. The 101-page downloadable report – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation – points to three ‘domains’: digital, physical and political security. To take them in order:

The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the tradeoff between the scale and efficacy of attacks. This may expand the threat associated with labour-intensive cyberattacks (such as spear phishing). The report’s UK and US authors also expect novel attacks that exploit human vulnerabilities (such as the use of speech synthesis for impersonation), software vulnerabilities (through automated hacking), or the vulnerabilities of AI systems (‘data poisoning’, for example).

In the physical world, we could see the use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (such as through the deployment of autonomous weapons systems). We can also expect novel attacks that subvert cyber-physical systems (causing autonomous vehicles to crash, for example) or involve physical systems that it would be infeasible to direct remotely (such as a swarm of thousands of micro-drones).

As for politics, the use of AI could automate tasks involved in surveillance (analysing mass-collected data), persuasion (targeted propaganda), and deception (manipulating videos). That may mean threats of privacy invasion and social manipulation. AI may mean an improved capacity to analyse human behaviours, moods, and beliefs on the basis of data. That may be of use to authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.

Comments

Miles Brundage, Research Fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Dr Seán Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk, called AI a ‘game changer’: “For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”

Visit www.maliciousaireport.com. The AI report is also featured in the University of Cambridge research magazine, Research Horizons, as part of a spotlight on AI research.

© 2024 Professional Security Magazine. All rights reserved.