Vertical Markets

AI and national security

by Mark Rowe

AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound, says GCHQ Director Jeremy Fleming. The occasion is the publication by the Government listening station at Cheltenham of a paper, Ethics of AI: Pioneering a New National Security.

Jeremy Fleming says: “AI is already invaluable in many of our missions as we protect the country, its people and way of life. It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.

“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.

“Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”

Examples of how the signals intelligence agency could use the technology include:

– Fact-checking and detecting deepfake media to tackle foreign state disinformation;
– Mapping international networks that enable human, drugs and weapons trafficking;
– Analysing chat rooms for evidence of grooming towards child sexual abuse; and
– Analysing activity at scale, through the UK's official National Cyber Security Centre (NCSC), to identify malicious software.

You can view the 21-page paper on the GCHQ website.

As the paper says, Artificial Intelligence – a form of software that can learn to solve problems at a scale and speed impossible for humans – is already transforming sectors such as healthcare, telecommunications and manufacturing. AI software informs satnavs, guides internet searches, and protects an electronic purchase or an app opening on a smartphone. The paper sets out GCHQ’s AI and Data Ethics Framework, and how the agency intends to use AI in operations.

Comment

Adam Enterkin, SVP, EMEA at the cyber firm BlackBerry says: “Even the best cybersecurity teams have had major challenges this last year. There has been a sharp rise in both the number of cyberattacks against organisations and the sophistication levels of these threat actors, as shown in our latest 2021 Threat Report. In addition, the spread of disinformation has become a significant problem, particularly with regards to the pandemic. Threat actors possess the tools once thought to be solely the domain of nation-state attackers – allowing them to carry out more frequent, skilful and targeted attacks. As such, the danger for companies and governments is at an all-time high.

“Under these circumstances, both private and national security groups such as GCHQ are right to turn to new technologies. AI can help manage the volume of potential threats, spotting anomalies in data and dealing with menial and repetitive tasks, whilst flagging potentially serious situations to cyber security teams.

“Security teams are exhausted. Human error will happen. But with AI automating repetitive tasks, this risk is vastly reduced. Humans and tech must work hand in hand, so the professionals are equipped with the right knowledge and skillsets to keep our enterprises, and our country, safe.”


© 2024 Professional Security Magazine. All rights reserved.
