Cyber

AI-driven systems

by Mark Rowe

Ensuring the integrity of data is key to a secure AI-driven enterprise, writes Rajesh Ganesan, Vice President of IT management software company ManageEngine.

If Artificial Intelligence (AI) hasn’t formed part of the conversation in your enterprise yet, then you are behind the curve. According to IDC, annual spending on cognitive and AI systems is set to reach $77.6 billion in 2022. On a geographic basis, the US will deliver more than 60 per cent of all spending on cognitive/AI systems, led by the retail and banking industries. Western Europe will be the second largest region, also led by banking and retail.

AI and automation could generate an additional £630 billion for the British economy, according to Accenture, while PwC claims the country's GDP will be 10.3 per cent higher in 2030 as a result of the developing technology. Some sectors will see wholesale business model transformation, while others will find localised opportunities to deploy machine learning.

While AI technology is transforming business and industry at a rapid clip, organisations looking to leverage it must ensure the integrity of the data which powers their systems by addressing three key issues.

Safe collection and storage of data

Data and data analytics are at the heart of AI projects, and the integrity and security of the customer and company information used to inform AI systems are critical to their effectiveness. Successful exploitation of AI's enormous potential requires systems which can collect and store significant volumes of data safely. Business data and AI models may contain sensitive information, including personal data belonging to customers and employees, which makes them a hot target for hackers and cyber-criminals. Implementing multiple layers of security, including encryption, particularly while data is in use within AI models, may be necessary to ensure it's not stolen or compromised.
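One simple layer in such a stack is an integrity check on stored training data. The sketch below is an illustration only, not ManageEngine's implementation: it attaches an HMAC-SHA256 tag to a dataset so that tampering at rest is detectable (the key and records are invented for the example).

```python
import hashlib
import hmac

def sign_dataset(data: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag for a stored dataset."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, key: bytes, tag: str) -> bool:
    """Return True only if the data matches the tag (constant-time compare)."""
    return hmac.compare_digest(sign_dataset(data, key), tag)

# Hypothetical key and records; in practice, load the key from a secrets vault.
key = b"key-loaded-from-a-vault"
records = b"customer_id,spend\n1,100\n2,250\n"

tag = sign_dataset(records, key)
assert verify_dataset(records, key, tag)                     # untouched data passes
assert not verify_dataset(records + b"3,999\n", key, tag)    # tampering is detected
```

Note that an HMAC proves integrity, not confidentiality; encryption of the same data would be a separate layer on top.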

Preventing concept drift

AI models are only as good as the data that powers them. If operational data changes significantly, the models which rely upon it can be rendered irrelevant, and predictions or actions based on them may result in security issues such as data theft, exposure or deletion. This phenomenon, known as concept drift, can also represent a form of adversarial action if it's engineered by an individual or organisation that deliberately feeds inaccurate or misleading data to business applications. AI systems' intelligence does not extend to ascertaining whether the data they are using is right or wrong; feeding an AI solution erroneous information over an extended period can result in its 'learning' the wrong methods, models and rules.
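A basic defence is to monitor incoming data against a trusted reference window and raise an alert when the distribution shifts. The sketch below is a hypothetical monitor with a threshold chosen purely for illustration: it flags drift when a new window's mean strays more than a few reference standard deviations from the baseline.

```python
from statistics import mean, stdev

def drift_alert(reference: list[float], incoming: list[float],
                threshold: float = 3.0) -> bool:
    """Flag possible concept drift when the incoming window's mean strays
    more than `threshold` reference standard deviations from baseline."""
    ref_mean = mean(reference)
    ref_sd = stdev(reference)
    shift = abs(mean(incoming) - ref_mean) / ref_sd
    return shift > threshold

# Invented example values: a stable feed versus a suspicious jump.
reference = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
assert not drift_alert(reference, [10.1, 9.9, 10.4])   # stable data, no alert
assert drift_alert(reference, [25.0, 26.5, 24.8])      # sharp shift, alert raised
```

A production system would compare full distributions rather than means alone, but even this crude check catches the sustained, deliberate skewing described above.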

Stamping out human bias

It is reasonable to assume that the predictions and responses generated by an AI-powered system would be impartial and unbiased. However, that's not necessarily the case. Human bias can be engineered into AI systems if skewed or non-inclusive data is fed into the business applications upon which AI models are based. Once this has been accomplished, attackers may be able to control the functioning of the business application to generate non-objective outcomes in their own favour, or that of their allies.
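One practical guard is to audit training data for skew before it reaches the model. The sketch below is illustrative only, with an invented fairness threshold: it compares positive-outcome rates across groups in labelled training samples and flags datasets where one group's rate diverges sharply.

```python
from collections import defaultdict

def outcome_rates(samples: list[tuple[str, int]]) -> dict[str, float]:
    """Positive-outcome rate per group, for (group, outcome) training samples."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in samples:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def skew_flag(samples: list[tuple[str, int]], max_gap: float = 0.2) -> bool:
    """Flag the dataset when group outcome rates differ by more than max_gap."""
    rates = outcome_rates(samples).values()
    return max(rates) - min(rates) > max_gap

# Made-up samples: one balanced dataset, one where group "a" is favoured.
balanced = [("a", 1), ("a", 0), ("b", 1), ("b", 0)]
skewed = [("a", 1), ("a", 1), ("a", 1), ("b", 0), ("b", 0), ("b", 1)]
assert not skew_flag(balanced)   # rates 0.5 vs 0.5, within tolerance
assert skew_flag(skewed)         # rates 1.0 vs ~0.33, flagged for review
```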

Why explainable AI is a powerful protection measure

The implementation of AI-driven systems is set to have a significant impact on UK businesses, with 43 per cent of respondents in ManageEngine's State of IT in the UK 2019 Report saying AI will play the biggest role in their business operations in the next 12 months. Experts warn that those who don't implement AI will find themselves struggling to compete.

Making security integral to AI initiatives means companies can enjoy the advantages without opening their systems and operations up to additional risk in the process. Investing in ‘explainable AI’ is one way in which organisations can protect their systems from the perils of infiltration, exploitation and the introduction of erroneous information.

By enabling AI systems to explain the rationale for their predictions and actions before they are put into play, explainable AI gives individuals the opportunity to counter factors such as introduced bias before the enterprise is damaged as a result. Organisations which invest in this capability may be better placed to enjoy the significant benefits AI-driven systems can deliver.
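For a simple linear model, that explanation can be as direct as listing each input's contribution to the score. The sketch below is a toy risk scorer with invented weights and feature values, not a production explainer: breaking a prediction down this way lets a reviewer spot an implausibly influential feature before acting on the output.

```python
def explain_linear(weights: dict[str, float],
                   features: dict[str, float]) -> dict[str, float]:
    """Per-feature contribution to a linear model's score: weight * value."""
    return {name: weights[name] * features[name] for name in weights}

# Hypothetical fraud-risk model: names and numbers are purely illustrative.
weights = {"login_failures": 0.8, "transfer_amount": 0.5, "account_age": -0.3}
features = {"login_failures": 4.0, "transfer_amount": 2.0, "account_age": 6.0}

contributions = explain_linear(weights, features)
score = sum(contributions.values())

# A reviewer can see at a glance that login_failures dominates this score.
assert contributions["login_failures"] == 0.8 * 4.0
assert abs(score - 2.4) < 1e-9   # 3.2 + 1.0 - 1.8
```

Richer models need richer explainers, but the principle is the same: expose the rationale in terms a human can challenge.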


© 2024 Professional Security Magazine. All rights reserved.
