Data in an AI system

Joe Michael, Solutions Architect at enterprise AI product firm IPsoft, covers the key considerations for businesses when it comes to securing customer data with AI.

Artificial Intelligence (AI)-powered solutions are completely upending customer expectations for their interactions with digital systems. AI systems provide automated 24/7 omni-channel access to information and services, enabling enterprises to expand offerings and provide a more compelling user experience, without increasing overhead expenses. However, as businesses invest in cognitive AI technologies to improve both customer and employee satisfaction, they must bear in mind that these innovations are worthless if users don’t trust that their data will be properly secured. Case in point: See the demise of SaaS provider Code Spaces, which went out of business following a hack.

Enterprises must work around the clock to keep their networks as secure as possible, but many IT and security teams are unfamiliar with securing this nascent technology. Here are four things enterprises must keep in mind for securing customer and employee data with AI.

Data at rest is considered inherently more secure than data in transit. Keeping this in mind, an AI solution deployed entirely on-premises may require fewer data transits than one deployed in the public cloud; however, this sacrifices the secure backups and other redundancies enabled by the cloud. Still, if proper security measures are undertaken by both the cloud provider and the customer, implementing in the cloud is generally considered a safe endeavour.

Regardless of where the solution lives, all systems should encrypt data to ensure that information isn’t readily accessible to unauthorised parties in the event of a breach. One important trend in cryptography for AI systems is differential privacy, which protects large data sets by adding carefully calibrated statistical noise. This masks any individual user’s data while keeping the dataset usable for machine learning algorithms.
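To make the idea concrete, here is a minimal sketch of the standard Laplace mechanism for a counting query. The dataset, function names and epsilon values are illustrative, not from any particular product; real deployments use hardened libraries rather than hand-rolled noise generation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Inverse-CDF sample from a zero-mean Laplace distribution."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Count matching records with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with
    scale = 1/epsilon is enough to mask any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: user ages. An analyst sees only the noisy count,
# so no single user's inclusion can be confirmed from the answer.
ages = [23, 35, 41, 52, 29, 67, 44, 38]
noisy_over_40 = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the aggregate statistics a machine learning model needs remain approximately correct across many such queries.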

One of the most interesting and increasingly popular applications of AI is in the conversational realm. These solutions allow customers to speak to a digital system just as they would to a human. There is a lot of potential for transforming the user experience (UX), but it is crucial to remember that this ease of digital access brings new considerations for authentication.

Conversational AI technologies now allow users to access their banking information directly through their Amazon Echo. Yet, few users would want this information accessible to any visitor in their home asking a simple question like, “Alexa, how much money is in my current account?”. Conversational AI needs to be accompanied by the same strong access barriers one would encounter through a Web or mobile interface. This virtual barrier could include complex passwords, biometrics or multi-factor authentication – the more sensitive the information, the more secure the barrier should be.
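As a sketch of such a barrier, the fragment below gates sensitive voice intents behind a one-time code. All names are hypothetical, and the time-based code check is deliberately simplified for illustration; a production system would use a standard TOTP implementation and a real banking API.

```python
import hashlib
import hmac
import time

SENSITIVE_INTENTS = {"account_balance", "recent_transactions"}

def totp_ok(secret: bytes, code: str, window: int = 30) -> bool:
    """Very simplified time-based one-time code check (illustration only)."""
    counter = int(time.time() // window)
    digest = hmac.new(secret, str(counter).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest[:6], code)  # timing-safe comparison

def handle_intent(intent, user_secret, supplied_code=None):
    """Route a recognised voice intent, gating sensitive ones behind a code."""
    if intent in SENSITIVE_INTENTS:
        if supplied_code is None or not totp_ok(user_secret, supplied_code):
            return "Please confirm the one-time code from your mobile app first."
        return "Authenticated - fetching your account details."  # would call the banking API
    return "Handling non-sensitive request."
```

The design point is that the assistant answers harmless questions freely but refuses sensitive ones until a second factor is presented, exactly as a web or mobile interface would.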

AI-powered systems, and conversational systems in particular, are relatively new technologies, meaning that, as with any new technology, there are bound to be security concerns of which we are not yet aware. This is why decision makers must keep up with the latest InfoSec (Information Security) trends.

For example, voice authentication had previously been considered a reliable biometric that would only permit access based on a voice pattern unique to each person. This would appear to be an ideal solution for the conversational AI age. However, last year a BBC reporter was able to fool a major bank’s voice authentication software by using his non-identical twin brother as a stand-in. And perhaps more alarmingly, researchers recently showed the ability to fool voice authenticators using mimicking software. One of those researchers stated that voice authentication should be treated as “a weak signal on top of multi-factor authentication” – it should never be deployed as a standalone level of protection.
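Treating voice as "a weak signal on top of multi-factor authentication" can be sketched as a simple score-fusion policy. The weights and threshold below are illustrative assumptions, not values from any real product; the key property is that a perfect voice match can never substitute for a hard factor.

```python
def grant_access(voice_score: float, has_valid_otp: bool,
                 device_recognised: bool) -> bool:
    """Require a hard possession factor; voice and device only add confidence.

    voice_score is a 0.0-1.0 match score from a (spoofable) voice model.
    """
    if not has_valid_otp:
        return False                          # voice alone never suffices
    confidence = 0.5                          # base: possession factor verified
    confidence += 0.3 * voice_score           # weak biometric signal
    confidence += 0.2 if device_recognised else 0.0
    return confidence >= 0.7
```

Under this policy a mimicked or twin-matched voice gains an attacker nothing without the one-time code, which is precisely the failure mode the BBC and research demonstrations exposed.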

In our digital age, customers and organisations alike are demanding increasingly intelligent and intuitive functions. But as enterprises rush to stay ahead of customer expectations by implementing AI solutions, they should always be mindful of staying ahead of the hackers, phishers and bad guys.

These key security considerations are critical for any organisation exploring and deploying AI systems, because no matter what new innovations are proposed, consumers won’t readily forgive a data breach.

See also the IPsoft blog.

