
AI cybercrime is coming

by Mark Rowe

Cybercrime, meet AI, writes Terry Ray, SVP and Fellow, at the cyber product firm Imperva.

One of the eternal truisms about cybersecurity is that it's a cat and mouse game – and cybersecurity often seems to be a step behind. Now, however, with artificial intelligence (AI) – essentially advanced analytical models – coming onto the market, cybersecurity actually has the edge. At present, vendors are doing far more with AI than hackers are. We can't expect it to stay that way forever, but right now the good guys have the upper hand – and that gives the industry some time to prepare itself for the eventual rise of AI-enabled cybercriminals.

The value of AI here is that it lets companies take large volumes of information and find clusters of similarity. Finding such patterns has always been part of cybersecurity, but time and resourcing constraints mean organisations are often unequipped to do it in sufficient depth. By contrast, AI can whittle down vast quantities of seemingly unrelated data into a few actionable incidents at speed, giving companies the ability to quickly pick out potential threats in a huge haystack.
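
As a rough sketch of what that sifting looks like in practice – with invented data and feature choices, not Imperva's actual tooling – an off-the-shelf clustering algorithm can pull the one anomalous event out of a pile of routine ones:

    # Hypothetical sketch: clustering access events so that the handful of
    # outliers surface from a large volume of routine activity. Data and
    # feature choices are invented for illustration, not any vendor's method.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    # Each row is one event: [hour of day, bytes read, distinct tables touched]
    events = np.array([
        [9, 1_200, 2], [10, 900, 1], [11, 1_500, 2],
        [14, 1_100, 2], [15, 800, 1], [16, 1_300, 2],
        [3, 9_000_000, 40],   # bulk read at 3am: the needle in the haystack
    ])

    scaled = StandardScaler().fit_transform(events)
    labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)

    # DBSCAN labels points that fit no dense cluster as -1; those few
    # events are what a human analyst actually needs to look at.
    for event, label in zip(events, labels):
        if label == -1:
            print("investigate:", event)

The specific algorithm matters less than the shape of the work: the model does the sifting, and the analyst only sees the residue worth investigating.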

The ability to quickly turn large amounts of data into actionable insights is something that cybersecurity teams are going to need in the coming years, because AI could become a formidable enemy. Unlike malware, which is purely automated, AI is beginning to mimic humans to a worryingly accurate degree. It can draw pictures, age photographs of people, write well enough to persuade people of truths – or lies.

This means that AI could theoretically industrialise the reproduction of human hacking tactics, which are currently the most damaging but also the most time-consuming form of attack. The hardest hacks to detect are those performed by humans – digging into systems, watching user behaviour and finding or installing backdoors. Attacks performed with automated tools are much easier to detect: they bang around, they hit things, they find the backdoor by knocking on every wall. The sneaky thief is harder to find.

Hackers aren't yet creating 'AI-driven sneaky thieves', but they could. AI could be used to build an independent, patient, intelligent and targeted attacker that waits and watches: an automated APT, if you will. That would be far more difficult to defend against than automated 'splash' tactics, and it could be executed or industrialised on a very large scale.

How an AI cybercriminal could work

The good news is that any such automated APTs will arrive slowly, because AI is complicated. An AI algorithm isn't usually designed to be user-friendly: instead of pointing and clicking, you have to customise the hacking tool, and that customisation requires AI expertise. Those skills are in short supply in the industry, let alone the hackersphere, so we're likely to see this achieved first by nation-states rather than hobbyists – which means the first likely targets are those of national interest.

Let's look at some public examples. A while ago there were hacks on Anthem, Premera and CareFirst, major healthcare providers in the US, all of which covered large numbers of federal employees. Around the same time, Lockheed Martin and the Office of Personnel Management, which handles federal security-clearance background checks, were hacked, losing fingerprint and personal data for millions of people.

One theory about these hacks was that a nation state stole the data. As it never turned up for sale on the dark web, where did it end up? If a nation state does now possess it, they have terabytes of healthcare, HR, federal background-check and contractor data at their command. The sheer volume of such data would make relating one set to another very difficult and time-consuming if done by hand.

But an AI program could find clusters and patterns in the data set and use them to work out who could be a good target for a future attack. You could connect their families, their health problems, their usernames, their federal projects – there are lots of ways to use that information. Nation states steal data for a reason – they want to achieve something. So as AI matures, we could see far more highly-targeted attacks taking place.
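A toy example shows why joined-up data is so much more dangerous than any single breach. Everything below – the data sets, fields and values – is invented for illustration:

    # Invented example: two breached data sets that look unrelated become a
    # targeting dossier once joined on a shared identifier.
    import pandas as pd

    hr_records = pd.DataFrame({
        "ssn": ["111-22-3333", "444-55-6666"],
        "name": ["A. Analyst", "B. Contractor"],
        "project": ["satellite comms", "clearance vetting"],
    })
    health_records = pd.DataFrame({
        "ssn": ["111-22-3333", "444-55-6666"],
        "insurer_notes": ["costly ongoing treatment", "none"],
    })

    # One join answers the attacker's question: who works on something
    # sensitive, and what pressure could be applied to them?
    dossiers = hr_records.merge(health_records, on="ssn")
    print(dossiers[["name", "project", "insurer_notes"]])

Done by hand across terabytes, that correlation is impractical; done by a model that finds the linking keys itself, it scales.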

AI phishing

While it’s likely that AI-powered hacking will begin its life as the preserve of nation-states, it’s only a matter of time before this sort of attack becomes commonplace in the regular market. Let’s consider phishing as a case study for how this might look.

At the moment, it's often easy to tell that an email is a phishing attempt from the way it's written: misspelled words and odd grammar give it away. AI could eliminate that tell. Let's say that AI can write better than 60% of people, using colloquialisms and idiomatic phrasing – it would be pretty hard to spot. And even if AI is only as good as humans, it can be much faster and therefore more effective.

Phishing is one of the most lucrative forms of hacking – if AI can raise the rate of success from 12% to 15%, say, with half the human effort, then it could be well worth it for hackers. We haven't seen any truly malicious, AI-crafted spear-phishing attempts yet, but it's likely to be a very effective first step for AI cybercrime.
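Worked through, the arithmetic of that claim looks like this (the 12% and 15% figures are the illustrative numbers above):

    # The figures above, worked through: success per unit of attacker effort.
    human_rate, human_effort = 0.12, 1.0   # 12% success for one unit of effort
    ai_rate, ai_effort = 0.15, 0.5         # 15% success for half the effort

    human_yield = human_rate / human_effort   # 0.12 successes per unit
    ai_yield = ai_rate / ai_effort            # 0.30 successes per unit

    print(f"AI-assisted phishing yields {ai_yield / human_yield:.1f}x per unit of effort")  # 2.5x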

Crafting a defence

An effective defence comes down to having the right people and the right tools in place. Organisations have been working for several years now to solve the information-overload problem in cybersecurity, yet most security teams still have difficulty separating genuine data-theft incidents from the chaff.

Organisations have realised that recording user and application access to data is a cybersecurity responsibility. Now security teams are feeling the pain of trying to make sense of that vast data set. The most successful teams are leveraging AI or machine learning to perform this analysis, meeting both the organisation's needs and any regulatory requirements.

The recommendation for companies is always ‘don’t try to boil the ocean’. You’re not going to prevent every attack. Your focus should be on discovering where your critical resources are and what you can do to mitigate the risk on those resources specifically. If data is your most critical resource, what do you know about it?

Perimeter defences aren’t the best investment if you don’t also have visibility into your databases and files or appropriate security for your key apps. Where is the most risk and the most value for the hacker? It’s not at the perimeter – it’s in your databases. Are you able to tell who’s accessed your data, how they accessed it, what they accessed and whether they should have taken it?
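As a minimal sketch of answering those questions, assuming you already collect a database audit log – the file name, columns and threshold below are placeholders, not any specific product's schema:

    # Minimal sketch: totting up rows read per user from an audit log and
    # flagging outliers. File name, columns and threshold are all placeholders.
    import csv
    from collections import defaultdict

    rows_read = defaultdict(int)
    with open("db_audit_log.csv") as f:   # assumed columns: user, table, rows_returned
        for record in csv.DictReader(f):
            rows_read[record["user"]] += int(record["rows_returned"])

    # Use the median reader as a crude baseline for 'normal' volume.
    baseline = sorted(rows_read.values())[len(rows_read) // 2]
    for user, total in rows_read.items():
        if total > 10 * baseline:   # the 10x threshold is an assumption, tune to taste
            print(f"review access by {user}: {total:,} rows read")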

If you’re not watching your data, you’re not protecting the only thing that will realistically be stolen from your data centres.

GI Joe used to say that 'knowing is half the battle'. These days it's the whole battle: telling a regulator post-breach that you don't know what was taken can cost hundreds of millions. AI cybercrime is coming – make sure you can protect your valuable data.
