Against hackers with AI

by Mark Rowe

The spiralling growth in hacking is costing business ‘big time’, writes Colin Tankard, Managing Director, Digital Pathways.

A recent report, the ninth Annual Cost of Cybercrime Study, found that the value at risk for an average Times 1000 company with 2019 revenues of £15 billion equals 2.8 per cent of revenue per year, and even at this level weaknesses remain. The cost of being hacked can run into many hundreds of thousands of pounds, coupled with loss of business, market share and reputational damage.

The situation is not helped by the lack of cyber security talent. It is well known that while the creativity and critical thinking of human cyber security expertise are key to gaining an advantage over attackers, finding those skills is hard. The New York Times has reported that the industry expects 3.5 million cyber security positions to be unfilled by the end of 2021. Even with the supposed hundreds of thousands of security experts available via bug bounty platforms across the world, the reality is that the pool of top talent is quite narrow.

An answer to this crisis is augmented intelligence (AI): augmenting human security talent with artificial intelligence. New data from Synack, a trusted crowd-sourced security testing platform, reveals that organisations are successfully scaling up their security in 2020 to prevent data breaches. Already they are covering more ground: by using AI alongside their cyber talent, they are 20 times more effective at covering their digital attack surface than with traditional testing methods.

The Synack report identified three key areas that are having a significant impact on threat detection:

– Coverage and Scale: While humans alone are around twice as effective at finding breach-worthy security vulnerabilities, combining the best security talent with AI-enabled technology is 20 times more effective at covering the digital attack surface.

– Efficiency: Humans can evaluate the breach-worthiness of a vulnerability at least 73 per cent faster by using AI-enabled technology.

– Effective Remediation: Companies can find and close critical vulnerabilities 40 per cent faster, shrinking the window of vulnerability, by combining AI technology with recognised security platforms.

AI technology is frequently combined with other data security products to provide a single pane of glass view of the security landscape. One such technology is the Security Orchestration, Automation and Response (SOAR) platform, which can identify threats in a network or system early and remediate them using predefined playbooks, which take action to stop the threat and feed back to security operations teams for further investigation.
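As a rough illustration of the playbook idea, the sketch below pairs a trigger condition with an ordered list of response actions. It is a minimal sketch only: the alert fields and the action names (block_source_ip, notify_soc) are assumptions for illustration, not any vendor's API.

```python
# Minimal sketch of a SOAR-style playbook engine (illustrative only;
# the alert format and action names are hypothetical, not a real product's API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Playbook:
    name: str
    matches: Callable[[dict], bool]            # does this alert trigger the playbook?
    actions: list = field(default_factory=list)

def block_source_ip(alert: dict) -> None:
    print(f"[action] blocking {alert['src_ip']} at the firewall")

def notify_soc(alert: dict) -> None:
    print(f"[action] raising ticket for SOC review: {alert['rule']}")

BRUTE_FORCE = Playbook(
    name="brute-force-login",
    matches=lambda a: a["rule"] == "auth.failed" and a["count"] >= 10,
    actions=[block_source_ip, notify_soc],
)

def dispatch(alert: dict, playbooks: list) -> None:
    """Run every matching playbook's actions, then hand off to analysts."""
    for pb in playbooks:
        if pb.matches(alert):
            for action in pb.actions:
                action(alert)

dispatch({"rule": "auth.failed", "count": 14, "src_ip": "203.0.113.7"}, [BRUTE_FORCE])
```

The point of the playbook structure is that the automated response is predefined and auditable: the machine acts at machine speed, but only along paths the security operations team has approved in advance.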

Using AI to monitor user behaviour is another popular area, as the end-user is generally the weak link in any security strategy. Given the speed at which a cyber attack can take place, AI can analyse user behaviour faster and at a larger scale than humans can, spotting anomalies in the pattern and building a risk score; when the score is exceeded, the system can take proactive steps to block the process. One algorithm used, the Self-Organising Map (SOM), models a normal user's working patterns to discern whether a particular activity on a network or computer system is normal or abnormal.
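To make the idea concrete, here is a minimal sketch of SOM-style anomaly scoring. The behavioural features, node count, training schedule and blocking threshold are all illustrative assumptions; a production system would use richer features and a proper two-dimensional grid neighbourhood.

```python
# Minimal Self-Organising Map sketch for behaviour anomaly scoring.
# Features, node count and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Each row: [logins per hour, MB downloaded, distinct hosts contacted],
# scaled to [0, 1]. Normal behaviour clusters in a small region.
normal = rng.normal(loc=[0.2, 0.3, 0.1], scale=0.05, size=(500, 3)).clip(0, 1)

n_nodes, dim = 25, 3                     # a real SOM arranges nodes on a 2-D grid
weights = rng.random((n_nodes, dim))     # one weight vector per map node

def best_matching_unit(x: np.ndarray) -> int:
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Train: pull the winning node (and, crudely, its neighbours) towards each sample.
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                    # decaying learning rate
    for x in normal:
        bmu = best_matching_unit(x)
        dists = np.abs(np.arange(n_nodes) - bmu)   # simplified 1-D neighbourhood
        influence = np.exp(-dists / 2.0)
        weights += lr * influence[:, None] * (x - weights)

def risk_score(x: np.ndarray) -> float:
    """Distance to the nearest learned 'normal' pattern (quantisation error)."""
    return float(np.min(np.linalg.norm(weights - x, axis=1)))

THRESHOLD = 0.3                                    # assumed; tuned on historical data
for activity in ([0.22, 0.28, 0.12], [0.9, 0.95, 0.8]):   # normal vs. anomalous
    score = risk_score(np.array(activity))
    print(activity, round(score, 3), "BLOCK" if score > THRESHOLD else "allow")
```

The risk score here is simply the quantisation error: the distance between the observed activity and the nearest learned pattern of normal behaviour. Activity far from everything the map has learned scores high and can be blocked.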

AI is not only for internal networks; it can also be used to detect online threats. For example, in fraud detection, SOM algorithms can learn from historical fraud patterns and detect them in future transactions, or in the build-up to one, such as an increase in failed logins or abandoned carts in shopping systems.

Another area is supply chain connections, where an AI platform can monitor the traffic flowing between suppliers and detect any unusual change to a feed, or a significant change in its content. Such systems would have detected the hack of the SWIFT banking network much sooner. That hack affected many banks, notably Bangladesh Bank, whose SWIFT connection was compromised by malware that triggered transfers far outside the norm.
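A full platform would model feed content as well as volume, but even a simple rolling baseline catches gross deviations of the kind seen in that incident. The sketch below is a toy example under assumed names and thresholds: it flags an hourly feed volume more than three standard deviations from its recent average.

```python
# Illustrative baseline monitor for a supplier data feed (names and
# thresholds are assumptions, not a description of any real platform).
from collections import deque
from statistics import mean, stdev

class FeedMonitor:
    def __init__(self, window: int = 24, z_limit: float = 3.0):
        self.history = deque(maxlen=window)   # recent hourly volumes
        self.z_limit = z_limit

    def observe(self, volume: float) -> bool:
        """Return True if this hour's volume is anomalous vs. the rolling baseline."""
        anomalous = False
        if len(self.history) >= 3:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(volume - mu) / sigma > self.z_limit:
                anomalous = True
        self.history.append(volume)
        return anomalous

monitor = FeedMonitor()
# Steady traffic, then a burst of the sort a rogue transfer run might produce.
for hour, vol in enumerate([100, 104, 98, 101, 97, 102, 480]):
    if monitor.observe(vol):
        print(f"hour {hour}: unusual feed volume {vol}, alert SOC")
```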

It is not only the ‘good guys’ who are using AI to defend their networks; many hackers are using AI to beat AI defence systems too. However, unlike traditional cyber security approaches, which often remain static once implemented, AI is highly scalable and adaptive. This matters because the ability to scale to hundreds, thousands, even millions of training samples means that, as the training dataset grows, the AI solution can continuously improve its ability to detect anomalies. This is a key benefit: in general, the ‘bad guys’ need to find only one hole, whereas the network manager needs to plug hundreds. Using AI spins this around and, for the first time, networks can defend all points of access and adapt as the hackers' approach changes. It is fluid security.

There is still fear around fully implementing AI or SOAR technology, as it could make the wrong decision and stop a business in its tracks. According to research by WhiteHat Security, 60 per cent of security professionals are still more confident in cyber threat findings verified by humans than in those generated by AI.

The report highlighted the top four areas where human intelligence trumps AI in the operational security process: intuition, creativity, human experience and frame of reference. This ‘real life’ experience is something AI has yet to demonstrate, let alone master.

It has been reported that Elon Musk, speaking with Demis Hassabis, a leading creator of AI, said that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonisation. Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonise Mars, so that we would have a bolthole if AI went rogue and turned on humanity. Amused, Hassabis said that AI would simply follow humans to Mars!

AI is with us and will remain, but will it overcome the challenge of solving problems that are difficult for a computer yet relatively simple for humans? Driving a car is one example: most people can learn to drive, but for a machine it is a very hard problem. The potential benefits of AI are far-reaching and would make our cyber lives better, bringing overall efficiency to our businesses.

The question is at what cost, and how many issues we will face before we can trust the code that runs the AI. Will we ever be able to trust it? Will we have to ensure it cannot go too far in its learning and take control of something to attack us? Only time will tell.
