Interviews

Getting real about AI and cyber

Consider a 50,000-strong organisation. It will typically have just a handful of people tasked with cyber security. These professionals are responsible for threat hunting, investigating security issues, managing litigation requests, and more. Finding skilled talent to fill these critical roles only gets harder as demand for them grows. The same organisation may be dealing with thousands, even tens of thousands, of breach attempts every day. The sheer volume of security issues and alerts is overwhelming. In the face of these challenges, we need to think differently about security, writes Anthony Di Bello, senior director of market development for cybersecurity at the AI and analytics product company OpenText.

Enter AI – most commonly short for Artificial Intelligence. AI has been the technology buzzword of 2018, and the purported saviour (or bane) of not just security, but digital business in general. But is real artificial intelligence – truly autonomous intelligence that can identify threats, make decisions, and remediate issues, all without human interaction – even feasible? The short answer is: not yet.

So, what to expect for AI and cybersecurity in 2019? In the short to mid-term, it is more helpful to think of AI as Augmented Intelligence. In 2019, the enterprise will take concrete steps to apply augmented intelligence through machine learning (ML) to what are currently the biggest challenges in cybersecurity.

The “Iron Man, not Terminator” approach to AI has been referenced before in defence circles. Enterprise security will take a similar approach, applying AI to augment, not fully automate, security. Over the next year, security tools will amplify analysts' abilities, starting with narrower, achievable use cases for threat detection, alert validation, triage and response. This will be driven in part by privacy regulations (such as GDPR) forcing vendors to abandon the black-box approach to AI. Vendors must be more open about what data is captured and analysed by security and AI technology. That pressure pushes them toward more specific, achievable use cases, which is ultimately better for enterprise customers.

To make the scenario above more concrete, take the example of a banking customer. The bank has a threat-hunting team of two or three analysts sitting behind security dashboards. They are aggregating massive amounts of data and looking for that one data point on a chart that is an outlier. But they have to do this manually, clicking through tables and charts, making decisions based on what they know and hoping they are right. Now imagine AI that can ingest massive amounts of data, identify an anomaly, validate the threat, and then prompt the analyst to take action – or, in some cases, automatically initiate a response before damage is done. If the new anomaly does not represent a threat, machine learning takes note for future reference. If the anomaly turns out to be a true threat, machine learning again updates, ensuring that if the anomaly is seen again the system can initiate automated response protocols and appropriate escalation.
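The detect-validate-learn loop described above can be sketched in a few lines. This is a purely illustrative toy, not OpenText's product: the class name, the z-score outlier heuristic, and the sets of analyst-confirmed labels are all assumptions made for the example.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyTriage:
    """Toy sketch: flag outliers in a metric stream, then learn from
    analyst feedback whether a flagged pattern is a real threat."""

    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)   # recent baseline values
        self.threshold = threshold            # z-score cut-off
        self.known_benign = set()             # patterns analysts cleared
        self.known_threats = set()            # patterns confirmed hostile

    def observe(self, label, value):
        """Return 'auto-respond', 'alert', or None for one data point."""
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                if label in self.known_benign:
                    return None  # analyst already cleared this pattern
                # Previously confirmed threats trigger automated response;
                # unseen anomalies are escalated to a human.
                return "auto-respond" if label in self.known_threats else "alert"
        self.history.append(value)  # only normal points extend the baseline
        return None

    def feedback(self, label, is_threat):
        """Analyst verdict updates the model for future occurrences."""
        (self.known_threats if is_threat else self.known_benign).add(label)
```

The first time a spike is seen it is escalated to the analyst; once the analyst's verdict is fed back, an identical spike either triggers an automated response or is silently ignored.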

Automating processes with ML and AI can allow security teams to focus on priority threats and deep investigations without being bogged down by the sheer volume of incoming information. Alert volume and alert fatigue will continue to grow as challenges for enterprise cybersecurity teams. Automation through ML and AI will be critical to reducing fatigue, enabling faster detection, and delivering better security outcomes.

What is more, for security teams that are strapped for talent, ML and AI technologies can act as a force-multiplier. With technology to augment human capability, a large enterprise can keep pace without needing an army of highly-trained security specialists. One of the most pressing and achievable uses of such technology is leveraging AI/ML to help sift through the countless false positives generated by prevention and detection technologies.
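One simple way to sift false positives, sketched below under stated assumptions: learn a per-signature false-positive rate from past analyst dispositions and suppress signatures that are almost always noise. The class name, thresholds, and signature labels are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

class AlertFilter:
    """Toy sketch: learn per-signature false-positive rates from analyst
    dispositions and suppress signatures that are almost always noise."""

    def __init__(self, min_samples=5, fp_cutoff=0.9):
        self.counts = defaultdict(lambda: [0, 0])  # sig -> [false_pos, total]
        self.min_samples = min_samples              # history needed to judge
        self.fp_cutoff = fp_cutoff                  # suppress above this rate

    def record(self, signature, was_false_positive):
        """Log an analyst's verdict on one alert."""
        fp, total = self.counts[signature]
        self.counts[signature] = [fp + was_false_positive, total + 1]

    def should_escalate(self, signature):
        """With too little history, humans see everything; otherwise
        escalate only signatures that are not overwhelmingly noise."""
        fp, total = self.counts[signature]
        if total < self.min_samples:
            return True
        return fp / total < self.fp_cutoff
```

The effect is the force-multiplier described above: analysts stop re-triaging the same noisy signature, while anything new or genuinely mixed still reaches a human.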

Automation is already an important part of enterprise security. With machine learning, these systems will evolve from linear automation to more of a “choose-your-own-adventure” style. Augmented intelligence tools will more effectively present or initiate options for security teams based on attack vector, impact, the stage of attack detected, and other factors, shortening response and remediation times.
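The "choose-your-own-adventure" style can be pictured as a branching playbook table rather than one fixed script. The vectors, stages, and response steps below are invented examples, not a real product's playbooks.

```python
# Hypothetical playbook table: the tool proposes next steps based on
# attack vector and detected stage, instead of one fixed linear script.
PLAYBOOKS = {
    ("phishing", "delivery"):    ["quarantine email", "warn recipients"],
    ("phishing", "execution"):   ["isolate host", "reset credentials"],
    ("ransomware", "execution"): ["isolate host", "snapshot disks",
                                  "notify incident commander"],
}

def propose_actions(vector, stage):
    """Return suggested response steps, falling back to manual triage
    for any (vector, stage) combination the table does not cover."""
    return PLAYBOOKS.get((vector, stage), ["escalate to analyst"])
```

In practice the branching logic would be learned or tuned from past incidents; the point is that the recommended actions differ by attack vector and stage instead of following one linear sequence.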

AI will, without a doubt, remain a hot buzzword at security conferences and vendor booths in 2019. However, 2019 will also see real growth in the practical application of AI, and especially ML, to address specific near-term challenges like the skills gap and alert fatigue. As the technologies mature and proliferate, security teams (and unfortunately cyber criminals) will look toward the more forward-looking applications of AI.

