
IT security strategy

by Mark Rowe

It’s all about the numbers, says Brian Chappell, Director, Technical Services EMEA & APAC, BeyondTrust.

Around the time we entered the recent global recession, companies were becoming more risk-averse, cash was tight and investments could no longer have pay-back periods of years. There was also a growing awareness of the risks to corporate infrastructure from external influences. While business has always had an element of risk management in its operation, risk had suddenly become a first-class citizen. Rather than considering the impact of doing something, we started to consider the risk of not doing it and to apply that process to every decision, something from which IT security has benefitted.

There has been a lot of trial and error since then (for instance, the companies that learnt the hard way that prevention is better than cure, and that maintaining a secure environment is much easier than trying to clean up the mess afterwards), but security is now generally accepted as an integral part of risk management. What remains a problem is that it can seem difficult to demonstrate the benefit of a successful IT security strategy when the deliverable is nothing happening. So, if we assume that the way to do that is to demonstrate measurable success, how do we go about doing that?

Measuring IT security risk

I believe that we need a yardstick, something to move us away from qualitative assessment, such as “I’ve got good security”, and onto quantitative assessment, for instance “My systems were a 7 and now they are a 3” (assuming that closer to zero is best). Given the number of systems that even moderately sized IT environments have these days, this is clearly not something that we can, or would want to, do manually.

What we need is a clear picture of the things that are actually a risk to the security of our IT systems. We need this across all our systems, including desktops, servers, routers, switches, mobile devices (smartphones, tablets, laptops) and our virtual environments (private/public cloud). If a system is connected to our network physically, wirelessly or virtually, then we need to see where our vulnerabilities lie and, among these, which are the greatest risks. This is where an effective Vulnerability Management System (VMS) plays an increasing role.

Vulnerability management

VMSs are not new, and traditionally they have been applied to specific situations rather than being a core decision-making principle. Historically, most VMSs would happily scan the business environment and report back that an organisation had many hundreds of vulnerabilities, leaving the in-house security team to work out which mattered and try to fix them.

This was adequate while our servers had single, simple roles to play, such as a file server that stored files, a print server that only provided access to printers and a web server that presented static pages to the world. These systems, while complex at the time, are like abacuses compared to the modern systems we use. The humble desktop may well have over 100 concurrent threads of activity running at any point in time. When we get to some of the application servers in production, this could well run into the thousands, and so can the vulnerabilities. The body of knowledge needed to understand the implications of any one vulnerability is potentially massive.

Fortunately, VMS technology has also become more sophisticated and can provide the necessary information to evaluate and prioritise the vulnerabilities that need action today against those that may be largely theoretical (in spite of being considered high risk).

The Common Vulnerability Scoring System (CVSS)

Add to that measures such as the Common Vulnerability Scoring System (CVSS), published by the National Infrastructure Advisory Council (US Department of Homeland Security) in 2005 and now under the custodianship of the Forum of Incident Response and Security Teams (FIRST), and we have the ability to enumerate the risk level. Extend that with analysis from industry professionals and it is then possible to demonstrate improvement in security by showing a reduction in the overall CVSS scores, asset risk and vulnerability counts for our IT environments, making that leap from qualitative to quantitative.
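To make that leap concrete, here is a minimal sketch in Python of what the numbers can look like. It assumes nothing more than a scan export reduced to (asset, CVSS score) pairs; the field names and the choice of "count plus worst score" as the summary are illustrative assumptions, not a description of any particular VMS product.

from collections import defaultdict

def summarise_scan(findings):
    """Per-asset vulnerability count and worst CVSS score for one scan.

    findings: iterable of (asset_name, cvss_score) pairs.
    """
    per_asset = defaultdict(list)
    for asset, cvss in findings:
        per_asset[asset].append(cvss)
    return {asset: {"count": len(scores), "max_cvss": max(scores)}
            for asset, scores in per_asset.items()}

def compare_scans(before, after):
    """Print the movement in count and worst-case score per asset."""
    for asset in sorted(set(before) | set(after)):
        b = before.get(asset, {"count": 0, "max_cvss": 0.0})
        a = after.get(asset, {"count": 0, "max_cvss": 0.0})
        print(f"{asset}: vulnerabilities {b['count']} -> {a['count']}, "
              f"worst CVSS {b['max_cvss']:.1f} -> {a['max_cvss']:.1f}")

# Illustrative data only: the same environment scanned three months apart.
march = summarise_scan([("web01", 9.8), ("web01", 5.3), ("db01", 7.5)])
june = summarise_scan([("web01", 5.3), ("db01", 4.3)])
compare_scans(march, june)

The output is exactly the kind of statement argued for above: “web01 went from two vulnerabilities with a worst score of 9.8 to one with a worst score of 5.3”, rather than “web01 feels more secure”.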

All this allows IT security to contribute more fully to the overall risk management of the organisation by enabling decision-making around which risks can be accepted and which must be mitigated. It is also worth noting that there are striking similarities between the way a VMS works and typical risk management strategies, as the sketch following the list illustrates:

1. Identify vulnerabilities

2. Assess the asset risk

3. Identify mitigations

4. Prioritise activities

5. Review and repeat
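A hedged sketch of that loop, under the same assumptions as the earlier snippet, might weight each finding by an asset criticality that the business assigns itself. The weighting (CVSS multiplied by criticality) and the asset and vulnerability names are illustrative assumptions only.

def prioritise(findings, asset_criticality, top_n=5):
    """Steps 1 to 4 in miniature: rank identified vulnerabilities by
    combining their CVSS score with a locally assigned asset weight.

    findings: list of dicts with 'asset', 'vuln' and 'cvss' keys.
    asset_criticality: asset name -> weight between 0 and 1.
    """
    scored = [{**f, "priority": f["cvss"] * asset_criticality.get(f["asset"], 0.5)}
              for f in findings]
    # Highest combined score first: these are the mitigations to do today.
    return sorted(scored, key=lambda f: f["priority"], reverse=True)[:top_n]

# Illustrative assets and findings only.
criticality = {"payment-gateway": 1.0, "intranet-wiki": 0.3}
findings = [
    {"asset": "payment-gateway", "vuln": "weak TLS configuration", "cvss": 5.0},
    {"asset": "intranet-wiki", "vuln": "outdated CMS plugin", "cvss": 7.5},
]
for f in prioritise(findings, criticality):
    print(f["asset"], f["vuln"], round(f["priority"], 2))

Note that the medium-severity finding on the payment gateway outranks the technically higher-scored one on the wiki, which is exactly the “needs action today versus largely theoretical” distinction made earlier. Step 5 is simply running the scan and the calculation again after remediation.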

In today’s risk management environment, senior management teams are only too well aware of the risks inherent in IT systems and the potential impact they could have in these hard times. Being able to create a security condition baseline and measure progress toward a more secure environment is a very real way to reduce corporate risk.

About the author:

Brian Chappell is Director of Engineering for BeyondTrust in EMEA and India; visit www.beyondtrust.com. He has been in IT for over 26 years and has managed systems ranging from network services for thousands of users through to global B2B interfaces carrying transactions worth billions of dollars. He has held a number of senior roles in companies such as Amstrad plc, BBC Television and GlaxoSmithKline.
