Most enterprises are now using the cloud, taking advantage of the toolsets, flexibility and resilience that the cloud has to offer to gain business advantage, writes Darren Anstee, CTO, SBO International, at the cyber product company NETSCOUT.
In fact, many enterprises are using multiple cloud providers, for exactly these same reasons. There used to be a lot of discussion on how the cloud could reduce costs, but now the focus is more on enabling business agility and how the cloud, or clouds, can allow organisations to evolve more quickly and efficiently as they optimise the IT that drives their businesses forward.
Maximising the business benefit from using the cloud has however required businesses to modify the architectures of their applications – the days of simply ‘migrating’ applications into a cloud compute environment are largely gone. Our applications now take advantage of cloud toolsets and elasticity, and we have all seen multi-tier, microservice ‘cloud-native’ architectures becoming increasingly common. These architectures break up the monolithic applications our IT operations have been used to managing and supporting, making it more complicated for them to get a picture of how well things are really working across disparate environments.
This added complexity has come at the same time as increased focus on the value that IT can bring, as many businesses move forward on their journey through digital transformation. Driving business value from investments in business technology relies on new applications and services operating consistently, as designed. Being able to ensure and report on the performance and reliability of mission critical applications is crucial if a business wants to manage its return on investment.
Telemetry is not in short supply: pretty much every application framework, service, toolset and cloud provider can deliver huge amounts of data on what is happening at every step – the challenge is interpreting this information to build a cohesive view. The analogy I like to use is a panoramic picture made up of component pictures: if 20 different people, with 20 different cameras, each take a picture of a major landmark – say the Tower of London – then even if they all stand in the same place, the result, when you stitch it all together, is sub-optimal. Yes, it will look like the Tower of London, but there will be gaps, inconsistencies, changes in resolution, and so on. This is the challenge many IT teams face in managing their evolving infrastructure and applications across multiple cloud environments.
One solution to this is to use complex analytics to interpret the different base telemetry sets. The problem with this approach is that a lot of assumptions, generalisations and probabilities tend to be used and multiplied across the data pipeline during the normalisation process. We end up with a consistent picture of what ‘may’ be going on – not necessarily what ‘is’ going on. Don’t get me wrong, these solutions can be good at what they do – they will get it right most of the time – but not always.
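The compounding effect described here can be made concrete with a little arithmetic: if each stage of a normalisation pipeline is individually quite accurate, the end-to-end picture is still noticeably less reliable. A minimal sketch, where the stage count and per-stage accuracies are illustrative assumptions rather than figures from any real pipeline:

```python
# Illustrative only: how per-stage assumptions compound across a
# telemetry-normalisation pipeline. The accuracies below are invented.
stage_accuracies = [0.95, 0.95, 0.95, 0.95, 0.95]

end_to_end = 1.0
for acc in stage_accuracies:
    end_to_end *= acc  # each stage's uncertainty multiplies through

print(f"End-to-end accuracy: {end_to_end:.2%}")  # ~77.38%
```

Five stages that are each right 95% of the time yield a combined view that is right only about three times in four – good most of the time, as the paragraph says, but not always.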
One way for IT operations to eliminate this problem is to focus their efforts on using a smaller number of consistent data sets for monitoring. This reduces the need for complex data-processing pipelines, and in general can provide a more real-time view of actual application health and performance.
Traditionally IT operations teams have monitored network activity within their data-centres as a primary data-source, to get an understanding of application activity, performance and reliability from the users’ perspective. Network traffic provides a consistent view of activity in any domain, and the good news is that technologies and services exist to obtain and process network traffic from any environment.
However, one of the biggest knock-on effects of cloud is the change in the shape of the communications within and between our applications and users. We used to be focused on monitoring traffic into and out of our applications, to and from our users – what is known as north-south – but now, to really understand what is going on, and to trouble-shoot problems, we need to see what is happening between the micro-services and tiers within our application environments – what is known as east-west.
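The north-south/east-west distinction can be sketched with a simple rule: a flow is east-west if both endpoints sit inside the application environment, and north-south otherwise. The address ranges below are hypothetical stand-ins for an environment's internal networks:

```python
import ipaddress

# Hypothetical internal ranges for an application environment.
INTERNAL_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(addr: str) -> bool:
    """True if the address falls inside one of the internal ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def flow_direction(src: str, dst: str) -> str:
    """East-west if both endpoints are internal, north-south otherwise."""
    if is_internal(src) and is_internal(dst):
        return "east-west"
    return "north-south"

print(flow_direction("10.0.1.5", "10.0.2.9"))     # east-west (service to service)
print(flow_direction("203.0.113.7", "10.0.1.5"))  # north-south (user to application)
```

The point of the paragraph is that the second kind of flow used to be the whole monitoring picture; in a microservice architecture the first kind is where most of the troubleshooting value now lies.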
As mentioned above, monitoring all of this traffic is achievable, but the trick is to do it in a manageable, scalable way – converting network packets into consistent meta-data and KPIs so that the relevant information can be exported more easily and economically between technology domains, and used to drive multiple monitoring, trouble-shooting, security and business reporting use-cases.
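The packet-to-KPI reduction step can be pictured as a simple aggregation: raw per-packet records collapse into compact per-service summaries that are cheap to export and compare across domains. The field names and sample records below are invented for illustration:

```python
from collections import defaultdict

# Invented packet-level records; in practice these would come from a tap,
# SPAN port or virtual capture agent in each environment.
packets = [
    {"service": "checkout", "bytes": 1200, "latency_ms": 12.0},
    {"service": "checkout", "bytes": 800,  "latency_ms": 30.0},
    {"service": "catalog",  "bytes": 500,  "latency_ms": 5.0},
]

# Reduce raw packets to per-service KPIs: far smaller than the packets
# themselves, and consistent regardless of which cloud they came from.
kpis = defaultdict(lambda: {"packets": 0, "bytes": 0, "latency_sum": 0.0})
for p in packets:
    k = kpis[p["service"]]
    k["packets"] += 1
    k["bytes"] += p["bytes"]
    k["latency_sum"] += p["latency_ms"]

for service, k in kpis.items():
    avg_latency = k["latency_sum"] / k["packets"]
    print(f"{service}: {k['packets']} pkts, {k['bytes']} bytes, "
          f"avg latency {avg_latency:.1f} ms")
```

The same summarisation applied in every domain is what makes the resulting view consistent – the cameras in the earlier analogy all become the same camera.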
From the business perspective, a consistent view of performance and availability across technology domains is hugely useful: we can understand the performance of our applications in different (cloud) environments to ensure that customers and users receive the experience they need as workloads migrate and scale; we can understand and optimise the inter-dependencies of workloads, applications and services in different (cloud) environments; and we can provide proactive management of business critical applications to ensure that the technology investments made yield the best possible return.
A clear view across our evolving infrastructures, seamlessly spanning multiple technology domains, allows a business not only to maximise its return on technology investment, but also to achieve greater business agility – a key goal of every organisation.