
The rise of Application Performance Monitoring

Published: 07 November 2017

Over the years, applications have become increasingly complex to manage. With the adoption of Service-Oriented Architecture, the Cloud, Hybrid IT and Big Data, applications are now more distributed than ever, with hundreds or even thousands of components managed by third-party services, not to mention the complications around Shadow IT.

The Application Performance Management (APM) industry is advancing rapidly to support these needs and is expected to reach $5.6 billion by 2020, according to the Enterprise Software Markets Worldwide 2014–2021 forecast.

Some history… APM tools first started appearing in the late 90s, with solutions such as Precise, Wily and Mercury Interactive. Towards the mid 2000s the APM market consolidated, as these tools were acquired by larger vendors and integrated into broader suites, often unsuccessfully. Around 2009, what we call "modern" APM tools emerged, aimed at the new breed of distributed systems running in virtualized, cloud environments. Major APM vendors today include New Relic, Dynatrace and AppDynamics (now owned by Cisco).

Alongside Application Performance Management, we often hear the term Application Performance Monitoring. Whilst the Management side of the industry holds a significant place in the market, Monitoring is sometimes dismissed – an easy mistake, since there's only a thin line between the two. Monitoring refers to the collection of performance data, whilst Management is a broader discipline that also covers analysing that data and acting on it.

Managed Service Providers (MSPs) are amongst the key players in the market requiring Application Performance Monitoring. To allow for effective customer engagement and quick issue resolution, there must be a tool in place that lets the MSP monitor their corporate customers' requirements. A lack of performance monitoring can lead to costly downtime affecting budgets, operations and infrastructure, as well as preventing employees from carrying out their jobs.

Service Providers and their corporate customers need to speak the same language, using a shared tool with simple graphical information that can be understood by business users and technical teams alike, allowing both to see clearly the quality and performance of key applications.

Decision makers don't have time to drill down into the nitty-gritty details of applications. An interface which pushes this information to the surface allows users to see an application crisis building in real time. When problems arise, a customer can pick up the phone to their service provider and have the right discussions at the right time, leading to proactive issue resolution.

It can't be stressed enough: not monitoring the performance of your critical applications can be a costly mistake. If sluggish or failed applications can be prevented by good visibility, then it's a win-win for both service provider and customer, bringing them together in a relationship built on transparency.