Blog | AIMS Innovation

Why traditional application performance monitoring doesn't deliver the goods like it used to

Written by Ivar Sagemo | Nov 14, 2017 8:15:32 AM


No matter what industry you operate in, you’re faced with new challenges brought on by digitalization. IT systems are increasingly complex and vulnerable – processing massive amounts of data and events in real-time to support critical business processes. And delivering the kind of experience that today’s customers demand means your systems need to be online and performing 24/7.

It’s no wonder IT organizations are spending more time than ever on monitoring (up to 25%), while downtime costs range from $1–60M USD annually. Despite IT teams dedicating more resources to monitoring, the cost of downtime continues to increase – largely due to a broken process, reliance on manual tasks, and disconnected, antiquated tools.

Downtime costs stem primarily from lost productivity (78%), lost revenue (17%) and troubleshooting (5%). Complexity and its associated risk will only increase as organizations continue to automate existing manual processes, digitize existing business processes and launch new digital business processes.


Traditional APM tools aren’t up to the task

Traditional application performance monitoring tools are flawed, as they rely on manually configuring alerts and on documentation that’s often outdated or missing-in-action. The process usually looks something like this:

  1. Technical teams decide which parameters in the infrastructure or applications should be monitored
  2. Technical teams set manual thresholds (upper and/or lower) for the selected parameters that trigger alerts if breached
  3. Technical teams rely on manually created documentation, which is usually poor quality (because who really has time to create documentation?) or outdated (for the same reason).
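In practice, step 2 often amounts to a hand-maintained table of static limits. The sketch below illustrates the pattern; the metric names and values are invented for illustration and aren't from any real product configuration:

```python
# Illustrative static-threshold alerting, the manual approach described above.
# Metric names and limits are made-up examples.
STATIC_THRESHOLDS = {
    "server01.cpu_percent":  {"upper": 85},    # alert above 85% CPU
    "server01.free_disk_gb": {"lower": 20},    # alert below 20 GB free
    "biztalk.msgbox_depth":  {"upper": 5000},  # alert on message-queue build-up
}

def check(metric, value):
    """Return an alert string if `value` breaches the configured limits."""
    limits = STATIC_THRESHOLDS.get(metric, {})
    if "upper" in limits and value > limits["upper"]:
        return "ALERT: above upper threshold"
    if "lower" in limits and value < limits["lower"]:
        return "ALERT: below lower threshold"
    return "OK"

print(check("server01.cpu_percent", 92))  # ALERT: above upper threshold
```

Every entry in that table is something a person must choose, document and keep current – which is exactly where the process breaks down.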


Flawed assumptions

So, at the risk of being redundant, why is this flawed? It's flawed because the traditional tools assume that:

  1. The parameters defined are relevant 
  2. Staff continuously (manually) keep parameters up to date
  3. Thresholds that are set represent the cyclical nature of underlying business processes
  4. Technical staff are able to identify cause and effect (correlation) of alerts
  5. Documentation is complete, updated and available

Sound too good to be true? In practice, it often is.


It’s a numbers game

In June, CIO Review reported that an explosion of data and the need for real-time decision making is set to transform every aspect of the enterprise cloud. Artificial intelligence will be a must for IT organizations to cope with new levels of speed, data volumes and complexity, and to deliver business intelligence in real-time.

At AIMS, we see that the number of parameters easily reaches into the hundreds or thousands, and often more. Each of these parameters needs thresholds that represent hourly, daily, weekly, monthly and annual business cycles. If you’re running a normal BizTalk environment with...

  • 2 servers
  • 10 hosts
  • 1000 ports and orchestrations

… you’ll need approximately 1 million (yes, 1 million) thresholds in a week, assuming an hourly time resolution. If you need a per-minute resolution, multiply that by 60. It should be brutally obvious: manual configuration of thresholds is impossible. Whether you’re working with BizTalk or SQL, you’ll come up against the same challenges.
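The arithmetic behind that figure can be checked in a few lines. The number of metrics per monitored object is an assumption here (the article doesn't state one); around six per object lands on the "1 million" order of magnitude:

```python
# Back-of-envelope count of static thresholds needed per week.
# metrics_per_object is an illustrative assumption, not from the article.
servers = 2
hosts = 10
ports_and_orchestrations = 1000

objects = servers + hosts + ports_and_orchestrations  # 1,012 monitored objects
metrics_per_object = 6          # assumed: CPU, memory, throughput, latency, ...
hours_per_week = 24 * 7         # 168 hourly time slots

hourly_thresholds = objects * metrics_per_object * hours_per_week
print(hourly_thresholds)        # 1,020,096 — roughly the "1 million" above
print(hourly_thresholds * 60)   # per-minute resolution: ~61 million
```

The exact per-object metric count matters less than the shape of the product: objects × metrics × time slots grows multiplicatively, so any realistic environment overwhelms manual configuration.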


AI and machine learning to the rescue

In today’s world, where complex IT systems support business processes that are dynamic, cyclical and constantly evolving, it’s impossible to set up a proactive, innovative IT department using traditional tools relying on manual alert configuration and static thresholds. You absolutely need a modern, automated performance monitoring solution – one that continuously learns and re-learns your IT environment, adjusts thresholds dynamically and predicts anomalies before they occur.
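A minimal sketch of what "learning" a threshold means: instead of a hand-set limit, the detector derives one from recent history. Real products use far richer models (seasonality, forecasting); this rolling mean/standard-deviation baseline only illustrates the idea:

```python
from collections import deque
import math

class AdaptiveThreshold:
    """Toy rolling-baseline detector: flags values more than `k` standard
    deviations from the mean of the last `window` observations. The
    threshold is learned from data, not configured by hand."""

    def __init__(self, window=100, k=3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def observe(self, x):
        anomaly = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            anomaly = abs(x - mean) > self.k * math.sqrt(var) + 1e-9
        self.values.append(x)
        return anomaly

detector = AdaptiveThreshold()
for v in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 500]:
    flag = detector.observe(v)
print(flag)  # True — 500 is flagged against the learned baseline
```

Because the baseline is recomputed continuously, the "threshold" tracks cyclical and evolving behavior on its own, with no table of static limits to maintain.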


Check if your monitoring delivers the goods

Download our free Best Practices Checklist for BizTalk Monitoring now.