Blog | AIMS Innovation

Dynamic or static thresholds – how do they compare?

Written by Adam Walhout | Jul 1, 2014 4:00:00 PM

If you’re manually setting thresholds to monitor your critical business processes, you’re essentially drawing a line in the sand that doesn’t always reflect the real needs of your business. In spite of your best efforts, the truth is that manually set thresholds are based on current knowledge only – these static parameters don’t (and often can’t) account for the future needs of the business.

An impossible task

In any reasonably sized organization, there are hundreds of thousands of parameters to be scanned across the enterprise. If you’re assigning members of your IT team to dig through your environment to determine static thresholds, you’re setting goals that are essentially impossible to reach. No matter how much experience an IT professional has, no one can realistically be expected to know every normal behavior occurring in your enterprise. And in the time it would take to determine the thousands of thresholds needed to set alerts, your organization will undoubtedly introduce new applications or technology, negating many of the thresholds already set.

Constantly evolving. Just like your business.

Conversely, dynamic thresholds determined by intelligent application monitoring software are constantly evolving. By default, the software begins monitoring everything in an environment as soon as it’s installed, immediately fetching the parameters that are relevant for an organization. The software looks across the enterprise — scanning everything from hardware and hosts to orchestrations and external systems — without stopping to decide exactly what’s important to monitor.

Since it’s based on machine learning, intelligent monitoring software also captures performance data, stores it in a database, and then uses that data to identify error patterns in real time. It records normal behavior patterns and adapts to changes in your organization’s business cycle. The software sets a deviation from the norm as the dynamic threshold, and that threshold is recalculated continuously – as often as every minute – for each specific process or parameter.
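To make the idea concrete, here is a minimal sketch of a deviation-from-the-norm threshold. This is an illustration only, not AIMS’s actual algorithm: it assumes a simple rolling baseline (mean ± a few standard deviations over a recent window), where a real product would use richer models.

```python
from collections import deque
import statistics

class DynamicThreshold:
    """Illustrative sketch (not the vendor's actual algorithm): learn a
    rolling baseline for one parameter and flag deviations from the norm."""

    def __init__(self, window=60, sigmas=3.0):
        self.history = deque(maxlen=window)  # recent samples, e.g. one per minute
        self.sigmas = sigmas                 # how far from "normal" counts as deviant

    def update(self, value):
        """Record a new sample; return True if it breaches the current threshold."""
        if len(self.history) >= 2:
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history)
            breach = abs(value - mean) > self.sigmas * stdev
        else:
            breach = False  # not enough data yet to define "normal"
        self.history.append(value)
        return breach
```

Because the window slides forward with every sample, the threshold itself drifts along with the business cycle instead of staying fixed where someone once drew the line.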

Intelligent monitoring software also bases its determination of deviant behavior on its cross-correlation abilities. In addition to looking across periods of time to establish abnormal behaviors, the software’s topological map provides easy-to-review insight into correlations across systems. For example, if an alert at the host level impacts something at the server level, those correlating pieces of information are displayed on the software’s dashboard.
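As a rough sketch of that cross-correlation idea, a topological map can be treated as a dependency graph: when one component alerts, everything that depends on it (directly or transitively) is a candidate for correlated alerts on the dashboard. The component names and graph structure below are hypothetical, chosen only to mirror the host/server example above.

```python
# Hypothetical topology: each component maps to the components it depends on.
DEPENDS_ON = {
    "invoicing-orchestration": ["biztalk-server"],
    "reporting-service": ["biztalk-server"],
    "biztalk-server": ["host-a"],
}

def impacted_by(component):
    """Return every component that directly or transitively depends on the
    alerting component, i.e. the alerts worth correlating together."""
    impacted = set()
    frontier = [component]
    while frontier:
        current = frontier.pop()
        for system, deps in DEPENDS_ON.items():
            if current in deps and system not in impacted:
                impacted.add(system)
                frontier.append(system)
    return impacted
```

Here an alert on `host-a` would surface `biztalk-server` plus both processes that run on it, which is exactly the host-level-to-server-level correlation described above.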


Case: Dynamic thresholds dramatically reduce false alerts, free IT resources

A leading insurance brokerage firm found machine learning software to be the solution for monitoring its business-critical enterprise applications and invoicing process. The company needed help optimizing its IT resources to troubleshoot issues and gain business insight into the performance of its BizTalk environment. 

The company had tried several monitoring tools, but found them all to be too complex and lacking in relevant alerts, BizTalk insight and proper reporting of business issues.

By deploying an intelligent monitoring solution, the company obtained immediate feedback. The out-of-the-box solution included 360-degree monitoring of application performance in BizTalk, from high-level infrastructure down to individual ports and orchestrations. Using machine-learning algorithms, the solution automatically created dynamic thresholds and robust alerts, along with a report on the overall health of the IT environment.

Within days of implementing the intelligent monitoring software, the company significantly reduced the number of irrelevant alerts it had been receiving with its previous monitoring methods. The new software provides the insurance brokerage with live and deep insight into its BizTalk system, and its reporting function allows the company to detect errors and deviations that help speed up the resolution of problems.

In fact, the company reports troubleshooting that used to take days now takes only hours to resolve, and that it no longer requires a dedicated person to perform monitoring. This not only makes for a more efficient operation, but also frees up the organization’s IT talent to focus on more strategic matters.