How can BizTalk anomaly detection be used in troubleshooting?

Posted by AIMS Innovation

It is no surprise to anyone familiar with BizTalk that it is a challenge to operate and maintain cost-efficiently.  Why is that?

It’s because Microsoft does not provide effective tools for understanding what’s going on: navigating artifacts and business processes, and spotting performance problems and bottlenecks in BizTalk.  The tools that are available are antiquated, relying on disconnected utilities and manual processes, and that approach is bound to fail.

Problems with BizTalk are typically not issues with the Microsoft BizTalk Server code itself; more often they are the result of your own BizTalk code, configuration changes, upgrades, or issues with data input and external systems.

Troubleshooting relies on insight that can identify, and ideally pinpoint, the source of the problem.  Say you have throttling or delays for a specific message pattern (supporting a business process), and it seems to be caused by CPU and memory consumption going through the roof.  That alone tells you nothing about the cause.  With BizTalk you probably have anywhere from a handful to hundreds or thousands of host instances, message patterns, ports and orchestrations.  How do you identify the potential cause among all of these components?  You also need to take your change log into account.  This is hard, and it is why the existing tools fall short: they are antiquated, disconnected and rely on human effort.  Humans simply cannot efficiently consume, analyze and draw conclusions from the volume of data that a BizTalk environment produces.
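
To give a sense of what collecting that data even looks like, here is a minimal Python sketch (not part of any AIMS tooling) that samples a few Windows performance counters with the standard typeperf utility.  The BizTalk-specific counter path is a placeholder; substitute the counters your own hosts actually expose.

```python
"""Sketch: sample Windows performance counters with the typeperf CLI."""
import csv
import subprocess

COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    # r"\BizTalk:Messaging(MyHost)\Documents processed/Sec",  # placeholder counter path
]

def sample_counters(samples: int = 5) -> list[dict]:
    """Collect `samples` readings of each counter and return one dict per reading."""
    out = subprocess.run(
        ["typeperf", *COUNTERS, "-sc", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    # typeperf prints CSV rows (header + data) that all start with a quote character
    rows = list(csv.reader(line for line in out.splitlines() if line.startswith('"')))
    header, data = rows[0], rows[1:]
    return [
        {name: float(value) for name, value in zip(header[1:], row[1:]) if value.strip()}
        for row in data
    ]

if __name__ == "__main__":
    for reading in sample_counters():
        print(reading)
```

Even this toy collector makes the scale problem obvious: multiply a few counters by every host instance, port and orchestration, sampled continuously, and no human can keep up.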

So, what is the solution?  The solution is machine learning: technology that learns the normal behavior of your BizTalk environment and looks for anomalies, all the time, across all your components and performance parameters (thousands of performance counters), identifies where anomalies occur first, and cross-correlates them with other anomalies to separate cause from effect.
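
To make the idea concrete, here is a minimal Python sketch of the "learn normal, flag deviations" principle, assuming a simple rolling-baseline z-score per counter.  This is an illustration, not AIMS's actual algorithm.

```python
"""Sketch: flag anomalies per counter by learning a rolling baseline."""
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    def __init__(self, window: int = 288, threshold: float = 4.0):
        self.history = deque(maxlen=window)   # e.g. 24h of 5-minute samples
        self.threshold = threshold            # how many std devs counts as abnormal

    def observe(self, value: float) -> bool:
        """Return True if `value` deviates sharply from the learned baseline."""
        anomalous = False
        if len(self.history) >= 30:           # wait until the baseline is warm
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)            # keep learning, even from anomalies
        return anomalous

# One detector per counter, across every host instance, port and orchestration.
detectors: dict[str, BaselineDetector] = {}

def check(counter_name: str, value: float) -> bool:
    return detectors.setdefault(counter_name, BaselineDetector()).observe(value)
```

The point is that the baseline is learned per counter, so "normal" for a busy receive port and "normal" for a quiet batch host are judged on their own terms.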

This insight may reveal that the root cause of the critical performance issue experienced by one message pattern (business process) actually originates from a non-critical batch process that is unexpectedly peaking, consuming the shared infrastructure resources and impacting the critical message pattern. Shut down that batch process!
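
Purely as an illustration of that cause-and-effect reasoning (not the vendor's actual analysis), anomalies per component can be grouped into an incident and ordered by when they first appeared; the component that went anomalous earliest is the strongest root-cause candidate.

```python
"""Sketch: correlate anomaly timelines to suggest a likely root cause."""
from datetime import timedelta

def correlated_components(anomalies: dict, window: timedelta = timedelta(minutes=10)) -> list:
    """anomalies: {component: [anomaly timestamps]} -> components in the same
    incident, ordered by when their first anomaly appeared (earliest first)."""
    first_seen = {c: min(ts) for c, ts in anomalies.items() if ts}
    if not first_seen:
        return []
    incident_start = min(first_seen.values())
    involved = {c: t for c, t in first_seen.items() if t - incident_start <= window}
    return sorted(involved, key=involved.get)

# In the example above, the peaking batch process would sort to the front of
# the list, ahead of the critical message pattern it is starving of resources.
```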

That’s the only way to troubleshoot efficiently.  Use anomaly detection as an early warning system and you may be able to prevent problems before you have to firefight.


AIMS installs in 5 minutes, has no performance impact, and gives you out-of-the-box self-learning, deep insight and root cause analysis based on anomaly detection.

Tags: TECH, POPULAR
