Crash Avoidance: Stay Ahead of API Problems with API Performance Metrics and Reports

By Neil Mansilla.

Tests and monitors provide critical assurance that services are healthy and running as expected, especially when notifications fire the moment a service goes down. However, by the time a notification arrives, we are already thrown into rescue mode. What if we could avoid the crash altogether by analyzing environmental data, spotting the early indicators, and making informed decisions that steer us away from the accident?

In the automobile industry, crash avoidance technologies monitor vehicle environmental and driver input data that suggest the potential for a collision. For example, if you’re veering into a lane without your turn signal, or you’re about to change lanes and another vehicle is in your blind spot, the crash avoidance system will deliver a series of audible and visual warnings to catch your attention, helping you to avoid a crash.

For APIs, Runscope provides several reports and testing features with Runscope Metrics, geared to help you detect issues that may be leading indicators of larger problems. Below we explore how to measure latency, the time it takes for an API to respond to a request, and how Runscope can help you stay out in front of API performance.
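
To make "latency" concrete before we dive into the reports, here is a minimal sketch of measuring a single request's response time in Python. The requests library and the example URL are assumptions for illustration only; Runscope records this timing for you on every test run.

```python
# Minimal sketch: measure how long one API request takes to respond.
# The URL below is a placeholder, not a real monitored endpoint.
import requests

def measure_latency_ms(url: str) -> float:
    """Return the server's response time in milliseconds."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # requests records the time between sending the request and
    # receiving the response headers in response.elapsed.
    return response.elapsed.total_seconds() * 1000

if __name__ == "__main__":
    print(f"Response time: {measure_latency_ms('https://api.example.com/health'):.0f}ms")
```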

Establish a Response Time Baseline 

The first step is to establish a baseline of latency metrics. The Runscope Radar Overview Report generates a comparison of any given result to these base metrics to give an indication of underlying problems. The request performance chart provides response times for different percentiles.

In the graph above, for the past 30 days, the median response time (50th percentile) is 505ms. The 95th percentile is 1149ms, meaning that 95% of the time the response time is below 1149ms. Use these response times to establish a maximum response time threshold on a test assertion: if a response takes longer than X milliseconds, the assertion fails and Runscope will send a notification. In this case, we set an assertion stating that the request must take less than 1400ms to respond.
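
The Radar Overview Report computes these percentiles for you. Purely to illustrate what the numbers mean, here is a small sketch that derives a median and 95th percentile from a sample of response times (the values are made up) and then picks a threshold comfortably above them.

```python
# Minimal sketch: derive baseline percentiles from sampled response times.
# The sample values (in milliseconds) are fabricated for illustration.
import statistics

response_times_ms = [420, 445, 455, 470, 480, 490, 498, 505, 505, 515,
                     520, 530, 530, 560, 610, 610, 700, 950, 1120, 1149]

# statistics.quantiles with n=100 returns 99 cut points,
# i.e. the 1st through 99th percentiles.
percentiles = statistics.quantiles(response_times_ms, n=100)
p50 = percentiles[49]   # median (50th percentile)
p95 = percentiles[94]   # 95th percentile

# Choose an assertion threshold comfortably above the 95th percentile,
# such as the 1400ms used in this post.
threshold_ms = 1400
print(f"p50={p50:.0f}ms  p95={p95:.0f}ms  threshold={threshold_ms}ms")
```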

Above, we added a response time assertion on this test request. If the assertion fails, so will the test, and a notification will be sent. 
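
The assertion itself is configured in the Runscope test editor. As a rough outside-of-Runscope equivalent, the check it performs amounts to something like the following sketch (the endpoint URL is a placeholder):

```python
# Rough equivalent of a "response time must be less than 1400ms" assertion,
# expressed as a plain Python check. Runscope evaluates its assertion on
# every scheduled run; this sketch only shows the underlying logic.
import requests

THRESHOLD_MS = 1400
URL = "https://api.example.com/orders"  # placeholder endpoint

response = requests.get(URL, timeout=10)
elapsed_ms = response.elapsed.total_seconds() * 1000
assert elapsed_ms < THRESHOLD_MS, (
    f"Response took {elapsed_ms:.0f}ms, over the {THRESHOLD_MS}ms threshold")
```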

Keep Regular Tabs on Your API

As much as we'd like to encourage you to check the overview page and performance metrics reports on Runscope, that practice doesn't always fit into a daily schedule. So instead, we created the Daily API Performance Report, sent straight to your inbox.

The daily email report shown above provides average response times for the day along with a comparison to the previous day’s numbers. If you notice performance anomalies, digging in is just one click away. Clicking on the test name will bring you directly to that test’s Radar Overview Report.
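
To illustrate the comparison the daily report makes, here is a small sketch that computes today's average response time against yesterday's and reports the change; the sample numbers are invented.

```python
# Minimal sketch: day-over-day comparison of average response times.
# The sample values are made up for illustration.
from statistics import mean

yesterday_ms = [480, 495, 505, 510, 470, 530]
today_ms = [520, 540, 560, 575, 590, 610]

avg_yesterday, avg_today = mean(yesterday_ms), mean(today_ms)
change_pct = (avg_today - avg_yesterday) / avg_yesterday * 100
print(f"Yesterday: {avg_yesterday:.0f}ms  Today: {avg_today:.0f}ms  "
      f"Change: {change_pct:+.1f}%")
# A jump like this is the kind of anomaly worth clicking through to the
# test's Radar Overview Report to investigate.
```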

Put Requests Under the Microscope

When you discover a critical error or suspect a performance hotspot, a useful tool for digging in is the Runscope Traffic Inspector. Radar test results are stored in the Traffic Inspector log, with complete request and response details.

Above, we can filter down to the hostname and method (path), as well as choose specific start and end times for the report, which helps us zoom in and potentially identify problem patterns indicated by escalating latency.
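
To sketch the same idea outside the Traffic Inspector, the snippet below filters a fabricated request log by hostname, method, and time window, then compares the earlier and later halves of the window to see whether latency is climbing.

```python
# Minimal sketch: filter a request log and check for escalating latency.
# The log entries, hostname, and times are fabricated for illustration.
from datetime import datetime
from statistics import mean

log = [  # (timestamp, host, method, path, response_time_ms)
    (datetime(2014, 9, 10, 9, 0),  "api.example.com", "GET", "/orders", 480),
    (datetime(2014, 9, 10, 10, 0), "api.example.com", "GET", "/orders", 530),
    (datetime(2014, 9, 10, 11, 0), "api.example.com", "GET", "/orders", 640),
    (datetime(2014, 9, 10, 12, 0), "api.example.com", "GET", "/orders", 790),
]

start, end = datetime(2014, 9, 10, 8, 0), datetime(2014, 9, 10, 14, 0)
window = [e for e in log
          if e[1] == "api.example.com" and e[2] == "GET" and start <= e[0] <= end]

times = [e[4] for e in window]
first, second = times[:len(times) // 2], times[len(times) // 2:]
if mean(second) > mean(first):
    print("Latency is trending upward within this window.")
```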

API Crash Avoidance in Practice

We can all agree that it’s better to find and fix small problems early before they grow into catastrophic failures. Establishing a baseline for response latency and setting up an assertion with a response time threshold is a useful first step. Once you've gotten into this practice, a useful next step is to regularly review reports to search for response times creeping up toward that threshold. With Runscope tools and reporting, you can stay ahead of the curve, building better software—not just fixing crashed services.
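
One way to put that regular review on autopilot is sketched below, under assumed numbers: warn when the recent 95th-percentile response time crosses a chosen fraction of the assertion threshold, well before the assertion itself starts to fail. The warning fraction is our own convention, not a Runscope setting.

```python
# Minimal sketch: flag response times creeping toward the assertion threshold.
# The warning fraction and sample data are assumptions, not Runscope settings.
import statistics

THRESHOLD_MS = 1400
WARNING_FRACTION = 0.8  # start paying attention at 80% of the threshold

recent_times_ms = [620, 700, 810, 905, 990, 1080, 1150, 1210]
p95 = statistics.quantiles(recent_times_ms, n=100)[94]

if p95 >= THRESHOLD_MS * WARNING_FRACTION:
    print(f"Warning: p95 response time ({p95:.0f}ms) is approaching "
          f"the {THRESHOLD_MS}ms assertion threshold")
```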

If you haven't already, sign up for Runscope for free to test out these metrics tools yourself.

Categories: howto, product, debugging

Everything is going to be 200 OK®