
Crash Avoidance: Stay Ahead of API Problems with API Performance Metrics and Reports

By Neil Mansilla.

Tests and monitors provide critical information that services are healthy and running as expected, particularly when notifications fire because a service is down. However, by the time that notification is received, we are already thrown into rescue mode. What if we could have avoided the crash altogether by simply analyzing environmental data, spotting the early indicators and making informed decisions that steer us away from the accident?

In the automobile industry, crash avoidance technologies monitor vehicle environmental and driver input data that suggest the potential for a collision. For example, if you’re veering into a lane without your turn signal, or you’re about to change lanes and another vehicle is in your blind spot, the crash avoidance system will deliver a series of audible and visual warnings to catch your attention, helping you to avoid a crash.

For APIs, Runscope provides several reports and testing features with Runscope Metrics, geared to help you detect issues that may be leading indicators of larger problems. Below we explore how to measure latency, the time it takes for an API to respond to a request, and how Runscope can help you stay out in front of API performance.

Establish a Response Time Baseline 

The first step is to establish a baseline of latency metrics. The Runscope Radar Overview Report compares any given result to these baseline metrics to surface signs of underlying problems. The request performance chart provides response times for different percentiles.

In the request performance chart, for the past 30 days, the median response time (50th percentile) is 505ms. The 95th percentile is 1149ms, meaning that 95% of the time, the response time is below 1149ms. Use these response times to establish a maximum response time threshold on a test assertion—meaning that if a response takes longer than X milliseconds, Runscope will send a notification. In this case, we set an assertion stating that the request must take less than 1400ms to respond.
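
If you want to sanity-check a baseline of your own, the percentile math is easy to reproduce outside of Runscope. Here is a minimal Python sketch; the sample response times are made up for illustration, not real Runscope data:

```python
import statistics

# Illustrative response times in milliseconds; substitute values pulled
# from your own logs or test results.
response_times_ms = [420, 470, 480, 495, 505, 515, 530, 610, 1100, 1149]

p50 = statistics.median(response_times_ms)
p95 = statistics.quantiles(response_times_ms, n=100)[94]  # 95th percentile

# Set the assertion threshold comfortably above the 95th percentile,
# e.g. the 1400ms ceiling used in the example above.
threshold_ms = 1400

print(f"p50={p50:.0f}ms  p95={p95:.0f}ms  threshold={threshold_ms}ms")
```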

We added this response time assertion directly to the test request. If the assertion fails, so will the test, and a notification will be sent.
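
The assertion itself is configured in the Runscope test editor, but for intuition, here is a stand-alone sketch of the equivalent check written with Python's requests library. This is not Runscope's assertion syntax, and the URL is a placeholder:

```python
import requests

THRESHOLD_MS = 1400

# Placeholder endpoint; substitute the request you are monitoring.
resp = requests.get("https://api.example.com/health", timeout=5)
elapsed_ms = resp.elapsed.total_seconds() * 1000

assert resp.status_code == 200, f"unexpected status {resp.status_code}"
assert elapsed_ms < THRESHOLD_MS, (
    f"response took {elapsed_ms:.0f}ms (limit {THRESHOLD_MS}ms)"
)
```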

Keep Regular Tabs on Your API

As much as we can encourage you to check the overview page and performance metrics reports on Runscope, that practice doesn't always fit into a daily schedule. So instead, we created the Daily API Performance Report that's sent straight to your inbox.

The daily email report provides average response times for the day along with a comparison to the previous day’s numbers. If you notice performance anomalies, digging in is just one click away. Clicking on the test name will bring you directly to that test’s Radar Overview Report.
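
For the curious, the day-over-day comparison is simple arithmetic. An illustrative sketch (the numbers are made up, not real report data):

```python
# Made-up average response times for two consecutive days, in milliseconds.
yesterday_avg_ms = 505
today_avg_ms = 640

change_pct = (today_avg_ms - yesterday_avg_ms) / yesterday_avg_ms * 100
print(f"Average response time: {today_avg_ms}ms ({change_pct:+.1f}% vs. yesterday)")
# A jump like +26.7% is the kind of anomaly worth clicking through on.
```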

Put Requests Under the Microscope

When you discover a critical error or suspect a performance hotspot, a useful tool for digging in is Runscope Traffic Inspector. Radar test results are stored in the Traffic Inspector log, with complete request and response details.

In the Traffic Inspector, we can filter down to the hostname and method (path), as well as choose specific start and end times for the report, which helps us zoom in and spot problem patterns such as escalating latency.
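
If you also keep your own request logs, the same kind of slicing is easy to script. Below is a rough Python sketch; the field names and the escalation heuristic are assumptions for illustration, not the Traffic Inspector's actual export format:

```python
def filter_requests(log, host, path, start, end):
    """Keep only log entries matching a hostname, path and time window.
    Field names are illustrative, not a real export format."""
    return [
        entry for entry in log
        if entry["host"] == host
        and entry["path"] == path
        and start <= entry["timestamp"] <= end
    ]

def latency_is_escalating(entries, window=5, factor=1.5):
    """Crude heuristic: the average of the most recent `window` response
    times is noticeably higher than the average of the earlier ones."""
    times = [e["response_time_ms"]
             for e in sorted(entries, key=lambda e: e["timestamp"])]
    if len(times) <= window:
        return False
    recent = sum(times[-window:]) / window
    earlier = sum(times[:-window]) / len(times[:-window])
    return recent > earlier * factor
```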

API Crash Avoidance in Practice

We can all agree that it’s better to find and fix small problems early before they grow into catastrophic failures. Establishing a baseline for response latency and setting up an assertion with a response time threshold is a useful first step. Once you've gotten into this practice, a useful next step is to regularly review reports to search for response times creeping up toward that threshold. With Runscope tools and reporting, you can stay ahead of the curve, building better software—not just fixing crashed services.

If you haven't already, sign up for Runscope for free to test out these metrics tools yourself.

Categories: howto, product

A Peek Behind the Curtains: Six Lessons from Building a Microservices-led Company

By Darrel Miller.

Runscope Co-founder and CEO John Sheehan answering questions after his keynote presentation at APIdays Sydney earlier this year. Photo credit.

One of the most productive ways to become a better developer is to learn from the successes and failures of others. It is encouraging to see more companies opening their doors and sharing the stories of how they built their products and infrastructure. Recently at APIdays in Sydney, Australia, Runscope Co-founder and CEO John Sheehan shared insights into our experiences building a microservices-based product. We’ve boiled his talk down to six key lessons for anyone thinking about building microservices in their organization.

1. Invest in infrastructure

We believe microservices are really a combination of Service Oriented Architecture (SOA) and DevOps. So before building out microservices, consider this warning: if you’re not willing to invest in DevOps infrastructure, then microservices are likely to cause pain.

2. Start small

Runscope started with only two services, but now has more than 40, all cooperating to deliver the Runscope features our users love. Microservices is a long-term investment in flexibility that is best introduced early in a project.

3. The "micro" is up to you

One of the most common questions we hear is: how big is "micro"? Unfortunately, there is no precise answer. The most useful guidance, despite being vague, is simply to limit a microservice to performing a single job.

4. Divide and conquer

It is common sense that managing small systems is easier than managing large systems. Smaller systems that interact with each other using HTTP can be combined to solve large problems. By ensuring that these services are independently deployable and share no resources, they can be independently scalable and built to tolerate network failures.
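
As one concrete illustration of tolerating network failures, the hedged sketch below shows a service calling a sibling service over HTTP with a short timeout and a graceful fallback. The service name, URL and response shape are hypothetical, not part of Runscope's actual architecture:

```python
import requests

# Hypothetical internal service; not a real Runscope endpoint.
PRICING_SERVICE_URL = "http://pricing-service.internal/quote"

def get_quote(item_id, default_price=None):
    """Ask the pricing service for a quote, degrading gracefully if the
    network call fails instead of taking the whole request chain down."""
    try:
        resp = requests.get(PRICING_SERVICE_URL,
                            params={"item": item_id}, timeout=2)
        resp.raise_for_status()
        return resp.json()["price"]
    except requests.RequestException:
        return default_price
```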

5. Benefit from people-oriented architecture

The same principles that bring benefits to the system architecture also spill over to the human aspects of software development. Teams can focus on single services, reducing the cognitive load for learning, problem solving and adding features. New developers have an easier time learning how systems interact because they all follow the HTTP uniform interface, which is not only functionally agnostic, but also language agnostic.

6. Consistency overcomes complexity

Often the net benefits of dividing large problems into smaller ones are lost due to the increased complexity of integrating components. However, in this case Runscope has been able to leverage the uniform integration interface to build Smart Client and Smart Service tooling to eliminate much of the integration pain. The end result is a net win for Runscope’s employees who can deliver our customers reliable features faster than ever.

Watch John’s full talk from APIdays to learn more about Runscope’s microservices infrastructure and how you can take these lessons to your business. See how Runscope can help you monitor the health of your own microservices by signing up for free. 

Categories: microservices, events

The Next Level in Collaboration: Share API Test Results

By Darrel Miller.

We’ve talked before about the ability to share HTTP requests with team members and other people outside of your organization. This feature makes communicating about HTTP requests much easier than sending screenshots in emails or trying to describe HTTP header values over the phone. However, sometimes a single request doesn’t tell the whole story. Runscope is built to make collaboration a breeze, and now you can share entire Runscope test results with anyone thanks to our new feature: shareable API test results.

A Link To The Big Picture

Runscope allows you to create a set of HTTP requests that exercise a particular API use case and assert that everything is working as expected. When a test fails, there is a problem to be solved that may require the input of several people. While developers on your team with a Runscope account can already see your tests and results, there may be developers, QA engineers or partners who need to be involved but do not have access to your Runscope dashboard. Being able to share test results containing details of each request and response by simply grabbing a single URL saves valuable time when trying to communicate about and solve the problem.

What if you need to show a test to developers or QA engineers on other teams, or even at another company? Creating a shareable test result is as easy as toggling the shareable flag at the upper right. Making a test result shareable gives those people a read-only view of your test results, and the flag can be turned off at any time.

Sharing The Love

Chances are, if you are reading our blog, you’re already a Runscope user and you may have already seen this feature on the dashboard or seen the Changelog email announcing it. What you may not realize is there is another sneaky way you can use this feature to help improve your business.  

We regularly hear from customers who understand the value of using Runscope to test APIs and wish that other teams in their company were doing the same. Sharing test result URLs with other employees in your company may be just the thing you need to demonstrate the value of API monitoring. So why not go ahead and share with a coworker who you think could benefit from testing their APIs?

Categories: howto, product

Microsoft Build, from an API Perspective

By Darrel Miller.

Last week, I had the opportunity to be Runscope's eyes and ears at the Microsoft Build Conference in San Francisco. Although this is the first Build event that I have attended in person, I have closely followed many of the previous Build and PDC events. I think it is fair to say that this Build had some of the most significant announcements of any past event. Microsoft has made some major course corrections over the past few years, and its vision now seems both internally consistent across its major divisions and consistent with the goals of its developers.

Build had a number of major announcements and demonstrated several products, many of which I believe will have an impact on the API space.

The crowd during a keynote presentation at Microsoft Build in San Francisco. 

ASP.NET Web Framework Goes Cross-Platform

While it wasn’t formally announced, Microsoft presented a number of sessions on ASP.NET MVC6, its new combined web application and web API framework. With the pending release of Visual Studio 2015, MVC6 is starting to approach release quality. Usually I would complain about framework releases being tied to Visual Studio releases because it indicates a dependency on Visual Studio. However, this release removes any of those doubts: not only can MVC6 be developed and built independently of Visual Studio, it is also independent of the Windows operating system.

In the API world, developing APIs in a programming language that is independent of the operating system is nothing new for developers familiar with Python, Java, Go and others. For .NET developers, however, MVC6 opens up a whole new world of possibilities.

What is often forgotten in the discussion of MVC6 becoming cross platform is the fact that MVC6 is now freed from the monstrosity known as System.Web, a binary component whose roots date back to the year 2000. Being free of this legacy system means that the new framework is much more lightweight, more testable and independent of the host server.

Furthermore, previous iterations of ASP.NET web frameworks were dependent on the correct version of the .NET Framework being pre-installed on the target server. With the new DNX (formerly K Runtime) execution engine, the correct version of framework components can be deployed alongside the application. This isolation solves many potential deployment, testing and maintenance headaches.

Deploy Web Applications and APIs Using Docker

Further efforts to simplify deployment and scaling were announced with support for deploying ASP.NET Web Applications/APIs to Docker containers. Currently containers are only supported running on Linux VMs, but Windows Containers are supposedly on the horizon. While deploying APIs on Docker containers is nothing new, the fact that Visual Studio can do remote interactive debugging on a Web API deployed to a Linux-based Docker container is quite impressive.

Azure App Services Bring “API Apps”

A large number of the Build announcements were around the new Azure App Service, a set of infrastructure and tooling for hosting, managing and connecting APIs. The App Service allows you to build self-contained Web APIs and deploy them as an "API App". You can also create connector API Apps that expose an interface to a third-party API. The App Service comes with a marketplace for sharing API Apps and a promise of future monetization capabilities.

A common thread among all these API Apps is a Swagger definition of the HTTP resources that they expose. This enables a concept called Logic Apps to provide an orchestration/workflow mechanism between the API Apps (think IFTTT for the enterprise). A purported benefit of API Apps providing a Swagger definition is that Microsoft can bake tooling into Visual Studio that generates client proxy code for quick access to the API App. Even so, I'm still waiting for someone to explain why generating client code from Swagger will produce better results than the client code generated from WSDL descriptions, but that's just my jaded perspective.

The frustrating part for me is that Microsoft has designed workflow description languages multiple times in the past. Even though previous implementations of the workflow engine may not be suitable for running at Azure scale, there should be no need to reinvent the language used to describe the workflow. From the demos I saw at Build, the Windows Workflow description language has many more capabilities than the current Logic Apps workflow description language, and it already has a working designer. I hope this isn't another case of "curly braces are cooler than angle brackets."

Office 365 Unified API: Showing Signs of Consistency

Microsoft online services have gone through quite a few growing pains over the years. Services have come and gone, and products have been renamed repeatedly. These changes have led to significant confusion in the space. Following Build, I get the sense that things are starting to coalesce into a more consistent set of offerings. I suspect that under the covers, there is still quite a mess that needs sorting out, but at least the story being presented to the customer is more consistent.

This consistency is demonstrated by the new Office 365 Unified API. At the core of this concept is the Graph API, which connects most of Microsoft's Office-related APIs and provides a single discovery and authentication mechanism for their resources. After a short time working with this API, I can see there is still a fair amount of work to be done, but it does feel like Microsoft is heading in the right direction.

Heading in One Direction

These are exciting times to be involved in the Microsoft community and it is nice to see the vast majority of the ecosystem heading in "One Direction". If you haven’t been following Microsoft’s work in the API space due to their lack of cross platform support or missing cloud offerings, it might be worth your while to take another look.

Runscope has support for importing Swagger files and we have OSS tooling for integrating with Microsoft-based web frameworks. For more of my insights on Microsoft, hypermedia and APIs, visit my blog.

Categories: apis, events
