
3 Benefits to Including API Testing in Your Development Process

By Ashley Waxman.

One of the most common ways we see our customers benefiting from API monitoring is in production—making sure those live API endpoints are up, fast and returning the data that’s expected. By monitoring production endpoints, you’re in the loop as soon as anything breaks, giving you a critical head start to fix the problem before customers, partners or end-users notice.

However, we’re starting to see more and more customers create those API tests during the build process in staging and dev environments. As Matt Bernier, Developer Experience Product Manager at SendGrid, says, “We can actually begin testing API endpoints before they’re deployed to our customers, which means that we’re testing to make sure everything is exactly as we’re promising before we deliver it to them.”

Benefits of Incorporating API Testing in Development

Including API tests in your test-driven development (TDD) process provides a host of benefits to engineering teams across the lifecycle that get passed down to customers in the form of better quality services. There are 3 critical ways that your company will benefit from including API tests in your development process:

1. Test Quality

If you wait until after development to build your API tests, you’ll naturally build them to be biased toward favorable test cases. Once an API or piece of software is built, you’re focused on how it’s supposed to perform instead of the other, equally likely scenarios, in which it will fail. Plus, much like iterating on software during development, iterating on API tests will only make them stronger and more comprehensive, which will benefit the team in the long term.

2. Test Coverage

Covering all the bases of potential software failures is a critical component of maintaining a quality product and customer trust. API testing during development can reveal issues with your API, server, other services, network and more that you may not discover or solve easily after deployment. Once your software is out in production, you’ll build more tests to account for new and evolved use cases. Those tests, in addition to the ones you built during development, keep you covered for nearly any failure scenario, which keeps QA and customer support teams from being bombarded with support tickets.

3. Test Reuse

One of the best reasons to create API tests in the early stages is the reward you’ll reap after deployment: the bulk of your tests are already taken care of. For instance, Runscope allows you to reuse the same tests in multiple environments, and to duplicate and share tests. Your dev and QA teams build tests and use them in dev and staging environments; then your DevOps teams can reuse those same tests, running them on a schedule in production to monitor those use cases. DevOps then iterates and adds more tests, which can be reused by dev and QA teams when building out new endpoints. Reusing API tests across the development lifecycle facilitates collaboration among teams and provides a more comprehensive and accurate testing canon.

Using API Testing with CI/CD & TDD

You can incorporate API testing into your development process in a couple of different ways. Many of our customers include API tests in their continuous integration (CI) and continuous deployment (CD) processes, either with trigger URLs or a direct Jenkins plugin. If an API test fails during CI or CD, the process is stopped and the API issue must be fixed before the build completes. Including API tests in this process gives engineering and product teams more assurance that they’ve covered all the bases before releasing product to customers.

You can also build tests specific to an API that’s in development, similar to how you would when building other software in TDD. Test new endpoints as they’re being built in development and staging, then trigger them to run as part of your CI/CD pipeline.
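For teams wiring this kind of gate up themselves, the logic can be sketched in a few lines. This is an illustrative Python sketch, not the exact Runscope API: the `status` and `assertions_failed` fields and the `fetch_result` callable are assumptions standing in for whatever your test runner reports after a trigger URL kicks off a run.

```python
import time

def run_passed(result):
    """Decide whether a finished test run should pass the build."""
    return result.get("status") == "completed" and result.get("assertions_failed", 0) == 0

def gate_build(fetch_result, run_id, timeout=300, interval=5):
    """Poll fetch_result(run_id) until the run finishes or the gate times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = fetch_result(run_id)
        if result.get("status") in ("completed", "failed"):
            return run_passed(result)
        time.sleep(interval)  # run still in progress; poll again shortly
    return False  # treat a timeout as a failed gate, never a silent pass
```

A CI step would call `gate_build` after hitting the trigger URL and fail the pipeline when it returns `False`.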

Learn More In a Free Webinar 

API testing is at the core of API monitoring, which is simply running the tests you create, whether in development or post-deployment, on a schedule. Building API tests during development of any software or service has far-reaching benefits across teams, all the way down to how your customer experiences the product.

However, API testing isn’t the only thing to consider during API development. We’ll be hosting a free live webinar on Wednesday, February 17 at 10 a.m. PT, 3 Things Nobody Told You About API Development, featuring Phil Sturgeon, author of Build APIs You Won’t Hate. In this webinar, you’ll learn more about how to incorporate API testing during development, and other tips for building better APIs. Attend this webinar and sign up for a free trial of Runscope, and we’ll give you a free copy of Phil’s book! Reserve your spot today.

Categories: events, testing, product

This Fortnight in APIs, Release X

By Ashley Waxman.

This post is the tenth in a series that collects news stories, helpful tools and useful blog posts from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you can’t drive 55 (without an API):

It’s not uncommon these days to see product releases accompanied by a fresh new API as part of the package. This week, Uber announced the availability of the UberRUSH API, which allows developers to incorporate the company’s new rush delivery service into apps for companies like Google Express, Rent the Runway and SAP, to name a few. Over the past year, Uber has been warming up to developers, offering public APIs, SDKs and more to get its services integrated in as many apps as possible. This article on The Verge provides a solid overview of the UberRUSH API.

For when your API docs need their own docs:

There have been a lot of great articles lately on strategies and standards around API documentation, and this fortnight is no exception. Taylor Singletary published Writing Great Documentation based on his experiences at Twitter, LinkedIn and Slack. His advice mirrors basic good writing principles that you should incorporate into your API documentation, like maintaining an active voice and highlighting key content. Use these tips, and both your English teacher and developers will be proud.

On the strategy side, James Higginbotham wrote Building Your API Documentation Strategy for SUCCESS on the LaunchAny blog. SUCCESS spells out all the checkpoints you need to take when writing your docs that are easy to do and pay off in the long run. If you read both of these articles, you’ll see that they’re taking their own advice very well—which means an easy and useful read for you.

For when your favorite schema takes Initiative: 

Choosing an API schema or description language is sometimes like trying to decide which child is your favorite (or least favorite). This child is smart and easy to understand, while this one mows the lawn and does the dishes. If you are caught between selecting Swagger (OpenAPI Specification) and another schema, last week’s announcement that Apiary will be supporting the OpenAPI Initiative can provide some relief.

For when you want to save money and live better by deploying across multiple clouds: 

Well known for helping their customers save money and live better, Walmart is extending its value promise to a whole new segment—engineering and ops organizations. Earlier this week, Walmart Labs open sourced OneOps, their internal cloud management and application lifecycle management platform for developers, allowing them to test and deploy in a multi-cloud environment, freeing them from being locked into a single cloud provider. We wrote about Walmart in our exploration of retail APIs, and it's exciting to see retailers become more innovative on the technology front.

For when the cloud takes over your water cooler chat:

Chat services like Slack and HipChat are quickly picking up adoption beyond just engineering teams and startups (in fact, they’re Runscope’s two most popular integrations). In this SD Times article, Alex Handy discusses how chat services are also giving rise to ChatOps, or the practice of managing the seemingly countless messages and notifications coming in through these services. ChatOps Is Taking Over Enterprises explores the history of ChatOps dating back to the 1990s and how it continues to evolve.

For when you need to hear from microservices practitioners and not just the pundits:

Introducing a new architecture or framework into a company is often less about the technology, and more about the people and culture. This holds true for both the proponents and opponents. In this InfoQ interview with two well-respected consultants and practitioners of microservices and Self-Contained Systems titled Microservices in the Real World, you’ll learn about dealing with common “us vs. them” behaviors when applying DevOps, the difference between microservices and SCS, and the importance of application and system metrics.

For when it’s about the journey and the destination:

Migrating huge sets of data to a new database is no easy feat, particularly when you want to do so transparently in the background with zero downtime. That’s why we wrote about lessons we learned during our recent double migration to DynamoDB in a two-part series. Part 1 discusses lessons in schema design, and Part 2 shows the details of our zero downtime approach and how we used Global Secondary Indexes (GSIs). To help out anyone else migrating to DynamoDB, we wrote a Boto plugin to log errors and send them to Runscope for debugging. Happy migrating!

Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!

Categories: api ecosystem, this fortnight in apis, apis, community

Migrating to DynamoDB, Part 2: A Zero Downtime Approach With GSIs

By Garrett Heel.

This post is the second in a two-part series about migrating to DynamoDB by Runscope Engineer Garrett Heel (see Part 1). You can also catch Principal Infrastructure Engineer Ryan Park at the AWS Pop-up Loft today, January 26, to learn more about our migration. Note: This event has passed.

We recently went over how we made a sizable migration to DynamoDB, encountering the “hot partition” problem that taught us the importance of understanding partitions when designing a schema. To recap, our customers use Runscope to run a wide variety of API tests, and we initially stored the results of all those tests in a PostgreSQL database that we managed on EC2. Needless to say, we needed to scale, and we chose to migrate to DynamoDB.

Here, we’ll show how we implemented the long-term solution to this problem by changing to a truly distributed partition key.

Due to a combination of product changes and growth, we reached a point where we were being heavily throttled due to writing to some test_ids far more than others (see more in Part 1). In order to change the key schema we’d need to create a new table, test_results_v2, and do yet another data migration.

Key schema changes required on our table to avoid hot partitions.
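The general technique for taking the heat out of a partition key can be sketched generically. This is an illustration of write sharding, not our exact schema change: it spreads writes for a single hot `test_id` across a fixed number of shard suffixes, at the cost of fanning reads out across every shard.

```python
import random

NUM_SHARDS = 16  # illustrative shard count; tune to your write volume

def sharded_key(test_id):
    """Partition key for a new write: one of NUM_SHARDS shards for this test_id."""
    return f"{test_id}#{random.randrange(NUM_SHARDS)}"

def all_shard_keys(test_id):
    """Every key to query when reading back all results for a test_id."""
    return [f"{test_id}#{i}" for i in range(NUM_SHARDS)]
```

Writes now land on up to 16 partitions instead of one; reads issue one query per shard key and merge the results.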

A Hybrid Migration Approach

We knew from past experience taking backups that operations on our entire dataset would take days, not minutes or hours. Therefore, we decided early on to do an in-place migration in the interest of zero downtime.

To achieve this, we employed a hybrid approach between dual-writing at the application layer and a traditional backup/restore.

Approach to changing a DynamoDB key schema without user impact.

Dual-writing involved sending writes to both the new and the old tables but continuing to read from the old table. This allowed us to seamlessly switch reads over to the new table, when it was ready, without user impact. With dual-writing in place, the only thing left to do was copy the data into the new table. Simple, right?
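A minimal sketch of the dual-write pattern just described, with plain dictionaries standing in for the two DynamoDB tables:

```python
class DualWriter:
    """Write to both tables; read from the old one until cutover."""

    def __init__(self, old_table, new_table):
        self.old = old_table
        self.new = new_table
        self.read_from_new = False  # flipped once the new table is fully backfilled

    def put(self, key, item):
        self.old[key] = item  # writes always go to both tables
        self.new[key] = item

    def get(self, key):
        table = self.new if self.read_from_new else self.old
        return table.get(key)

    def cut_over(self):
        """Switch reads to the new table; no user-visible change if data matches."""
        self.read_from_new = True
```

Once the backfill catches up, `cut_over()` flips reads with no downtime, and the old table can eventually be dropped.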

Backing Up & Restoring DynamoDB Tables

One of the ways that AWS recommends copying data between DynamoDB tables is via the DynamoDB replication template in AWS Data Pipeline, so we gave that a shot first. The initial attraction was that, in theory, you can just plug in a source and destination table with a few parameters and presto: easy data replication. However, we eventually abandoned this approach after running into a slew of issues configuring the pipeline and obtaining a reasonable throughput.

Instead, we repurposed an internal Python project written to back up and restore DynamoDB tables. This project runs Scan operations in parallel and writes the resulting segments out to S3. Notably, when scanning a single segment in a thread, we often saw a large number of records with the same test_id, indicating that a single call to the Scan operation often returns results from a single partition rather than distributing its workload across all partitions. Keep this in mind throughout the rest of this post, as it has a few important ramifications.
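The loop at the heart of such a segmented backup looks roughly like this. Here `scan_fn` is a stand-in for a boto3 `Table.scan`-style call; the `Segment`/`TotalSegments` parameters and `LastEvaluatedKey` pagination are the real DynamoDB parallel-Scan mechanism, but the surrounding structure is a sketch rather than our exact tool.

```python
def scan_segment(scan_fn, segment, total_segments):
    """Yield every item in one parallel-Scan segment, following pagination."""
    kwargs = {"Segment": segment, "TotalSegments": total_segments}
    while True:
        page = scan_fn(**kwargs)
        for item in page.get("Items", []):
            yield item
        last_key = page.get("LastEvaluatedKey")
        if not last_key:
            return  # no more pages in this segment
        kwargs["ExclusiveStartKey"] = last_key  # resume where the last page ended
```

One thread (or process) per segment runs this generator and streams its items to a file in S3.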

The backup and restore process.

The backup went off without a hitch and took just under a day to complete. It’s worth noting that, because of the original problematic partition key, we had to massively over-provision read throughput on the source table to avoid too much throttling. Luckily, cost didn’t end up being a major issue due to the short timeframe and the fact that eventually consistent read units are relatively cheap (think ~$10 to back up our 400GB table). The next step was to restore the data into the new and improved table; however, this was not as straightforward, due to our use of Global Secondary Indexes.

Impact of Global Secondary Indexes

We rely on a few Global Secondary Indexes (GSIs) on this table to search and filter test runs. Ultimately we found that it was much safer to delete these GSIs before doing the restore. Our issue centered around the fact that some GSIs use test_id as their partition key (out of necessity), meaning that they can also suffer from hot partitions.

We saw this issue come up when first attempting to restore backup segments from S3. Remember the note earlier regarding records within a segment having the same partition key? It turns out that restoring these quickly triggers the original hot partition problem by causing a ton of write throttling—to GSIs this time. Furthermore, a GSI being throttled causes the write as a whole to be rejected, resulting in all kinds of unwanted complications.

It's important to remember that hot partitions can affect GSIs too, including during a restore.

By creating the GSIs after restoring, we let DynamoDB automatically backfill them with the required data. During this process, any throttling that occurs is handled and retried in the background while the table remains in the CREATING state. Doing so ensures that usual traffic to the table will not be affected by the restore.
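Creating an index on an existing table is an `UpdateTable` call with a `Create` action under `GlobalSecondaryIndexUpdates`. Here's a sketch of how that payload might be built; the index name, key attribute and throughput numbers are illustrative placeholders, not our production values.

```python
def gsi_create_update(index_name, hash_key, read_units, write_units):
    """Build the GlobalSecondaryIndexUpdates payload for creating one GSI."""
    return {
        "GlobalSecondaryIndexUpdates": [{
            "Create": {
                "IndexName": index_name,
                "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": read_units,
                    "WriteCapacityUnits": write_units,
                },
            }
        }]
    }

# Roughly: client.update_table(TableName="test_results_v2",
#                              AttributeDefinitions=[...],
#                              **gsi_create_update("by_test_id", "test_id", 100, 1000))
```

Since only one index can backfill at a time, you would issue these calls sequentially, waiting for the table to return to ACTIVE between each.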

The backfill approach worked, but unfortunately it took a very long time for a few reasons:

  1. Only one GSI can be created (and backfilled) at a time

  2. The backfill caused hot partitions, slowing everything down significantly

My guess as to why we still saw hot partitions during the backfill is that DynamoDB processes records for the index in the same order they were inserted. So while we’re definitely in a better position with the throttling happening in the background rather than affecting the live table, it’s still less than ideal. Remember that time isn’t the only penalty here: write units cost 10 times as much as read units.

Aside from dropping the GSIs before the restore, the main thing I’d do differently next time would be to shuffle the data between the backup segments before restoring. Shuffling does require a little effort in this case, since the data (~400GB) doesn’t fit in memory, but it would’ve made a significant difference in avoiding write throttling during the backfill, saving both time and money.
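One way to shuffle data that doesn't fit in memory is a two-pass scatter/shuffle: randomly scatter records into many small buckets, then shuffle each bucket locally before restoring. A sketch, with in-memory lists standing in for bucket files on disk or S3:

```python
import random

def scatter(records, num_buckets, rng=random):
    """Pass 1: assign each record to a random bucket (each bucket fits in memory)."""
    buckets = [[] for _ in range(num_buckets)]
    for record in records:
        buckets[rng.randrange(num_buckets)].append(record)
    return buckets

def shuffled_stream(buckets, rng=random):
    """Pass 2: shuffle each small bucket in memory and emit its records."""
    for bucket in buckets:
        rng.shuffle(bucket)
        yield from bucket
```

With enough buckets, consecutive records in the restore stream almost never share a partition key, spreading the backfill's writes evenly.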

Post-Migration Savings & Growth

It’s now been a few months since our last migration and things have been running pretty smoothly with the schema improvements. We were able to save more than $1,000 a month in provisioned throughput by not needing to over-provision for hot partitions and we’re now in a much better position to grow.

It’s safe to say that we learned a bunch of useful lessons from this undertaking that will help us make better decisions when using DynamoDB in the future. Understanding partition behavior is absolutely crucial to success with DynamoDB and picking the right schema (including indexes) is something that should be given a lot of thought.

If you'd like to learn more about our migration, feel free to leave a question in the comments section below, and attend the AWS Pop-up Loft in San Francisco tonight to hear our story in person. [Note: This event has passed.]

Categories: community, howto, microservices

API Monitoring for API Management: The Ins & Outs

By Jon Wilfong.

During my years at Mashery, an API management provider, I had the privilege of helping customers launch and scale successful API programs. Now that I'm at Runscope, I want to share my knowledge of how API monitoring can help companies using any form of API management. More than 25% of Runscope employees, myself included, have previously worked at API management companies, so we often find ourselves discussing the ways in which Runscope can help companies who have deployed API proxies, gateways and ESBs.

API management and API monitoring are two different but complementary solutions that support your APIs:

  • API management will help you stand up your API, allow the intended audience to procure API keys and help you to manage throttling, access controls and more.
  • Once you’ve stood up your API, API monitoring keeps a constant eye on the API that your developers and company rely on. It does so by running scheduled tests and alerting you when an API goes down, is slow or is returning incorrect data, so you can solve API problems before they impact your customers.

Most of the time, reaping the benefits of API management requires inserting a reverse proxy or API gateway into your infrastructure and API call flow. Many of these solutions are designed for scale and reliability, but as most of us in the tech industry know, there will always be times when something fails.

Here are the ways that you can prevent, identify and solve issues with API proxies and gateways a heck of a lot faster by using Runscope.

Monitoring the Ins & Outs

In order to identify the source of an API issue, keeping an eye on all the segments of an API call is critical. Gateways and proxies, whether they’re in the cloud or on-premises, introduce another component in your infrastructure, and thus another component that needs monitoring.

If your APIs are responding too slowly and/or are returning unexpected status codes or incorrect data, this could affect your mobile clients and integrations, potentially hurting your business.

Do you currently have the insight needed to identify where the problem is occurring? Are you confident that you’ll find the problem before your customers or partners do? 

Here's how you can use Runscope to monitor your proxies: 

Step 1: Create tests in Runscope that hit the gateway/proxy. Run them on a frequent schedule. I recommend once-a-minute basic tests that assert on latency, response codes, different HTTP verbs and also validate some expected responses.

Step 2: Create another set of tests which mimic the first tests as closely as possible but direct them to your application server instead of your gateway. You may be able to skip some authorization and authentication since your gateway likely provides these services.

Note: You’ll most likely want to use Runscope’s 12 global cloud locations as well as the On-Premises Agent in tandem for monitoring both of these scenarios from within and outside of your own network. The On-Premises Agent will execute tests against private and/or unprotected APIs and send results to Runscope’s cloud-based dashboard.
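The assertions Step 1 recommends can be sketched as a plain function over an observed response; the latency threshold and the error-field check below are illustrative defaults, not Runscope's built-in assertion syntax.

```python
def check_response(status_code, latency_ms, body,
                   max_latency_ms=500, expected_status=200):
    """Return a list of failed assertions; an empty list means the API looks healthy."""
    failures = []
    if status_code != expected_status:
        failures.append(f"status {status_code} != {expected_status}")
    if latency_ms > max_latency_ms:
        failures.append(f"latency {latency_ms}ms > {max_latency_ms}ms")
    if "error" in body:  # body as a parsed JSON dict; key check is illustrative
        failures.append("response body contains an error field")
    return failures
```

Running the same checks against both the gateway and the application server makes it easy to see which hop introduced a failure.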

Alerting the Right Team

Option 1: An automated service tells you that something is wrong.

Option 2: Your customer tells you that something is wrong.

Feel free to vote for Option 2, but be prepared for an influx of support tickets that could have been prevented.

Now that you’re running frequent functional tests against different components in your stack, Runscope can notify you and your team when assertions fail before or after hitting your gateway. We’ll even let your team know when and where issues arise via Slack, HipChat, PagerDuty and more.

Fix It Faster

Reproducing issues is a pain. Have you ever experienced a production issue, tried to get help, and been told, “I don’t see any problems on my side. Can you try reproducing the issue?"

That’s why we have sharable links, which let you solve problems faster within your own team, and even with outside groups. If a scheduled test fails when hitting the API gateway but passes when hitting your services directly, why not send a link of both results directly to your vendor? Getting a clear picture of the request and response should cut out some extra steps, shorten incident time and minimize the amount of effort spent by both you and your vendor.  Everyone wins!

Recap / TL;DR

API gateways and proxies offered by API management vendors can save your team a huge amount of time and effort, but they can also create another point of failure in your stack. Runscope can help you prevent, identify and solve API problems faster if you have a gateway or proxy in your infrastructure. How?

Prevent: Run tests on a schedule from multiple locations. Monitor at your gateway as well as at your API source.

Identify: Notifications inform you of problems before your customers find them. Quickly determine where in your stack the problem is getting introduced.

Solve: Inspect the request/response before and after the call hits your gateway. Share the results internally and with your vendors for a faster resolution.

Let us know in the comments if you’re using API monitoring in conjunction with your API management solution, and sign up for Runscope for free to start monitoring your APIs—and your API proxies—today. I’m happy to help get you started; feel free to email me at with questions!

Categories: api ecosystem, howto, monitoring
