Runscope API Monitoring


6 Common API Testing Mistakes (and How to Avoid Them)

By Neil Mansilla.

If you work with APIs in your day-to-day, then you’ve probably experienced this scenario: you put in several hours to create tests for all your services, and then plug them into your workflow, but somehow you still aren’t aware that your APIs are failing. Either a customer tells you before you find out, your team can’t triage an API issue fast enough, or your tests are passing but your apps still break. If you’ve encountered any of these headaches, the problem may be your process around building and maintaining your API tests. We’ve identified six API testing mistakes that engineers, API developers, QA testers and DevOps teams alike have made so we can help you solve those problems and build better APIs together.

1. Building tests that don’t represent real functional use

It’s easy to set up tests that verify independent services and endpoints, and then call it a day when they all pass. Test Inventory API, check; test Shopping Cart API, check; and so on. However, your end-users are likely consuming these methods in conjunction with one another, not independently. Building tests without considering how the APIs will be consumed may be quicker in the short-term. However, in doing so, you won’t be testing across concerns, which could prevent you from uncovering and debugging potentially serious API issues.

In traditional software testing, tests built across multiple units or functions are called integration tests. Building integration tests for APIs can be easier than you think. Take a typical retail API scenario. We’d test some inventory/SKU management resource methods along with a shopping cart resource method, like this:

  1. GET /items (fetch a list of items)

  2. POST /items (create a new item)

  3. GET /items/{itemId} (verify existence of newly created item)

  4. POST /cart (add this new item to the shopping cart)

  5. DELETE /items/{itemId} (remove the item)

  6. GET /items (verify that the item has been removed)

Each of the APIs used in the above scenario may work when tested independently, but without testing the entire flow, you cannot be sure that they are working together as intended.
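The six-step flow above can be sketched as a single integration test. To keep this self-contained, the sketch below runs against a tiny in-memory stand-in for the API rather than real HTTP calls; the class, method names, and the "widget" item are illustrative, but the sequence of steps mirrors the list above.

```python
class FakeRetailAPI:
    """In-memory stand-in for the inventory and cart endpoints."""

    def __init__(self):
        self.items = {}   # itemId -> item data
        self.cart = []    # list of itemIds in the cart
        self.next_id = 1

    def get_items(self):                      # GET /items
        return list(self.items.values())

    def create_item(self, name):              # POST /items
        item = {"id": self.next_id, "name": name}
        self.items[self.next_id] = item
        self.next_id += 1
        return item

    def get_item(self, item_id):              # GET /items/{itemId}
        return self.items.get(item_id)

    def add_to_cart(self, item_id):           # POST /cart
        self.cart.append(item_id)

    def delete_item(self, item_id):           # DELETE /items/{itemId}
        self.items.pop(item_id, None)


def run_inventory_cart_flow(api):
    """Execute all six steps as one integration test, not six unit tests."""
    before = len(api.get_items())                 # 1. fetch the item list
    item = api.create_item("widget")              # 2. create a new item
    assert api.get_item(item["id"]) is not None   # 3. verify it exists
    api.add_to_cart(item["id"])                   # 4. add it to the cart
    api.delete_item(item["id"])                   # 5. remove the item
    assert len(api.get_items()) == before         # 6. verify removal
    return True
```

In a real test each method call would be an HTTP request, with the item ID from step 2 threaded through the later steps; if any single step regresses, the whole flow fails rather than silently passing step by step.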

2. Leaving out response time assertions

API tests can be built to check for any number of variables, like status codes and response content. Those pieces may be vital for checking method correctness, but what if an API request is taking 10 seconds to respond? Does this test still sound like it should pass? While often overlooked, response time assertions are a quick but necessary addition to any API test to make sure all your boxes are checked when it comes to a complete end-user experience.

Set up response time assertions that are reasonable and reflect how long you or your developers think a request should take. If you start at a high threshold, you can scale down gradually and see what works for that particular request. A high-threshold response time assertion is far better than none, particularly when testing production endpoints. An app that takes too long to load can send your consumer off to the next app.
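A response time assertion can be this small. The 2-second threshold below is an assumption, not a recommendation; the point is to start with some generous ceiling and tighten it as you learn what is normal for the endpoint.

```python
def assert_response_time(elapsed_seconds, threshold_seconds=2.0):
    """Fail the test if the request took longer than the threshold."""
    assert elapsed_seconds <= threshold_seconds, (
        f"response took {elapsed_seconds:.2f}s, "
        f"expected under {threshold_seconds:.2f}s"
    )
    return True

# With the requests library, the elapsed time of a real call is available as:
#   resp = requests.get("https://api.example.com/items")
#   assert_response_time(resp.elapsed.total_seconds())
```

A 10-second response would fail this check even if the status code and body are perfect, which is exactly the gap the assertion is meant to close.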

3. Not including API dependencies

Traditional software integration testing involves testing separate units of code together, ensuring that they operate consistently and reliably as a whole. Modern applications depend heavily on web services, and it is commonplace for some of those services to live outside your four walls. Testing only your own APIs therefore doesn't give you the whole picture of how your app will operate in the real world. Your API is a product that depends on partner services, and if any of those services fail, your API may appear to be failing to your customers.

A good rule of thumb: If your product relies on it, then you should test and monitor it. Third-party integrations can be just as valuable as your own APIs, and if your app or service is broken, your consumers won’t know (or care) about whose service is failing.
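The rule of thumb above can be turned into a simple dependency check. The service names and URLs below are placeholders, and the check takes a fetch callable so the sketch stays self-contained; in practice you would pass something like `lambda url: requests.get(url, timeout=5).status_code`.

```python
# Hypothetical third-party services this product depends on.
DEPENDENCIES = {
    "payments": "https://payments.partner.example/health",
    "shipping": "https://shipping.partner.example/status",
}


def check_dependencies(fetch):
    """Run one health check per dependency; return the ones that failed.

    `fetch` is any callable mapping a URL to an HTTP status code.
    """
    failures = {}
    for name, url in DEPENDENCIES.items():
        try:
            status = fetch(url)
        except Exception as exc:
            # Treat a connection error the same as a failing check.
            failures[name] = str(exc)
            continue
        if status != 200:
            failures[name] = f"status {status}"
    return failures
```

Run on a schedule, a check like this tells you a partner service is down before your customers do, even though the failing code isn't yours.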

4. Testing APIs in a vacuum

Building API tests can be a bit of a solo act, but the minute a test is in your workflow, you need to bring in the appropriate teams for when issues occur. API tests don't always fail for the same reasons and can impact a variety of stakeholders, so test failures require the attention of different teams in your organization. If you set up test failure notifications to go only to yourself, you're adding time, effort and headaches to your workflow.

The minute an API test is added to your development or operational workflow, involve the right people via the notification channels they use most. Add notifications for the team responsible for remediating API issues by integrating your API tests with Slack, PagerDuty, HipChat and other tools to empower your whole team to solve API problems fast.
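As one concrete example of the integrations above, Slack incoming webhooks accept a JSON body with a `text` field. The webhook URL shown in the comment is a placeholder, and the message format is an assumption; the sketch only builds the payload so it stays self-contained.

```python
import json


def build_failure_message(test_name, assertion, environment):
    """Format an API test failure as a Slack incoming-webhook payload."""
    return json.dumps({
        "text": (f":rotating_light: API test *{test_name}* failed in "
                 f"{environment}: {assertion}")
    })

# Sending it is a single POST (requests shown for illustration):
#   requests.post("https://hooks.slack.com/services/T000/B000/XXXX",
#                 data=build_failure_message("Cart flow", "status != 200",
#                                            "production"),
#                 headers={"Content-Type": "application/json"})
```

The same payload-building step works for most chat and alerting tools; only the endpoint and field names change.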

5. Ignoring intermittent problems

When your tests come back with 99.99% success, that measly 0.01% failure rate is easy to chalk up to minor blips that will recover on their own. Errors that occur infrequently and recover quickly become easy to ignore. However, by not digging into the root cause of intermittent problems, or worse, not noticing an increase in their frequency, you could be missing the chance to catch a systemic problem early, before it manifests as a much bigger failure later.

Going into traffic logs to debug these issues is an extra step, but it’s a step worth taking. Runscope logs all the API traffic that runs through your tests and allows you to compare requests and results side by side so you can debug intermittent problems quickly, and avoid potentially serious future problems.
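An "ignorable" failure rate is only ignorable while it isn't trending upward. A minimal sketch of tracking that trend: record each test run's pass/fail result in a sliding window, so a rising failure rate over recent runs stands out instead of being averaged away.

```python
from collections import deque


class FailureRateTracker:
    """Track the failure rate over the most recent test runs."""

    def __init__(self, window=1000):
        # deque with maxlen silently drops the oldest result when full,
        # so the rate always reflects the last `window` runs.
        self.results = deque(maxlen=window)  # True = pass, False = fail

    def record(self, passed):
        self.results.append(passed)

    def failure_rate(self):
        if not self.results:
            return 0.0
        fails = sum(1 for r in self.results if not r)
        return fails / len(self.results)


tracker = FailureRateTracker(window=100)
for i in range(100):
    tracker.record(i % 50 != 0)   # simulate two failures in 100 runs
```

Comparing this rolling rate against last week's is a cheap way to notice that "rare" blips are quietly becoming common.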

6. Managing everything by hand

With so many dev tools out there helping you automate your workflow, there’s no reason to keep creating API tests manually. Doing this work by hand can be cumbersome and take time away from other important work. One way to streamline your test creation process is by importing the work you’ve done in tools like Swagger or importing HAR files. In Runscope, these definitions and files can instantly be turned into API tests.

If you manage large test suites or have complex multi-step tests, break free of the UI and launch your IDE. The Runscope API allows you to programmatically read, create, modify and schedule large numbers of complex API tests and puts test automation and management in your hands.
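A hedged sketch of what driving test management from code can look like. The endpoint path and payload fields below are illustrative rather than authoritative; consult the Runscope API documentation for the exact resources, and note that authentication uses a bearer token. The sketch only assembles the request so it stays self-contained.

```python
import json

API_ROOT = "https://api.runscope.com"


def build_create_test_request(bucket_key, name, token):
    """Assemble the pieces of a 'create test' API call (illustrative shape)."""
    return {
        "method": "POST",
        "url": f"{API_ROOT}/buckets/{bucket_key}/tests",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "name": name,
            "description": "created programmatically",
        }),
    }

# Sending it with requests would be:
#   req = build_create_test_request("my-bucket", "Cart flow", token)
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```

Looping a call like this over a list of test definitions is how large suites get created, modified and scheduled without ever opening the UI.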

Start Testing the APIs You Depend On

With these best practices in your arsenal, you’re set to begin testing the APIs that power your business. Sign up for Runscope to start testing your APIs today. You can also watch the recording of our free live webinar, Getting Started with API Monitoring, where we show you how to create your first API test, notify your team when tests fail and more.

Categories: apis, howto, testing

Democratizing QA: How Automated Testing Tools Empower Teams

By Patrick McKenna.

We’re excited to have Patrick McKenna, Global Head of Product Engineering at Kurtosys, discuss the evolution of QA with the Runscope community.

Sometimes I wonder how quality assurance (QA) got this bad. Humanity's engineering achievements are nothing short of mind blowing. Buildings that reach the stars, tunnels bored under cities, bridges that span seas. But when it comes to software that doesn't break, quite frankly, we suck.

If you agree with me, the chances are you feel the situation is not getting any better. If you're an engineer, you probably feel like you don't test your software well enough. If you are a product manager, you likely feel like in the time it takes for you to explain how to test your software, you could have just done it yourself. If you run QA, you probably feel that people don't understand what you do or really value it, but gosh, is it hard! And as a CEO, you probably feel like your team doesn’t do enough QA because the last release had an embarrassing bug that showed up during your VC pitch.

If any of that rings true, then welcome to the club. The bottom line: QA is hard, and it is mostly because, up until now, much of the tooling out there has sucked. However, things changed drastically for us when we began relying on both API testing and UI testing solutions that are so easy to use, anyone from a developer to a business person can take them for a spin.

QA-ing Our QA Processes

Not long ago, the best QA solution was to throw people at the problem; pay an offshore outfit bags of cash to write a custom Selenium framework for your app, which unfortunately ceases to be useful the moment you discontinue the contract because, guess what? Maintaining 1,100 Selenium tests is impossible for anyone but a gang of offshore QA guys.

Two years ago, we replaced our monolith system with a microservices stack. We got into trouble pretty fast with integration issues—services at the core would change and everything else would break because of response differences and so forth. The unit tests on the core would pass, but those tests didn't cover all the ways that other services were interacting with them. As a result, sometimes the platform remained broken until someone manually tested it or another service was rebuilt. Sometimes defects went undetected for longer than we would have liked, which made pinpointing them time-consuming.

We realized pretty quickly that the best people to contribute integration tests for a service are the people who depend on it—who aren't necessarily the people who have historically written the tests. We also understood that integration tests need to run constantly, not just at build time in the dev environment, but also at runtime in production. As the platform grew and was used in ways we didn’t anticipate, defects would appear. Our users became the one mutagenic factor that the build server couldn't truly account for.

Decentralizing QA with Runscope and Ghost Inspector

This notion of democratizing the test capability started to germinate with us. We wanted to empower anyone in our company to write tests, especially the people on our team who maybe didn’t know how to code, but worked very closely with our clients. We started using Selenium and Postman, but the learning curve was too great and they lacked an easy framework for continually running and contributing to the corpus of services we have in our stack.

Once we signed up for Runscope for API testing and Ghost Inspector for browser and UI testing, they quickly bridged this gap for us. At the risk of this sounding like a puff piece, I have to say that they are incredibly usable pieces of software that let you write functional and integration tests for your web app in an absolutely delightful way. I honestly love their software.

The real victory for us, though, is how both Runscope and Ghost Inspector have changed the face of QA in our company. We are beginning to grow what I believe is a QA culture, in which people's feelings of dread and malaise around QA are replaced with feelings of empowerment—even our chairman writes Ghost Inspector tests now!

Ghost Inspector is instrumental here because its video capability allows us to observe the exact experience a user would have visiting a site or app. Knowing something failed is one thing, but having a recorded video of the failure case, plus browser logs, is quite another. We also use Runscope heavily to monitor our APIs because it gives us probes that we couldn’t easily achieve in any other way.

Cutting through Complexity

When I think about what this all means to me, I'm reminded of something Kent Beck once said, something along the lines of, “Writing software without tests makes me anxious ... and I don't think software developers deserve to feel anxious about their jobs." When you boil down what we do as technologists, we try to make cool stuff that makes people’s lives better and absolutely, positively, never ever breaks. Trouble is, sometimes that last part is a whole lot more consuming than the first, and that's a bummer.

So I guess I'm here to report that QA might be easier than you think nowadays. With tools like Runscope and Ghost Inspector, you can all do it. While you are on the couch, on the way to work, in the shower (well, maybe not in the shower).

Categories: testing, microservices, customers, ghost inspector

Everything is going to be 200 OK®