
How to Test Private & Local APIs with the Runscope On-Premises Agent

By Ashley Waxman.

Different teams need to access and test APIs in different environments and stages of the development lifecycle. Being able to reuse these tests across every stage is critical for effective collaboration and efficiency, and running tests against APIs during development or APIs that live behind a firewall should be just as easy as testing APIs in the public cloud. The Runscope On-Premises Agent downloads easily to your local machine and executes requests in the location that makes the most sense for your infrastructure, all while keeping your test management in the cloud.

In the video below, Neil Mansilla, VP of Developer Relations at Runscope, provides an in-depth demo of the On-Prem Agent and how you can leverage API testing in your organization, no matter where your APIs live. This video covers:

  • How to install the Runscope On-Premises Agent and instantly begin testing APIs behind the firewall
  • Different use cases for testing APIs in a local environment, including testing private APIs and testing APIs early in the development process
  • Ways to incorporate API testing into your continuous integration (CI) processes

The On-Prem Agent is available during the Runscope free trial and on any paid plan. Give it a try today to begin testing private and local APIs in minutes.

Categories: testing, howto, product

Using Runscope to Test APIs Protected with the Hawk Authentication Scheme

By Gustavo Straube.

We’re excited to have Gustavo Straube, Software Engineer and Co-founder at both Creative Duo and All Day Use, show the Runscope community how to test APIs that use alternative or custom authentication schemes.

Nowadays, employing authentication protocols with your APIs is a necessity, but dealing with them can be taxing. At Creative Duo, we usually build APIs to act as the backend for mobile apps and modular systems. In both cases, even when using TLS to protect the data, we have to guarantee that only authorized requests get a valid response from any service. In the past, we used a home-grown solution to prevent unwanted access, but it wasn't reliable, and writing a trustworthy security protocol has never been our focus.

After trying some alternatives, we ended up choosing the Hawk authentication scheme for its simplicity and ease of integration. It provides enough safety for our applications and their consumers. The primary design goals of Hawk are to:

  • Simplify and improve HTTP authentication for services that are unwilling or unable to deploy TLS for all resources
  • Secure credentials against leakage
  • Avoid the exposure of credentials sent to a malicious server over an unauthenticated secure channel due to client failure to validate the server's identity as part of its TLS handshake

As we started to configure the first test, we ran into a problem: the API we were testing uses Hawk for authentication, but we only found built-in options for Basic Authentication and OAuth 1.0.

Checking the available configurations, there was an option to add static headers. However, as a replay-protection measure, Hawk authentication headers are valid for only one minute. With that in mind, a static header was not an option, since we would have to update it by hand every minute, and automated tests would be impossible to run.

Recently we discovered Runscope and its awesome features to monitor and test APIs. After a tweet and a few emails exchanged with the Runscope team, we got a script to start with. Yes! It is possible to write scripts that run before (initial scripts) and after tests. The scripts are written in JavaScript, which is a great choice since a lot of developers know at least a bit of JS. It's also possible to use a few common libraries within scripts, like Moment.js and CryptoJS.

Creating the Authentication Header

Before creating the dynamically generated authentication headers, we must set up some variables we'll use in the script.
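The script below relies on two variables, the Hawk key ID and the shared secret. Here is a minimal sketch with placeholder values (in practice these are typically defined as initial variables in the test environment rather than set in a script):

// Placeholder values; replace with your real Hawk credentials.
variables.set("hawkKey", "my-hawk-key-id");      // Hawk key identifier
variables.set("hawkSecret", "my-hawk-secret");   // Hawk shared secret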

Now we can start to build the header itself. The Hawk protocol is simple: it is basically an HMAC hash of the request info, computed with a secret key.

Knowing the protocol basics, the following code becomes pretty obvious.

Step 1: Set up the request data into variables.

var now = parseInt(moment() / 1000);       // current Unix timestamp in seconds
var method = "GET";                        // HTTP method of the request
var path = "/api/path/to/resource";        // path of the resource being requested
var host = "";                             // hostname of the API, without scheme or port
var port = 80;                             // port the API listens on
var ext = null;                            // optional Hawk "ext" application data
var nonce = Math.random().toString(36).substring(6);  // random nonce for replay protection

Step 2: Create the normalized request string, with each value followed by a newline, as required by the protocol.

var artifacts = "hawk.1.header\n" + now + "\n" +
     nonce + "\n" +
     method + "\n" +
     path + "\n" +
     host + "\n" +
     port + "\n\n";   // the extra newline stands in for the (omitted) payload hash
if (ext) {
     artifacts += ext;
}
artifacts += "\n";

Step 3: Create the hash using the HMAC function from CryptoJS.

// HMAC-SHA256 of the normalized string, keyed with the Hawk secret and Base64-encoded.
var mac = CryptoJS.HmacSHA256(artifacts, variables.get("hawkSecret")).toString(CryptoJS.enc.Base64);

Step 4: Build the authorization header contents.

var header = "Hawk id=\"" + variables.get("hawkKey") + "\", ts=\"" + now +
     "\", nonce=\"" + nonce + "\"";
if (ext) {
     header += ", ext=\"" + ext + "\"";
}
header += ", mac=\"" + mac + "\"";

Step 5: Set the header contents to a variable to use in the request configuration.

variables.set("hawkHeader", header);

Now that we have an authorization header that changes with the current timestamp, we can add it to our request using the variable set by the script.
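In the request configuration, the Authorization header can reference that variable. Assuming Runscope's usual double-brace variable templating, the header value would look like this:

Authorization: {{hawkHeader}}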

Finally, we can run our test, check the response and add some assertions.
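As a minimal sketch of a post-response script (Runscope scripts expose the Chai assert library; the expected 200 status and the "data" field below are assumptions about the API, not part of the original setup):

// Post-response sketch; the expected status and the "data" field are assumptions.
assert.equal(response.status, 200, "Hawk-protected endpoint should return 200 OK");
var body = JSON.parse(response.body);
assert.isDefined(body.data, "response body should include a 'data' field");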

Initial scripts run before the request, so we cannot retrieve data from the request itself when creating the header. That is why we set all request parameters in the script (method, host, path, etc.). To avoid modifying the script for each test configuration, we can create variables for the settings that change between tests, so the script can simply be copied and pasted when needed, as in the sketch below.
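A rough sketch of that approach, assuming hypothetical per-test variable names such as apiMethod, apiHost, apiPath and apiPort:

// Hypothetical per-test settings read from variables instead of hard-coded values.
var method = variables.get("apiMethod") || "GET";
var host = variables.get("apiHost");
var path = variables.get("apiPath");
var port = parseInt(variables.get("apiPort") || "443", 10);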

At Ease with API Testing & Auth Tools

Now that we've found an auth solution that works well for us, we're put even more at ease having a tool like Runscope in our arsenal to monitor and test the health of all of those APIs. If you have any questions or comments about using the Hawk protocol or testing APIs that use authentication schemes, feel free to leave a comment below or tweet at me!

You can start testing APIs protected by an authentication protocol by signing up for Runscope for free. 

Categories: code samples, howto, product, monitoring, security, testing, community

This Fortnight in APIs, Release II

By Ashley Waxman.

This post is the second in a series that collects news stories from the previous two weeks and is curated lovingly by the Runscope Developer Relations team. The topics span APIs, microservices, developer tools, best practices, funny observations and more.

For when you need a macro view of microservices:

We talk a lot about microservices at Runscope, but what does it really mean? In Microservices 101, Emiliano Mancuso explains the what, when, how and who of microservices, along with pros and cons lists for using microservices and ways to get started. If you currently have a monolithic architecture and are toying with making the move to microservices, this article has the graphs and diagrams to help you get started on the right foot.  

For when you want to go old school:

Everything old is new again. As computers and technology have evolved over the past few decades, so have our ways of building things, with more developer tools available than ever. However, Fred Wilson is seeing a resurgence of the command line interface. With the increased reliance on short bursts of communication thanks to Twitter and text messages, we could soon be ordering coffee with just a quick line of code. There is already a growing list of telegram bots that have been created in just a few weeks.

For when your APIs do a better job than you do:

It’s no secret that businesses today are leveraging APIs to power infrastructure, apps and partnerships, but what if they could also employ APIs instead of hiring an engineer? In the article APIs Are the New FTEs, Guarav Jain evaluates how engineering roles are beginning to be replaced with front-end frameworks or other tools, eliminating the need for, and cost of, human capital. APIs are a big part of this shift, as exemplified by the company Smyte, which was formed by a few ex-Facebook anti-fraud engineers who made “the knowledge of the world’s leading experts available as an API for a fraction of their former salaries”.

For when you find yourself alone with your containers:

At this month’s AutomaCon, the Infrastructure as Code Conference, the Runscope team was on the ground with some of the most innovative and influential minds in DevOps. While we never found one standard definition for “infrastructure as code”, we did make some interesting observations on just how many people really use containers in production and ways DevOps engineers are incorporating security measures into their practices.  

For when you like it so you want to put a price on it:

Setting a price on your product can make or break your business: too high and you lose customers, too low and you can’t cover expenses. Plus finding the right pricing model can be a challenge. Last week, industry heavyweight Michael Dearing, founder of Harrison Metal and investor in Heavybit, PagerDuty and CircleCI, discussed pricing strategies for companies whose main audience is the developer community. This article dives into his five top tips, including understanding perceived value and diversifying your offerings.  

For when you’re shopping around for ecommerce APIs:

The number of opportunities for retailers to sell to consumers has in many ways outpaced the technology to make those experiences streamlined and efficient, until now. Stripe, which already offers one of the most popular payments APIs, has branched out and released Relay, a set of APIs that “makes it easier for developers to build great ecommerce experiences and for stores to participate in them”. With Relay, Stripe aims to solve some of the pains that both consumers and retailers feel when shopping on mobile—pains that lead to shopping sites making up only 15% of purchases on mobile devices. With Stripe’s history of providing excellent developer tools, plus a partnership with Twitter for the release, Relay could be a win-win for the ecommerce market.

For when the ATM isn’t enough fintech for you:

Fintech has been getting a lot of buzz lately, and APIs are fueling big changes and concerns within the finance industry. If you need some fintech 101, Bill Doerrfeld takes you through the history of fintech starting with companies like Kickstarter and Bitcoin, what “making the bank programmable” really means, and explains how the numerous regulations in finance necessitate that fintech companies agree on standardized APIs and self-serve adoption processes. Once you’ve brushed up on the big picture, check out how people in fintech can learn from companies like Uber and Netflix and their “full stack approach” to building a complete, end-to-end product. In Full Stack Banking: How Fintech Will Fuel API-Based Competition, Ron Shevlin notes that APIs are becoming central to the competitive dynamics of the finance industry, and that fintech companies need to reassess their hierarchy of needs if they want to thrive in today’s technology landscape.

For when you want to easily manage your crypto certificates:

The last time you needed to generate a public/private key pair, you likely searched Google to recall those OpenSSL keygen and certificate request commands. After generating the keypair, how did you manage and protect those keys to the kingdom? Netflix is solving those pains by open-sourcing Lemur, a certificate management framework built for developers. Lemur helps by generating the keys, creating and submitting the CSR, deploying the certificate and securely storing the secret key. Lemur features a nice web-based UI as well as an API, and you can find the source code on GitHub.

For when you like to geek out on dashboards:

Dashboards are a useful tool to communicate raw data in a visually compelling and organized way, but building the right dashboards for your audience can be challenging. Accela, a civic platform for government agencies, created a real-time dashboard for civic data from customers like local government organizations, using the scalable analytics from Connect API and API monitoring and testing from Runscope. Whether you’re looking to leverage government data in your next project or you just want to see how other people are successfully building effective dashboards, this article walks you through each step of the building process and includes a tutorial video.


Notice something we missed? Put your favorite stories of the past fortnight in the comments below, or email us your feedback!

Categories: this fortnight in apis, api ecosystem

What’s Really Going on at the Bleeding Edge of DevOps—Takeaways from AutomaCon

By Garrett Heel.

We’re big fans of automation at Runscope, and most of the automation we practice is behind the scenes. Runscope is built on more than 70 independent microservices that run in the cloud, and being able to orchestrate and automate those services efficiently is absolutely essential to successfully scaling our products and processes. Last week, we attended AutomaCon in Portland, Oregon, a conference focused on automation for DevOps professionals. Engineers interested in the bleeding edge of DevOps came together to hear from the brains and hands behind some of the most popular automation tools like CoreOS, Chef and Puppet.

While AutomaCon is known as the “infrastructure as code” conference, every presenter put forward a different definition of the concept, making for a diverse and compelling collection of talks. What made the conference particularly noteworthy is that the talks were centered on what the presenters were doing for automation in practice—no theories, no speculation, just real tools and experiences from which the rest of the community can learn. We’ve compiled our learnings from the conference into four key themes that reveal some interesting findings about today’s DevOps ecosystem and where it’s headed.

1. No standard definition for “infrastructure as code”

AutomaCon kicked off with the emcee posing the question, “What is ‘infrastructure as code’?”, and nearly every presenter over the course of two days responded in his or her own way. Many times, the definition emerged through stories about practical applications. My favorite definition came from Adam Jacob, CTO and Co-founder of Chef:

Infrastructure as code “enables the reconstruction of the business from nothing but a source code repository, an application data backup, and bare metal resources.”

Even though others at the conference didn’t give this exact definition, the way they spoke about automation was in the spirit of this quote, and this definition was the most concrete one I heard all week.

2. Containers and orchestration: Perception vs. reality

Greg Poirier, Factotum at Large at Opsee, presents at AutomaCon.

Docker and containerization are dominating engineering and DevOps conversations of late. AutomaCon had some great talks in this arena, and of note was Kelsey Hightower, Product Manager and Chief Advocate at CoreOS, who did a deep-dive into Kubernetes, as well as Greg Poirier, Factotum at Large at Opsee. However, despite the mindshare, Kelsey looked at containerization as just another tool in the DevOps chest, albeit one that is still in its early stages of adoption.

Prior to AutomaCon, I was convinced that containerization and Docker in particular would saturate the discussions. Yet when Kelsey and other speakers did a poll at the beginning of their talks asking how many attendees had tried out Docker, less than half of the crowd raised their hands. Even more telling, when asked how many use Docker in production, nearly all the hands fell. Clearly, even for this crowd of bleeding edge developers and DevOps engineers, containerization is still in its early days.

3. Security isn’t there (yet)

Joseph Damato, Founder of Package Cloud, discusses security at AutomaCon.

While the focus of the conference was on automation, presenters made it clear that security cannot be ignored or sacrificed in exchange for benefits of automation. In his presentation, Joseph Damato, Founder of Package Cloud, discussed the fundamental components required for securing automated infrastructure. He also reminded the audience that tools ubiquitous in DevOps are built upon many layers and that we must understand every one of these layers to have confidence in the security of our systems.

4. Death to cut & paste

Many solutions for managing infrastructure as code are in the early adoption phase, so documentation and best-practice guides are still thin on the ground. The steep learning curve of these solutions has led to an unprecedented amount of cut-and-paste configuration, and several speakers discussed the dangers of this practice. Relying on a cut-and-paste solution is a quick fix, but it prevents you from learning the details and nuances of a framework or tool before you consider the solution ready for production.

Luke Kanies, CEO and Founder of Puppet Labs, likened the current state of software automation to the evolution of automobile manufacturing. Kanies noted that dozens of companies in the early 20th century, along with the Ford Motor Company, implemented manufacturing optimizations, yet it was Henry Ford’s relentless focus on volume that transformed manufacturing and ultimately drove Ford’s success. While the manufacturing process became much faster, Ford didn’t sacrifice quality. The parallel to today’s automation tools is that we must not sacrifice quality purely for the sake of automation and scale.


Automation is not new, and in DevOps, there are tons of new ideas and tools coming out. Yet as we learned at AutomaCon, we must not leave behind the care and attention to detail as we move forward into more and more automated processes. We’re excited to take these learnings on the road at our next conference appearance. We’ll be at AWS re:Invent October 6-9 in Las Vegas, and we’d love to chat with you about automation and anything API-related. Sign up for Runscope free and catch us at re:Invent to discuss how to automate your API monitoring and testing processes.

Categories: api ecosystem, events, microservices
