
Migrating to DynamoDB, Part 2: A Zero Downtime Approach With GSIs

By Garrett Heel.

This post is the second in a two-part series about migrating to DynamoDB by Runscope Engineer Garrett Heel (see Part 1). You can also catch Principal Infrastructure Engineer Ryan Park at the AWS Pop-up Loft today, January 26, to learn more about our migration. Note: This event has passed.

We recently went over how we made a sizable migration to DynamoDB, encountering the “hot partition” problem that taught us the importance of understanding partitions when designing a schema. To recap, our customers use Runscope to run a wide variety of API tests, and we initially stored the results of all those tests in a PostgreSQL database that we managed on EC2. Needless to say, we needed to scale, and we chose to migrate to DynamoDB.

Here, we’ll show how we implemented the long-term solution to this problem by changing to a truly distributed partition key.

Due to a combination of product changes and growth, we reached a point where we were being heavily throttled due to writing to some test_ids far more than others (see more in Part 1). In order to change the key schema we’d need to create a new table, test_results_v2, and do yet another data migration.

Key schema changes required on our table to avoid hot partitions.
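To make the shape of the change concrete, here is a minimal sketch of how the new table might be defined. The key and attribute names are illustrative assumptions (the real schema is in the diagram above), and the helper just builds the arguments a boto3 `create_table` call would take:

```python
# Sketch of defining the new table with a distributed partition key.
# Attribute names are illustrative, not Runscope's exact schema.
def build_test_results_v2_spec():
    """Build create_table arguments for the new table.

    Using the (unique) result_id as the partition key spreads writes
    evenly across partitions, unlike the old test_id key, which funneled
    all of a busy test's writes into one partition.
    """
    return {
        "TableName": "test_results_v2",
        "KeySchema": [
            {"AttributeName": "result_id", "KeyType": "HASH"},
        ],
        "AttributeDefinitions": [
            {"AttributeName": "result_id", "AttributeType": "S"},
        ],
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 1000,
            "WriteCapacityUnits": 1000,
        },
    }

# With boto3 this would be applied as:
#   import boto3
#   boto3.client("dynamodb").create_table(**build_test_results_v2_spec())
```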

A Hybrid Migration Approach

We knew from past experience with taking backups that operations touching our entire dataset would take days, not minutes or hours. Therefore, we decided early on to do an in-place migration in the interest of zero downtime.

To achieve this, we employed a hybrid approach between dual-writing at the application layer and a traditional backup/restore.

Approach to changing a DynamoDB key schema without user impact.

Dual-writing involved sending writes to both the new and the old tables but continuing to read from the old table. This allowed us to seamlessly switch reads over to the new table, when it was ready, without user impact. With dual-writing in place, the only thing left to do was copy the data into the new table. Simple, right?
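The dual-write pattern can be sketched in a few lines. This is a simplified illustration, not our production code: the table handles are assumed to behave like boto3 `Table` resources, and the error handling is deliberately minimal.

```python
# Dual-write sketch: every write goes to both tables; reads stay on the
# old table until the new one is fully backfilled and verified.
# `old_table` / `new_table` are assumed to expose put_item/get_item
# like boto3 Table resources.

def save_result(old_table, new_table, item):
    """Write a test result to both tables.

    The new-table write is best-effort here; a production version would
    queue failures for later reconciliation rather than fail the request.
    """
    old_table.put_item(Item=item)       # old table is the source of truth
    try:
        new_table.put_item(Item=item)   # keeps test_results_v2 in sync
    except Exception:
        pass  # log and reconcile later; don't fail the user's write

def load_result(old_table, key):
    """Reads still come from the old table until cutover."""
    return old_table.get_item(Key=key).get("Item")
```

Once the copy finishes and the tables are verified to match, cutting over is just pointing `load_result` at the new table.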

Backing Up & Restoring DynamoDB Tables

One of the ways that AWS recommends copying data between DynamoDB tables is via the DynamoDB replication template in AWS Data Pipeline, so we gave that a shot first. The initial attraction was that, in theory, you can just plug in a source and destination table with a few parameters and presto: easy data replication. However, we eventually abandoned this approach after running into a slew of issues configuring the pipeline and obtaining a reasonable throughput.

Instead, we repurposed an internal Python project written to back up and restore DynamoDB tables. This project does Scan operations in parallel and writes out the resulting segments to S3. Notably, when scanning a single segment in a thread, we often saw a large number of records with the same test_id, indicating that a single call to the Scan operation often returns results from a single partition, rather than distributing its workload across all partitions. Keep this in mind throughout the rest of this post, as it has a few important ramifications.
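The parallel-scan mechanism is DynamoDB's standard `Segment`/`TotalSegments` feature. A minimal sketch of how such a tool works (this is not Runscope's internal project, and the S3 upload step is replaced by returning the items):

```python
from concurrent.futures import ThreadPoolExecutor

def scan_segment(table, segment, total_segments):
    """Scan one segment of the table, following pagination.

    Each worker gets its own Segment number; DynamoDB divides the
    keyspace among segments so workers never overlap.
    """
    items = []
    kwargs = {"Segment": segment, "TotalSegments": total_segments}
    while True:
        page = table.scan(**kwargs)
        items.extend(page.get("Items", []))
        last_key = page.get("LastEvaluatedKey")
        if not last_key:
            return items
        kwargs["ExclusiveStartKey"] = last_key  # continue from last page

def parallel_scan(table, total_segments):
    """Run all segments concurrently.

    Returns one list of items per segment; a real backup tool would
    stream each list to S3 as a backup segment instead.
    """
    with ThreadPoolExecutor(max_workers=total_segments) as pool:
        futures = [pool.submit(scan_segment, table, s, total_segments)
                   for s in range(total_segments)]
        return [f.result() for f in futures]
```

Because a segment maps onto a contiguous slice of the keyspace, each output file tends to hold runs of records from the same partition, which is exactly the clustering noted above.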

The backup and restore process.

The backup went off without a hitch and took just under a day to complete. It’s worth noting that, because of the original problematic partition key, we had to massively over-provision read throughput on the source table to avoid too much throttling. Luckily, cost didn’t end up being a major issue due to the short timeframe and the fact that eventually consistent read units are relatively cheap (think ~$10 to back up our 400GB table). The next step was to restore the data into the new and improved table; however, this was not as straightforward due to our use of Global Secondary Indexes.

Impact of Global Secondary Indexes

We rely on a few Global Secondary Indexes (GSIs) on this table to search and filter test runs. Ultimately we found that it was much safer to delete these GSIs before doing the restore. Our issue centered around the fact that some GSIs use test_id as their partition key (out of necessity), meaning that they can also suffer from hot partitions.

We saw this issue come up when first attempting to restore backup segments from S3. Remember the note earlier regarding records within a segment having the same partition key? It turns out that restoring these quickly triggers the original hot partition problem by causing a ton of write throttling—to GSIs this time. Furthermore, a GSI being throttled causes the write as a whole to be rejected, resulting in all kinds of unwanted complications.

It's important to remember that hot partitions can affect GSIs too, including during a restore.

Creating the GSIs after restoring means they are automatically backfilled with the required data. During the backfill, any throttling that occurs is handled and retried in the background while the index remains in the CREATING state, so normal traffic to the table is not affected by the restore.
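Adding a GSI to an existing table goes through `UpdateTable` with a `GlobalSecondaryIndexUpdates` entry. Here is a hedged sketch; the index name, key, and throughput figures are made-up placeholders, not our actual configuration:

```python
# Sketch of adding a GSI after the restore, so the backfill (and any
# throttling it causes) happens in the background on a live table.
# Index/attribute names and capacity numbers are illustrative only.

def build_gsi_update(index_name="test_id-index", hash_key="test_id"):
    """Build update_table arguments that create one GSI.

    DynamoDB allows only one GSI to be created at a time, so this must
    be repeated, and waited on, for each index.
    """
    return {
        "TableName": "test_results_v2",
        "AttributeDefinitions": [
            {"AttributeName": hash_key, "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexUpdates": [{
            "Create": {
                "IndexName": index_name,
                "KeySchema": [
                    {"AttributeName": hash_key, "KeyType": "HASH"},
                ],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 500,
                    "WriteCapacityUnits": 500,
                },
            }
        }],
    }

# Applied with boto3 as:
#   boto3.client("dynamodb").update_table(**build_gsi_update())
# then poll describe_table until the index's IndexStatus leaves CREATING.
```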

The backfill approach worked, but unfortunately it took a very long time for a few reasons:

  1. Only one GSI can be created (and backfilled) at a time

  2. The backfill caused hot partitions, slowing everything down significantly

My guess as to why we still saw hot partitions during the backfill is that DynamoDB processes records for the index in the same order they were inserted. So while we’re definitely in a better position by having throttling occur in the background, rather than affecting the live table, it’s still less than ideal. Remember that time isn’t the only penalty here: write units cost 10 times as much as read units.

Aside from dropping the GSIs before the restore, the main thing I’d do differently next time would be to shuffle the data between the backup segments before restoring. Shuffling takes a little effort in this case because the data (~400GB) doesn’t fit in memory, but it would have made a significant difference in avoiding write throttling during the backfill, saving both time and money.
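The shuffle itself is simple in principle. A minimal in-memory sketch of the idea (at our scale each record would instead be appended to a randomly chosen spill file on disk, then each file shuffled independently):

```python
import random

def shuffle_across_segments(segments, seed=None):
    """Redistribute records randomly across the same number of segments.

    Records sharing one hot partition key end up spread over many output
    segments, so the restore's writes hit partitions evenly instead of
    hammering one partition with a long run of identical keys.
    """
    rng = random.Random(seed)
    out = [[] for _ in segments]
    for seg in segments:
        for record in seg:
            # Assign each record to a random output segment.
            out[rng.randrange(len(out))].append(record)
    for seg in out:
        rng.shuffle(seg)  # also break up ordering within each segment
    return out
```

This is the standard external-shuffle trick: the random bucket assignment only needs one record in memory at a time, and each resulting bucket is small enough to shuffle on its own.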

Post-Migration Savings & Growth

It’s now been a few months since our last migration and things have been running pretty smoothly with the schema improvements. We were able to save more than $1,000 a month in provisioned throughput by not needing to over-provision for hot partitions and we’re now in a much better position to grow.

It’s safe to say that we learned a bunch of useful lessons from this undertaking that will help us make better decisions when using DynamoDB in the future. Understanding partition behavior is absolutely crucial to success with DynamoDB and picking the right schema (including indexes) is something that should be given a lot of thought.

If you'd like to learn more about our migration, feel free to leave a question in the comments section below, and attend the AWS Pop-up Loft in San Francisco tonight to hear our story in person. [Note: This event has passed.]

