Chris Riley is a guest contributor for the Runscope blog.
It’s hard to have a conversation about DevOps these days without someone mentioning serverless computing. Alongside Docker and Kubernetes, serverless computing platforms like AWS Lambda and Azure Functions are providing valuable new ways to deploy applications.
While serverless offers many important benefits, it’s not a cure-all for every type of application deployment woe. Smart developers know that the latest, greatest technologies are not always the best fit for every type of workload.
So, when should you not use serverless? Let's take a look.
What Is Serverless?
First, a brief explanation of what serverless computing means.
Simply put, serverless is an application deployment solution that eliminates the need to maintain a complete host environment for an application. Instead of having to set up a virtual machine (or even a Docker container) in order to execute code, DevOps teams can simply upload individual functions to a serverless environment and execute them on demand.
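To make that concrete, here is a minimal sketch of what one of those individual functions looks like. The handler signature follows AWS Lambda's Python convention; the event fields and the greeting logic are illustrative, not part of any real application.

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: the platform invokes this handler
    on demand, so there is no server process for you to manage.
    The "name" field in the event is an illustrative assumption."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You upload just this function (plus its dependencies) to the platform, and the platform handles provisioning, execution, and teardown.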
This sort of solution has existed since the mid-2000s, but it was not until 2014, with the debut of AWS Lambda, that it became popular and widely known. (In that sense, serverless is similar to containers; containers or container-like technology have existed for decades in the form of chroot, FreeBSD jails and LXC, but it was only with Docker’s release in 2013 that everyone started talking about containers.)
Serverless computing provides several useful benefits:
Code can be executed almost instantly. You don’t have to wait for a virtual machine or container to start.
On most serverless platforms, you pay only for the time that your code is running. In this way, serverless costs less than having to pay for a virtual server that runs constantly, even if the applications it hosts are not constantly in use.
Serverless code can scale massively. Because you don't have to wait for an environment to start up before launching a serverless function, or wait to provision more virtual servers when your workload grows, the number of requests you can handle with serverless code without a delay is virtually unlimited.
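The cost benefit is easy to sketch with back-of-the-envelope arithmetic. The price figures below are placeholder assumptions for illustration, not quoted rates from any provider; check current pricing before relying on numbers like these.

```python
# Illustrative placeholder prices -- NOT actual AWS or Azure rates.
PRICE_PER_GB_SECOND = 0.0000167   # assumed serverless compute price
VM_MONTHLY_COST = 35.00           # assumed small always-on VM price

invocations_per_month = 1_000_000
avg_duration_s = 0.2              # each invocation runs ~200 ms
memory_gb = 0.128                 # 128 MB allocated per invocation

# With serverless you pay only for GB-seconds actually consumed.
serverless_cost = (invocations_per_month * avg_duration_s
                   * memory_gb * PRICE_PER_GB_SECOND)
print(f"Serverless: ${serverless_cost:.2f}/month "
      f"vs always-on VM: ${VM_MONTHLY_COST:.2f}/month")
```

Under these assumptions a sporadic workload costs well under a dollar a month on serverless, versus a fixed monthly bill for a server that sits mostly idle.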
Reasons Not to Use Serverless
Yet while serverless computing can be advantageous for some use cases, there are plenty of good reasons to consider not using it.
Your Workloads Are Constant
Serverless is ideal for deploying workloads that fluctuate rapidly in volume. If you have a workload that is relatively constant, however—like a web application whose traffic does not change by magnitudes from hour to hour—you’ll gain little from serverless.
So, before moving code to a serverless platform just because everyone is talking about it, consider whether the massive scalability and on-demand execution features of serverless will actually help you.
You Fear Vendor Lock-In
Most serverless platforms are tied to particular cloud-computing vendors. Even those that are designed to be pure-play and open source, like OpenWhisk, are not compatible with each other.
A day may come when community standards arise around serverless computing and do for serverless what the Open Container Initiative (OCI) has done for containers. But that day has not yet arrived.
While it’s possible to migrate serverless workloads from one platform to another, doing so requires significant manual effort. You can’t use Lambda one day and switch to Azure Functions the next.
What this means is that, if you use serverless today, you should expect to be bound to whichever particular platform you use for the foreseeable future. For organizations that loathe lock-in, this could be a compelling reason to steer clear of serverless platforms.
You Need Advanced Monitoring
The relative novelty of serverless as a widely used deployment solution also means that the ecosystem of monitoring and security tools for serverless functions remains immature.
Some vendor tools exist that claim to be able to support serverless monitoring, and more will likely appear over time. But for now, the feature sets remain relatively basic.
If you want robust monitoring solutions for serverless environments, now may not be the time to start using serverless.
You Have Long-Running Functions
One of the main limitations of serverless solutions like Lambda is that each serverless code instance can run only for a limited amount of time (five minutes in the case of Lambda, at the time of writing).
In the case of most workloads that are good candidates for serverless, this time is more than sufficient. But if you have a workload that is delayed by, for example, network bandwidth limitations, it may not be able to complete in time. You can work around this by chaining serverless instances together, but that’s a clumsy solution, and you’d be better off in most cases by simply sticking to other deployment solutions.
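A common shape for that chaining workaround is a handler that watches the clock and checkpoints its progress so a follow-up invocation can resume. The sketch below uses Lambda's real `context.get_remaining_time_in_millis()` API; the event fields, safety margin, and `process` function are illustrative assumptions.

```python
def process(item):
    """Hypothetical per-item work; stands in for your real logic."""
    return item * 2

def handler(event, context):
    """Process items until the function nears its time limit, then
    return a checkpoint so the next invocation can pick up the
    cursor. Workable, but clumsy compared to a long-lived server."""
    items = event.get("items", [])
    start = event.get("cursor", 0)
    SAFETY_MARGIN_MS = 10_000  # stop ~10s before the hard timeout

    for i in range(start, len(items)):
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            # Out of time: hand the cursor to the next invocation.
            return {"done": False, "cursor": i}
        process(items[i])
    return {"done": True, "cursor": len(items)}
```

Each extra link in the chain adds invocation overhead, state-passing plumbing, and failure modes, which is why sticking with a conventional deployment is usually simpler for long-running work.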
You Use an Unsupported Language
Not every kind of function can be moved to a serverless platform. Most serverless environments support only code written in specific languages. In some cases, you can use wrappers or other tricks to run other types of code, but in general, your options are limited to a core set of popular programming languages.
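One of those wrapper tricks is to write a thin handler in a supported language that shells out to an executable written in an unsupported one, bundled into the deployment package. The sketch below assumes a hypothetical bundled binary named `./my-binary`; the handler itself uses only standard-library calls.

```python
import subprocess

BINARY = "./my-binary"  # hypothetical executable bundled with the function

def handler(event, context, binary=BINARY):
    """Wrapper trick: a handler in a supported language (Python here)
    invokes a bundled executable written in an unsupported language
    and relays its output back to the caller."""
    result = subprocess.run(
        [binary, event.get("arg", "")],
        capture_output=True, text=True, check=True,
    )
    return {"output": result.stdout.strip()}
```

This works, but it adds process-spawn overhead on every invocation and requires the binary to be compiled for the platform's execution environment, so it is a stopgap rather than first-class support.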
If you choose to write a given function in a language that is not supported by your serverless platform of choice, then you simply can’t use serverless computing for that particular workload.
Serverless computing is a great thing. The goal of this article is not to knock it.
Instead, the point here is that just because serverless is useful for many types of workloads, it’s not a good fit for all of them. Before jumping on the serverless bandwagon, step back and evaluate whether your workload will actually benefit from the features that serverless enables.