
Serverless Computing Platforms—It’s Not All Cloud

By Eric Bruno.


Eric Bruno is a guest contributor for the Runscope blog. If you're interested in sharing your knowledge with our readers, please fill out this short form and we'll get in touch with you.

When cloud computing came into vogue, some viewed it as simply running software in someone else’s data center, or the proliferation of virtualization. But as cloud has matured, it has become clear that cloud computing, public or private, is an industry-changing paradigm shift.

In a similar way, some view serverless computing as nothing more than a meta-definition for cloud computing. But like cloud computing, serverless means so much more.

For example, Platform-as-a-Service (PaaS) offerings are often too prescriptive and confining, and Infrastructure-as-a-Service (IaaS) can be both too generic and too limiting. The true serverless movement is more abstract, promoting computing in the small (think microservices), right-sized APIs, stateless components, and reliable units of processing that are similar to transactions, yet lighter weight and less restrictive.

Whether these components run on one server or 100, on your desk, or in the cloud should be of no concern. What’s important is that serverless computing enables you to focus more closely on solving a problem without spending time building servers, installing an OS, worrying about patches and upgrades, or dealing with network and security issues.

Serverless Success Defined by Platforms

To help you achieve the goals above, there are serverless platforms and frameworks available, both on-premises and in the cloud. Containers are a step in the right direction, but you still need to configure them, build specifically for them, and think about server deployments. A good serverless framework supports deployment to any environment (not just one that supports your choice of container), and to any cloud provider.

Serverless computing may indeed be served by a container implementation in the backend, but you and your developers shouldn’t need to know that. Let’s take a look at some platforms and frameworks available, in a variety of forms, to get a better idea of what serverless truly is.

Hoodie

I’m starting with Hoodie because it shows how far you can go with the serverless abstraction. Hoodie intends to be a complete backend service for front-end developers who don’t want to write server-side code. It supports the needs of most mobile apps, including data storage and retrieval, messaging, payment processing, and more. Using an API-based approach, Hoodie provides a front-end API with local storage support, so mobile apps work equally well offline and connected. A plug-in model allows you to extend it with your own APIs, and use those that others have developed.

Hoodie represents good API design in the serverless paradigm, where an API-centric or Function-as-a-Service (FaaS) solution should serve a purpose of its own, not simply act as a layer on top of a server. This includes considering various client application types and other details such as security, network usage and data round-trips. There’s no need to worry about deployment; Hoodie’s functions are already running in the cloud.
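
To give a sense of how little backend code this requires, here is a minimal sketch of the kind of front-end calls Hoodie’s client API supports, assuming a page served by a Hoodie backend that exposes the global hoodie object (the account values and stored document are purely illustrative):

// Sign up an account, then store a document. hoodie.store writes to local
// storage first and syncs to the backend when a connection is available,
// which is what makes offline-first mobile apps possible.
hoodie.account.signUp({ username: 'alice', password: 'secret' })
  .then(function () {
    return hoodie.store.add({ type: 'note', text: 'works offline too' });
  })
  .then(function (note) {
    console.log('saved', note);
  })
  .catch(function (error) {
    console.error('something went wrong', error);
  });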

The Serverless Framework

Aptly named, this open source framework (arguably the first available) was originally written for Amazon Web Services (AWS) but is now cloud-provider agnostic, and it provides everything cloud providers promise while abstracting away the cloud itself. You provide the application functions you need, written in a growing choice of languages (Node.js/JavaScript, Python, Java, Scala, and C#), and the Serverless Framework makes them available at scale. You also define the events that trigger those functions, and the framework handles cloud-based elasticity, so you pay only for what you need and use.

The benefits of serverless to you include rapid development of user-facing features, automatic cloud-based provisioning and deployment, elastic scale, freedom from vendor lock-in, and a FaaS approach that grows with the community of serverless developers.
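
As a sketch of what that looks like in practice (assuming the AWS provider and an HTTP event; the function and file names are illustrative), a function is simply an exported handler that serverless.yml maps to an event:

// handler.js -- deployed with "serverless deploy" once serverless.yml maps
// an event to it, for example:
//
//   functions:
//     hello:
//       handler: handler.hello
//       events:
//         - http:
//             path: hello
//             method: get
//
module.exports.hello = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from a serverless function' })
  };
};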

Apache OpenWhisk

Started by IBM and now an Apache project, OpenWhisk is also the basis for IBM’s serverless cloud platform (discussed below). You define actions, essentially an event-based processing model, in almost any modern language, link them to other actions and triggers (e.g. a database update, data from a sensor, a mobile app connection, and so on), and deploy to the cloud. In seconds, OpenWhisk has you integrated with widely used services such as Slack and YouTube through community plugins (called packages) deployed to the cloud. Simple rules and sequences allow you to easily orchestrate complex processing models (see Figure 1).

Figure 1 - Apache OpenWhisk events, triggers, rules, and actions orchestrate results (image courtesy of Apache).

The Apache OpenWhisk serverless framework scales by the request, providing fine-grained control over resource consumption (and cost) without exposing you to your cloud provider details. Not only does OpenWhisk isolate you from the servers it runs on, it abstracts the work of integration, which can be a real headache otherwise.
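
To get a feel for the programming model, here is a minimal sketch of a JavaScript action, along with the wsk CLI commands that wire it to a trigger (the action, trigger, and rule names are illustrative):

// hello.js -- OpenWhisk invokes main() with the trigger's parameters and
// expects an object (or a Promise of one) in return.
function main(params) {
  var name = params.name || 'world';
  return { greeting: 'Hello, ' + name };
}

// Deploy and wire it up with the wsk CLI:
//   wsk action create hello hello.js
//   wsk trigger create newMessage
//   wsk rule create helloOnMessage newMessage hello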

IBM Cloud Functions

Looking for an existing Apache OpenWhisk cloud deployment? IBM Cloud Functions, part of the Bluemix cloud service, is an excellent working example, with more than 1 million activations per day, according to RedMonk.

IBM provides nearly limitless scale with its Bluemix public cloud, and offers an attractive pricing model that breaks usage costs down to tenths of a second. It also provides additional language support (e.g. Go), Docker container support, tested and reliable integration with other event providers, schedulable tasks, stream processing, IoT device and service integration, and Big Data processing support. And because it is built on open source OpenWhisk, IBM Cloud Functions doesn’t tie you into IBM Bluemix cloud services.

AWS Lambda

To many, Lambda is the de facto standard in cloud-based serverless computing. Much as with uploading pictures to a photo-sharing service, you simply upload your code and AWS takes care of every other step. There’s nothing for you to manage. Your code scales elastically, and you pay only for actual usage.

Your uploaded code can be triggered by a variety of events you define. These can be mobile app requests, HTTP and REST-based requests, or events from other AWS services such as S3. As an example, Listing 1 shows a simple function, triggered by an HTTP PUT request, that creates a customer record in a DynamoDB table named “Customers”:

var AWS = require('aws-sdk');
var DOC = require('dynamodb-doc');
var dynamo = new DOC.DynamoDB();

exports.handler = function(event, context) {
  // Build the customer item from values passed in via the PUT body
  var customer = {
    Id: event.ID,
    notes: event.customerName
  };

  // Callback invoked when DynamoDB responds
  var customerMgr = function(err, data) {
    if (err) {
      console.log(err);
      context.fail('unable to save new customer');
    }
    else {
      console.log(data);
      context.done(null, data);
    }
  };

  // Write the new customer to the "Customers" table
  dynamo.putItem({TableName: "Customers", Item: customer}, customerMgr);
};

To deploy a Lambda function, you also create a deployment file that lists resources such as events and their handlers (functions). To test, trigger the events by reading or writing to the database, or by sending HTTP requests via curl or another tool. For more details, visit the AWS Lambda tutorials and examples.
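
For instance, here is a minimal sketch of invoking the deployed function directly for a quick test using the AWS SDK for Node.js (the function name, region, and payload values are assumptions):

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

// Invoke the function with a payload shaped like the PUT body it expects
lambda.invoke({
  FunctionName: 'createCustomer',   // hypothetical name of the deployed function
  Payload: JSON.stringify({ ID: '42', customerName: 'Acme Corp' })
}, function (err, data) {
  if (err) console.error(err);
  else console.log('Lambda returned:', data.Payload);
});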

Other Serverless Cloud Services

Google Cloud Functions and Microsoft Azure Functions are two additional FaaS public cloud services, with features similar to IBM Bluemix and AWS Lambda. Google also includes cloud-based publish/subscribe messaging and specific services for mobile application support through Firebase, while Microsoft supports additional languages (e.g. C#) and includes specific DevOps integrations. The choice comes down to language and tool support, as well as the integration required with other cloud services (e.g. Google Cloud AI, or Azure Cosmos DB) for your specific use cases.
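
The programming model is much the same across them. For example, a minimal sketch of a Google Cloud Functions HTTP function in Node.js (the exported name and response are illustrative) looks like this:

// An HTTP-triggered Cloud Function: Google passes Express-style request and
// response objects, so there is no server or routing code to write.
exports.helloHttp = (req, res) => {
  res.status(200).send('Hello from Cloud Functions');
};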

Fn Project

The Fn project, backed by Oracle, is an open source, container-native serverless platform that you can run anywhere: in the public cloud, regardless of provider, or even on-premises. It claims support for every programming language, and even offers built-in compatibility with AWS Lambda functions. Fn works by automatically packaging your functions in a container and deploying them anywhere Docker containers are supported. That can be your development laptop, a data center, the public cloud, or a hybrid scenario.
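
As a rough sketch, assuming Fn’s Node.js FDK (@fnproject/fdk) and illustrative app and function names, a function and its deployment look something like this:

// func.js -- the Fn FDK passes the request input to your handler and sends
// back whatever it returns; Fn builds and runs the container for you.
const fdk = require('@fnproject/fdk');

fdk.handle(function (input) {
  const name = (input && input.name) ? input.name : 'World';
  return { message: 'Hello ' + name };
});

// Typical workflow with the fn CLI (commands shown for illustration):
//   fn init --runtime node hello
//   fn deploy --app myapp
//   fn invoke myapp hello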

OpenFaaS and Kubeless

For a more open FaaS platform, OpenFaaS is an independent and open source alternative. It works with Kubernetes and Docker, supports cloud-based clustering, and helps deploy your functions just about anywhere: a Raspberry Pi, a laptop, on-premises hardware, or any cloud service that supports Docker or Kubernetes. The result is a scalable, fault-tolerant, event-driven serverless platform with a focus on choice, performance, and simplicity.

Kubeless is an open source FaaS platform, backed by Bitnami, with support for specific cloud vendors. It uses a custom resource definition API that extends Kubernetes, leveraging existing Kubernetes container deployment and monitoring tools.
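
A rough sketch of the Kubeless model, based on its documented Node.js runtime (the runtime version, file name, and handler name below are assumptions), looks like this:

// handler.js -- Kubeless calls the named export with the event and context
module.exports = {
  hello: function (event, context) {
    // event.data carries the request payload
    return 'Hello ' + JSON.stringify(event.data);
  }
};

// Deployed as a Kubernetes custom resource via the kubeless CLI, e.g.:
//   kubeless function deploy hello --runtime nodejs8 \
//     --from-file handler.js --handler handler.hello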

Fission.io

Another open source serverless alternative is Fission.io, which specifically supports Kubernetes for deployment. With Fission, you write your functions in Python, Node.js, Go, C#, or PHP, then map them to triggers (e.g. HTTP requests), and deploy to your location of choice via one command. All of the container definition and deployment specifics are handled for you. There’s no support for Java yet, but additional language support is on its way.
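
For example, here is a minimal sketch of a Fission function using its Node.js environment, with illustrative CLI commands to create the function and an HTTP trigger:

// hello.js -- the Fission Node.js environment expects an exported function
// that receives a context and returns an HTTP-style response object.
module.exports = async function (context) {
  return {
    status: 200,
    body: 'Hello from Fission\n'
  };
};

// Illustrative deployment with the fission CLI:
//   fission function create --name hello --env nodejs --code hello.js
//   fission route create --method GET --url /hello --function hello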

One handy use case for Fission is ChatOps, which works via webhooks. For instance, as shown in Figure 2, a third-party service such as Slack is integrated via webhooks, triggered by keyword-based events that you define. The Fission backend then routes those events (words or commands entered in Slack) to your functions to act on.

 Figure 2 - Fission webhook integration enables Slack ChatOps and other third-party integrations.

Serverless Orchestration Frameworks

Backed by Pivotal and used within the Pivotal Function Service, riff is an open source serverless platform for executing functions triggered by events. It can run on-premises or in the public cloud, with support for Kubernetes and a wide range of languages. You can deploy functions with riff stand-alone or on any cloud provider, and it offers unique features such as event streaming, which simplifies workflow orchestration. Other features include support for event scheduling, IoT deployments, web events, and machine learning. Complex event streaming scenarios can be wired up quickly, with built-in data transformations to ease integration.

Spotinst supports public cloud and container deployments, with autonomous orchestration that promises to save both execution-based costs and latency when compared with other serverless platforms and services.

Webtask by Auth0 is another platform alternative, but this one comes with its own web-based editor and function log monitor, allowing you to do everything from a browser. Auth0 uses Webtask internally for its identity management and single sign-on solutions, allowing easy integration and extension.

Most Important: How to Glue it Together

It helps not to have to worry about servers and networks, but the reality is that, with serverless, the same problems still need to be solved:

  • What if my service stops (because the server stops)?

  • How many requests per second do I need to provision (and how many servers are needed to handle that)?

  • How do I monitor my API functions and the microservices that contain them (and the servers they run on)?

  • How do I control access to APIs (and their servers)?

  • How do I keep track of API versions, dependencies (e.g. database version), other cloud software (e.g. the underlying Node.js version), and server OS patch levels?

Unfortunately, traditional monitoring and administration solutions focus on servers, or on classifications of software (e.g. application servers or databases). Given that microservices and functions can be hard to manage at scale, you need to monitor code at a more granular level: API monitoring with transaction-level reporting, and validation of payload integrity across API calls (and possibly across geographies).

In fact, solutions such as Runscope run much like functions themselves, with schedulable monitors that provide a real-time view into API performance, data validation, and issues with third-party integrations. Proper serverless monitoring can help guide how you distribute your APIs and microservices, and then tune workflows to route requests accordingly (and transparently to the developer).
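
To make the idea concrete, the sketch below shows the kind of scheduled check such a monitor performs, written as a plain Node.js function; the endpoint, expected field, and report shape are assumptions for illustration, not Runscope’s own API:

var https = require('https');

// A scheduled monitor: call an API, validate the payload, and report
// health plus latency so issues surface before users hit them.
exports.checkCustomerApi = function (event, context, callback) {
  var started = Date.now();
  https.get('https://api.example.com/customers/42', function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var elapsed = Date.now() - started;
      var payload = {};
      try { payload = JSON.parse(body); } catch (e) { /* invalid JSON counts as unhealthy */ }
      var healthy = res.statusCode === 200 && payload.Id === '42';
      callback(null, { healthy: healthy, latencyMs: elapsed });
    });
  }).on('error', callback);
};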

