The Simplicity of Serverless Architecture

November 29th, 2016, by Luke Herrington

Engineers find solving complex problems exciting. We get a thrill from the prospect of starting a new project, like explorers sailing into uncharted waters or searching for buried treasure. Sometimes, engineers are so compelled by this excitement that we over-engineer solutions, believing a complex solution is required to solve a complex problem.

The excitement is not wrong — just misdirected. As I’ve matured as an engineer, I’ve learned that complexity isn’t nearly as compelling as simplicity. I used to think that writing a lot of code was the mark of a skilled engineer. Now I know that deleting code is much more exciting. When energy is focused on the hard work of finding a simple solution for a complex problem, great software is born.

I’m So Excited… But Why?

Whenever a new technology “disrupts” the web industry, I ask myself, “Why am I so excited?” When Facebook announced React, I was captivated by the idea of re-rendering any time the application’s state changes. It was the novel yet simple solution that captured my imagination.

I’m excited about serverless architecture because it’s simple. Upload a function, set up some triggers, and you’re in business. It automatically scales to meet demand, and you’re charged only for the time your function is actually running. AWS Lambda, Amazon’s foray into serverless computing, works in essentially this way.

Multi-Channel Publishing In the Wild

In Four Kitchens’ work with multi-channel publishing, we often need to transform and distribute data to many different platforms. One service we wrote for NBC fetches data from a CMS, transforms it into the expected structure and format, and publishes it for Apple TV, Roku, and Amazon Fire.

When the time came for the migration, the first thing I did was delete around 200 lines of startup code. The code responsible for governing the child processes that ran concurrent stream generation was no longer needed: Lambda runs your functions concurrently, on demand, whenever they are triggered.

Next, I adapted the startup code to simply expose a few functions (one for each destination system) that call the underlying transformation code. Setting up the trigger mechanism was as simple as defining a rate for each function in a YAML file.

Finally, I set up a deployment process on Travis that installs dependencies, runs tests, packages up the code, and uploads it to Lambda.

Sidebar: Arguably, this application could have been managed by a cron task in the first place, but don’t let that distract from the fact that AWS Lambda can do much more than trigger functions on an interval. Functions can be triggered by:

  • dropping a file in an S3 bucket
  • hitting an HTTP endpoint on API Gateway
  • an SNS event – a push notification
  • a CloudWatch alarm – something went wrong in another application


Lambda functions are stateless. Each invocation runs in complete isolation from other functions. In short, this means no connection pooling, which can be a significant cost to performance. There are plans to improve container reuse in the future, but updates to Lambda have been slow in the past. It only recently received a Node 4.3.1 update.

Functions have a five-minute maximum execution time. If your function takes more than five minutes to run, you should consider breaking it into multiple functions that call each other.

Native modules must be compiled on a similar system. Earlier, when I said that your code is uploaded to Lambda, I literally meant your entire codebase, dependencies and all. If some of your dependencies require native compilation, running npm install on your Mac and uploading the result to Lambda probably won’t work. This is because Lambda and your machine run different OSes, each of which needs your native module compiled differently. My fellow Web Chef, Elliott Foster, wrote a great deep-dive post on this very topic. I solved the native module problem by running the installation and package steps on Travis CI, which currently runs on a system similar to Lambda’s.

Vendor lock-in. The double-edged sword of Lambda is that it integrates so easily with AWS’s other tech. Naturally, it doesn’t integrate with other cloud providers. It’s a brilliant business model.

Simplicity At Its Best

All told, I’m very excited about AWS Lambda and serverless architecture at large. The simplicity of uploading code and setting trigger events mitigates the pain of maintaining and provisioning microservices. Serverless architecture allows engineers to focus directly on the problem at hand, which is the fun of software development in the first place. What do you think about serverless architecture? Let me know in the comments, and make sure to look for me if you’re going to be in Austin at Node Interactive!