Serverless without New Software
It is generally considered a best practice to use dockerized microservices in your architecture, in order to decouple concerns where possible, make testing easier, and avoid ending up with monolithic software.
That is what I was doing until recently: my team and I were building microservices everywhere in our startup, basically node/ExpressJS services that exposed a few API endpoints, did something with some data, and communicated over the network.
Thing is, it became quite repetitive: we kept writing the same ExpressJS boilerplate again and again.
Moreover, I wanted our software engineers to focus on core business logic, not infrastructure. Ideally, I could just ask them to write node/python functions with input/output, and that would be it.
This sounds a lot like “serverless”, but we cannot use cloud-hosted serverless services, such as AWS Lambda, for business reasons. I didn’t want to use open source serverless frameworks (OpenFaaS, etc.) either, and have to swallow all the hardware abstractions that come with them, since we don’t need those.
We already had a Celery task queue and workers up and running (with docker-compose), executing Python tasks. We simply decided to write a generic “docker run” Python task (we already run worker containers in privileged mode for CI: docker in docker, via socket binding with a container volume mapping*). That way, software engineers can now just write dockerized functions, and the task queue will run them wherever it can.
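For illustration, here is a minimal sketch of what such a generic task could look like. The app name, broker URL, and task parameters are placeholders of mine, not our actual code:

```python
# tasks.py: a minimal sketch of a generic "docker run" Celery task.
import subprocess

from celery import Celery

app = Celery("runner", broker="amqp://guest:guest@rabbitmq:5672//")

@app.task
def docker_run(image, command=None, env=None):
    """Run a dockerized function and return whatever it printed to stdout."""
    cmd = ["docker", "run", "--rm"]
    for key, value in (env or {}).items():
        cmd += ["-e", f"{key}={value}"]
    cmd.append(image)
    if command:
        cmd += command
    # The worker talks to the host Docker daemon through the bound socket,
    # so the container actually runs on whichever server hosts this worker.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```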
And it scales too: we can spawn more dockerized Celery workers, even on multiple servers, using a VPN for RabbitMQ/worker connectivity. Moreover, we use Celery Flower to manage our task queue; it exposes an HTTP API to launch and monitor tasks.
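To give an idea, launching and monitoring the sketched docker_run task through Flower’s HTTP API looks roughly like this (the host, port, task name, and image are assumptions for the sake of the example):

```python
import requests

FLOWER = "http://flower:5555"  # assumed host/port of the Flower service

# Queue a run of the (hypothetical) tasks.docker_run task sketched above.
resp = requests.post(
    f"{FLOWER}/api/task/async-apply/tasks.docker_run",
    json={"args": ["my-registry/my-function:latest"]},
)
task_id = resp.json()["task-id"]

# Later, poll Flower for the task's state and return value.
info = requests.get(f"{FLOWER}/api/task/result/{task_id}").json()
print(info["state"], info.get("result"))
```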
With this setup (one docker-compose.yml with RabbitMQ + Flower + a few Celery workers whose tasks can perform docker runs in privileged mode), you get the advantages of the serverless mindset (developers just write functions, which can then be run on the infrastructure via the Flower API), with minimal infrastructure work (just set up Celery/Flower, plus a VPN if you want to distribute workers over multiple servers).
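For reference, a stripped-down version of such a docker-compose.yml could look like the sketch below. Service names, images, and credentials are placeholders, not our actual configuration:

```yaml
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3

  flower:
    image: mher/flower
    environment:
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
    ports:
      - "5555:5555"
    depends_on:
      - rabbitmq

  worker:
    build: .                        # an image containing tasks.py + the Docker CLI
    command: celery -A tasks worker --loglevel=info
    privileged: true
    volumes:
      # socket binding: lets the worker start sibling containers on the host
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - rabbitmq
```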
To sum it up, when a new trend (serverless, for example) gains traction, I always try to first analyse what the underlying core ideas are, whether they are relevant to our use case, and how to apply them at our scale, without jumping too fast on the cool kidz’ shiny pieces of software that come along with new trends.
* Docker in docker is sometimes considered bad practice, but since almost all CI systems that use dockerized workers (Jenkins, GitLab CI…) do it, we consider it an acceptable workaround.
Originally published at fruty.io on May 16, 2018.