In today’s distributed computing environment, continuous integration and delivery (CI/CD) can be challenging given the multitude of dependencies that have to be managed and replicated. How does one test a system end to end when it uses multiple backend databases, runs on different operating systems, and is hosted with different cloud computing vendors?
This is where container technologies come in, and one of the most widely used today is Docker. A Docker container is an immutable, composable unit that bundles all the libraries and utilities your application needs to run anywhere, on any operating system and with any cloud provider.
It packages applications, from load balancers to web apps to databases, into shareable images that can be distributed and are guaranteed to run anywhere.
In this post, we are going to develop a continuous integration workflow using Docker. In our simplified but not entirely unrealistic stack, we have a Flask web application that allows users to create new blog posts and view existing ones. As users write a post, they can either save a draft or publish the post. Our web app is backed by Postgres, where the contents of published posts are stored. To alleviate load on the database, when a user is merely saving a draft the contents get stored in Redis, a key-value database commonly used as a cache. In our complete setup we have three distinct components: a webapp, Postgres, and Redis.
In order to write end-to-end integration tests, we would normally have to spin up separate instances of all three components in a CI environment. The problem with this approach is that it’s quite a lot of extra infrastructure and dependencies to manage and keep working.
A better approach is to leverage lightweight Docker containers and use Docker Compose to spin up and tear down the integration infrastructure on demand. We will also use Jenkins as a pipeline and build tool to create the local dev -> continuous integration -> deploy workflow.
The overall architecture of the workflow looks like this:
In this workflow, we develop our application in a local development environment and use local Redis and Postgres as usual, but when we are ready to do integration testing, we kick off a Jenkins pipeline that does the following:
check out the latest code base on master from GitHub
use Docker Compose to spin up the webapp, Redis, and Postgres in their own containers, automatically creating the networking amongst the containers
run integration tests and if successful
build the Docker image and push it to a container registry service, e.g. AWS Elastic Container Registry or Docker Hub
deploy production containers with the latest Docker image
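The steps above can be sketched as the ordered shell commands a pipeline runner would execute; this is a hypothetical outline, not the actual Jenkins configuration, and the repository URL, compose file path, and image tag are illustrative:

```python
# Hypothetical sketch of the CI pipeline as ordered shell commands.
# The repo URL, compose file path, and image tag are illustrative.
PIPELINE_STEPS = [
    ["git", "clone", "--branch", "master",
     "https://github.com/example/docker-ci-demo.git"],
    ["docker-compose", "-f", "docker-ci-demo/docker-compose-ci-test.yaml",
     "up", "-d"],
    ["docker", "wait", "docker-ci-demo_integration_test_1"],
    ["docker", "build", "-t", "flaskapp", "docker-ci-demo"],
    ["docker", "push", "flaskapp"],
]

def render(steps):
    # Render each command as the shell line a Jenkins "sh" step would run.
    return [" ".join(step) for step in steps]
```

Each step only runs if the previous one succeeded, which is exactly the gating behavior a Jenkins pipeline gives us for free.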
Now let’s look at the details.
First off, our Flask app is pretty boilerplate and stores blog posts in either Redis (drafts) or Postgres (published posts). The code for the route /save/ is here:
@app.route('/save/', methods=['GET', 'POST'])
def save():
    form = BlogPostForm()
    if form.validate_on_submit():
        if request.form['action'] == 'draft':
            print('Saving to redis')
            redis_client.set(form.title.data, form.body.data)
        else:
            print('Saving to postgres')
            model = Post()
            model.title = form.title.data
            model.body = form.body.data
            model.date = form.date.data
            model.author = form.author.data
            db.session.add(model)
            db.session.commit()
    return render_template('new.html', form=form)
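The read side of drafts is symmetric: the body comes back out of Redis by title. A minimal sketch, assuming the same redis_client as in save() above; load_draft and FakeRedis are hypothetical helpers for illustration, not part of the original app:

```python
# Hypothetical helper: fetch a draft saved by the /save/ route.
# Assumes a redis-py-style client; not part of the original app code.
def load_draft(redis_client, title):
    body = redis_client.get(title)
    if body is None:
        return None  # no draft saved under this title
    # redis-py returns bytes by default; decode for display
    return body.decode("utf-8") if isinstance(body, bytes) else body

# Illustrative stand-in for redis_client so the sketch runs anywhere.
class FakeRedis:
    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data.get(key)
```

For example, `load_draft(FakeRedis({"my title": b"my body"}), "my title")` returns the decoded draft body, and a missing title returns None.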
Then we create a Dockerfile that tells Docker how to build the container image:
FROM python:2.7

# Install packages
RUN set -ex; \
    apt-get update; \
    apt-get -y -qq install postgresql

ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
The steps to build the image are:
pull the pre-defined base Docker image python:2.7, which has the basic libraries and dependencies installed to run a Python 2.7 application
install extra system packages like postgresql so our app can work with the Postgres database
copy our code root directory into the /app directory in the container (ADD copies files at build time)
pip install the Python libraries our application needs, as defined in requirements.txt
Locally, we can build the image and run the container:
docker build -t flaskapp .
docker run flaskapp
But that’s not very interesting, as it just runs the webapp without using either Postgres or Redis. What’s next?
The magic ingredient here is Docker Compose, which orchestrates and runs multiple containers as a single unit. In this example, our Flask web app, Redis, and Postgres all run in their own containers and have networking to talk to each other. We define all of this in a Docker Compose definition file, which uses the YAML format:
version: '3'
services:
  integration_test:
    build:
      context: ./
      dockerfile: Dockerfile.test
    volumes:
      - .:/app
    depends_on:
      - web_app
    links:
      - web_app
    environment:
      - FLASK_ENV=docker
    command: ["bash", "-c", "sleep 10 && py.test"]
  web_app:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "5000"
    depends_on:
      - redis
      - postgres
    links:
      - redis
      - postgres
    environment:
      - FLASK_ENV=docker
      - REDIS_HOST=redis
    command: ["python", "run_test_mode.py"]
  redis:
    image: redis:latest
    ports:
      - "6379"
  postgres:
    image: postgres:latest
    ports:
      - "5432"
It can look a little intimidating at first, but here we have composed four Docker containers:
postgres: we use the latest postgres Docker image to build the container. It listens on port 5432 inside the container
redis: we also use the latest redis Docker image. It listens on port 6379 inside the container
web_app: this is our Flask app container, whose definition comes from Dockerfile. Note that web_app links to both the redis and postgres containers so it can access both. It listens on port 5000 inside the container
integration_test: this is our test suite. The container definition comes from Dockerfile.test, which looks very similar to the webapp’s Dockerfile. Note that the integration_test container links to web_app so it can access the REST API of the app
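Inside the Compose network, each service is reachable by its service name, so the test suite can target the webapp by hostname rather than by IP address. As a hedged sketch (the helper name and WEBAPP_HOST environment variable are illustrative, not from the repo), a py.test case might build its target URL like this:

```python
import os

def webapp_url(path):
    # Inside the Compose network each service resolves by its service name,
    # so the webapp is reachable at http://web_app:5000 from integration_test.
    host = os.environ.get("WEBAPP_HOST", "web_app")
    return "http://%s:5000%s" % (host, path)

def test_save_endpoint_url():
    # A real test would POST form data to this URL (e.g. with requests);
    # here we only show how the service name becomes the hostname.
    assert webapp_url("/save/") == "http://web_app:5000/save/"
```

This is also why the compose file only exposes container ports like "5000" without mapping them to the host: the tests talk to the services over the internal Compose network.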
Now to our Jenkins CI build pipeline. Our Jenkins pipeline definition has the following steps.
1. check out code from GitHub on master
2. create the 4 Docker containers using docker-compose
>> docker-compose -f docker-ci-demo/docker-compose-ci-test.yaml up -d
Creating network "docker-ci-demo_default" with the default driver
Creating docker-ci-demo_postgres_1 ... done
Creating docker-ci-demo_redis_1 ... done
Creating docker-ci-demo_web_app_1 ... done
Creating docker-ci-demo_integration_test_1 ... done
As you can see, Docker did the heavy lifting, creating the containers and the underlying networking as well.
3. Because our test suite runs in its own container and hits the webapp’s REST API, it will return exit code 0 if the tests pass. We use docker wait to check that exit code:
docker wait docker-ci-demo_integration_test_1
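A pipeline step can consume that exit code programmatically. Here is a minimal sketch in Python 3 (the function names are illustrative); docker wait simply blocks until the named container stops, then prints its exit code on stdout:

```python
import subprocess

def interpret_exit_code(stdout):
    # py.test exits with code 0 only when every test passed,
    # and `docker wait` prints that code on stdout.
    return stdout.strip() == "0"

def integration_tests_passed(container_name):
    # Blocks until the container stops, then reads its exit code.
    result = subprocess.run(
        ["docker", "wait", container_name],
        capture_output=True, text=True, check=True,
    )
    return interpret_exit_code(result.stdout)
```

If `integration_tests_passed("docker-ci-demo_integration_test_1")` returns False, the pipeline stops before the build-and-publish step.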
4. If the tests pass, we build a new Docker image to publish to the registry and deploy to production:
cd docker-ci-demo && /usr/bin/docker build .
5. When all is done, we tear down the containers and clean up:
docker-compose -f docker-ci-demo/docker-compose-ci-test.yaml down
When the pipeline builds on Jenkins and passes the integration tests, we will have an updated Docker image to deploy to production.
That’s it. With Docker we are able to iterate locally and rest assured that our integration testing uses exactly the same infrastructure and dependencies, all without the cost of maintaining a separate testing environment. Docker containers are awesome!
As always, you can find the full code discussed in this post on the Cloudbox Labs GitHub.