What Are Docker Containers

Robert Thas John
6 min read · Aug 15, 2019

Find out what a container is, why you need one, and how to set one up using Docker.

Photo by frank mckenna on Unsplash

When I first started programming, life was much simpler. We picked a language like C or BASIC, wrote the code, and either sent the code over or compiled it and sent the binaries over. Back then, you were either using DOS or Linux, and all we needed to know was where our code needed to run. This was in the 90s.

When we started writing software for Windows, things got a bit more complicated. We used Microsoft Visual C++ as our SDK, and writing the software was pretty much the same: we would build and test on our local systems, and everything would work fine. The problem arose when we needed to ship the software to end users. At that point, we would get lots of emails about missing DLLs, files that contained functionality bundled with the SDK but not with the operating system.

Package managers were developed to manage these dependencies. We would point them at our compiled binaries and specify which SDK we used, and they would fetch every dependency and bundle it into the software installer.

I was introduced to Node.js a few years ago. The key highlight was semantic versioning, which lets you make use of libraries or packages and specify exactly which version of a package your software depends on. Things were still relatively simple back then: everything went into your package.json, as the example below shows.
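For instance, a hypothetical package.json might pin its dependencies with semantic-version ranges like these (the package names and versions here are just placeholders):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.0",
    "lodash": "~4.17.11"
  }
}

The caret allows compatible minor and patch updates, while the tilde allows only patch updates.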

Things were still simple. Node.js had one major version, and Angular and React apps were written from scratch, without a build step.

I went off for a while, and when I returned, Node had multiple major versions, and Angular and React projects required a Node toolchain of their own. You even needed to compile them!

What does all this mean? Simply that you can’t guarantee that what one developer works on from one computer will run without hiccups on another computer. Yes, you can send them your package.json and tell them what OS you are running. But, sometimes, they might have something installed that just prevents things from working.

Now, if everything works on your system, you can take the route of creating a virtual machine and sending that to users. And, quite a few pieces of software are bundled as virtual machines. The major concern with this is the size of the resulting virtual machine. You will need to install an entire operating system, and then install everything else: dependencies, databases, etc.

What if there was a way of bundling something smaller? There is, and it's called a container. Containers are part of a newer approach to configuring computing resources called Infrastructure-as-Code (IaC). One popular container management system is Docker.

Docker — Infrastructure as Code

Docker uses a configuration file called a Dockerfile. It lets you specify an operating system, any commands you want to run on the fresh installation of that OS, any files you would like to copy into the container, and finally, which command to run to get your app off the ground. The following is an example of a Dockerfile.
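This sketch follows the Dockerfile from Docker's own getting-started tutorial; app.py and requirements.txt are placeholders for your application's entry point and dependency list.

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define an environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]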

On line 2, we specify a parent image. This image provides Python version 2.7. We know that what we ultimately want is an OS, but here we ask for Python 2.7: someone has already created an image with a base operating system (normally Debian for the official Python images, though it could be something else) and Python installed on top of it.

To find out more about this image, you can visit hub.docker.com and search for it. Also, notice that the image has a -slim suffix, which specifies that we want a streamlined version of the image. Full container images can contain packages we don't need: slim images can be around 60 MB, while full-fledged images can run to 600 MB.
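You can compare the two variants for yourself by pulling both and listing their sizes; the exact numbers vary by tag, but the difference is substantial.

docker pull python:2.7-slim
docker pull python:2.7
docker images python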

Dockerfiles are normally placed in the root folder of your application or project. We need to specify where our application will live inside the container; on line 5, we set that location to the /app folder.

On line 8, we copy the contents of our application folder into /app inside the container. Then we need to install any requirements; this is done on line 11.

Our app will be accessed via port 80 from a browser, so we need to expose that port. The beauty of containers is that no ports are reachable from the outside by default. We open up port 80 on line 14.

If you are into environment variables, you might need to set one. We do that on line 17.

Finally, on line 20, we specify the command that runs our Python script when the container starts.
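To make the example concrete, here is a minimal sketch of what app.py could look like. It assumes Flask is listed in requirements.txt, reads the NAME environment variable set on line 17, and listens on the port 80 exposed on line 14.

# app.py: a minimal script for the Dockerfile above to run
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # NAME is the environment variable defined in the Dockerfile
    return "Hello, {}!".format(os.getenv("NAME", "World"))

if __name__ == "__main__":
    # Listen on all interfaces on port 80, the port the Dockerfile exposes
    app.run(host="0.0.0.0", port=80)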

All of those instructions go into our Dockerfile, but we don't have a container image yet.

Build Your Image

You need to install Docker, which you can get from docker.com. Afterward, you can build your Docker image using the following command

docker build -t my_image .

The final period points Docker at the build context: the current directory, which is where your Dockerfile lives. When the build is done, you can list your images using the following command

docker image ls -a

When you have an image, you can test it using the following command

docker run -p 4000:80 my_image

The -p parameter maps a port on your machine (4000) to the port that is open inside the container (80), so the app is reachable at http://localhost:4000.
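With the container running, you can check that the app responds from another terminal (or add -d to the run command to start the container in the background). If you used the sketch app.py above, you should get a greeting back.

curl http://localhost:4000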

To share this image, you first need to visit hub.docker.com and create an account, then come back and tag your image. You can do that using the following:

docker tag my_image username/repository:tag

Finally, push your image to the Docker registry.

docker push username/repository:tag
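For example, with a hypothetical Docker Hub username of janedoe, the two steps might look like this:

docker tag my_image janedoe/my-app:v1
docker push janedoe/my-app:v1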

Working with Docker can prove to be resource-intensive, requiring both memory and bandwidth. You might also need to keep your images private and manage who has credentialed access. That is where Google Cloud Platform comes in handy.

Cloud Build and Container Registry

You can outsource the building of your container images to the cloud. To do this, you will need an account on https://cloud.google.com. Log in, and you can proceed to the next step.

You can submit your Dockerfile and the rest of your application files to the cloud using the following command from Cloud Shell (or any terminal with the gcloud CLI installed).

gcloud builds submit --tag gcr.io/[PROJECT-ID]/my-image

[PROJECT-ID] refers to your GCP project ID, which you can find on the home page of your project console. If all goes well, your built image will be stored in Google's Container Registry. To see any images you have there, run the following command

gcloud container images list

To download and run the image on your system, run the following command

docker run -d -p 8080:80 gcr.io/[PROJECT-ID]/my-image

If you get an authentication error, run the following command

gcloud auth configure-docker

Scale-Out

Containers can be launched whenever you need them, which makes them good candidates for handling variable loads. You can spin up a container to handle web traffic, and when traffic increases, you can spin up more containers to handle the requests. When traffic goes down, you can shut down the containers you don't need, freeing up resources.

If that sounds difficult, don't worry: it typically isn't the job of the software developer. Instead, it's handled by infrastructure engineers. There are various options for scaling out and managing Docker containers. Docker itself provides Swarm mode, though you need your own physical or virtual servers to set up a swarm cluster.
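As a rough sketch, a single-node swarm running a few replicas of the image we built earlier might look like this (the service name and replica counts are arbitrary):

docker swarm init
docker service create --name web --replicas 3 -p 4000:80 my_image
docker service scale web=5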

If you work on the Google Cloud, you have two options available to you, namely Cloud Run and Google Kubernetes Engine. I will write an article on scaling out your containers on GCP in the future, so keep an eye out for it.
