How to Actually Use Docker

(Note: This post is originally from 2019, and is copied to this new site for historical reasons. Opinions, network infrastructure, and other things may have changed since this was first written.)

I’m very late to the containerization trend. I’d heard of it. I’d been familiar with the simplified explanation of “put your program in a box with a really stripped-down Ubuntu install” and kind of knew some names, like Docker and Vagrant, but I never really got around to deploying it in my network. Because up until recently, I couldn’t figure out how you’d actually use it.

I’d read Docker’s documentation. I could follow along with the commands like docker build and docker run. But I couldn’t figure out where Docker actually goes. I had a wrench and I couldn’t figure out which nut it was supposed to tighten.

And now that I’m starting to have an epiphany, I’m putting it somewhere I can find it in case the next guy has the same question.

Step 0: Get the groceries

# choose the appropriate of...
$ emerge app-emulation/docker
$ apt install docker.io

# I don't want to run docker commands as root
$ gpasswd -a lavacano docker

$ rc-service docker start
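If you want to sanity-check the install before going further, Docker ships a tiny hello-world image for exactly that. (Assumes your user is already in the docker group — you may need to log out and back in for the group change to take effect.)

```shell
# pull and run the official hello-world image; it prints a greeting and exits
docker run --rm hello-world

# or just ask the daemon for its status
docker info
```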

Step 1: Mix ingredients in bowl

The first thing you’re gonna want to do is set up a place to keep your Docker-related files, including Dockerfiles. I’m just using a subdir in my home directory for right now.

$ mkdir ~/dockerfiles
$ cd ~/dockerfiles

For every image you want to make, you’ll probably want to make a subdirectory in this dockerfiles directory. Let’s start with test.

$ mkdir test
$ cd test

# we'll want this later
$ touch Dockerfile

We’ll also want to populate this directory with other things your new container will need. This can be just about anything, but generally you’ll want the source code of the application you’re planning to deploy. Since I’ve been trying to learn Rust anyway, I found a crate that implements an HTTP server and slapped together a super basic HTTP responder named testprog to make sure I had something that worked:

// rouille = "3.0.0"
#[macro_use]
extern crate rouille;

fn main() {
    // you want to bind to "all" addresses inside a container
    // because programs will see it coming from 172.17.0.0/16
    // instead of the "real" address
    rouille::start_server("0.0.0.0:8080", move |request| {
        router!(request,

            (GET) (/) => {
                rouille::Response::text("It works!")
            },

            _ => rouille::Response::empty_404()
        )
    });
}
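Before containerizing anything, it’s worth checking the program works on its own. Assuming the crate directory is named testprog and cargo is on your PATH, something like:

```shell
cd testprog
cargo run --release &       # start the server in the background
SERVER_PID=$!
sleep 2                     # give it a moment to bind
curl http://127.0.0.1:8080/ # should print "It works!"
kill "$SERVER_PID"
```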

Step 2: Pour into loaf pan

Remember that Dockerfile we touched earlier? Now it’s time to write one.

# We want to start by inheriting a "base image". You could, if you really
# wanted to, start from an empty filesystem and debootstrap an Ubuntu into a
# base image, but it's better to inherit from an officially maintained image
# that gets pulled from Docker's repositories
FROM ubuntu:18.04

# Next, you install Rust.
# The best practice for this is to actually install Rust into a separate image
# and base your application off of *that* but I'm leaving that as an exercise
# for the reader
RUN apt-get update
RUN apt-get install -y cargo

# Set a working directory to start containers in, and copy the files as needed
# I'm just working with a basic Rust crate, so I'll stick it in /opt/testprog
WORKDIR /opt/testprog
COPY ./testprog /opt/testprog
RUN cargo build --release

# Tells Docker (and anyone reading this Dockerfile) that the application
# listens on port 8080. EXPOSE is mostly documentation: to actually receive
# incoming requests from outside, you still have to publish the port with -p
# at run time.
EXPOSE 8080

# This is the command line to run when the Docker container is actually
# launched.
CMD ["cargo", "run", "--release"]

Step 3: Bake at 275°F

Actually building the container is really easy.

$ docker build -t test .

Docker caches each build step as an intermediate image, so if you change the Dockerfile or your program source code and run docker build again, Docker reuses the cache up to the first instruction affected by the change and rebuilds from there. For instance, if the contents of the testprog dir change, Docker will start from COPY ./testprog /opt/testprog.
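You can watch the cache at work by building twice in a row. If nothing changed between builds, the second one finishes almost instantly; each reused step reports “Using cache” (or “CACHED”, depending on your Docker version):

```shell
docker build -t test .   # first build does the real work
docker build -t test .   # second build should mostly hit the cache
```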

Step 4: Test for doneness with a toothpick

$ docker run --detach --name test-119 \
    --hostname "test-119.hostname.example.com" \
    -p 8080:8080 \
    test

If you choose not to use --name and --hostname, Docker will generate a container name like kind_brahmagupta and use the container ID (which is based on some kind of hash) for the hostname. You’ll want to set names for orchestration later. The -p argument sets up port forwarding from host networking to container networking. The argument order is “host to container”; you can use -p 80:8080 to forward host port 80 to container port 8080.
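Once the container is up, a few commands help confirm it’s actually doing something (assuming the names and port mapping from the run command above):

```shell
docker ps                    # list running containers
docker logs test-119         # see the program's stdout/stderr
curl http://127.0.0.1:8080/  # talk to it through the forwarded port
```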

Docker stops the container automatically when its main process exits. I never put an exit condition in the test program, so I have to ask Docker to intervene for me.

$ docker container kill test-119
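Note that docker kill sends SIGKILL immediately. If your program has cleanup to do on shutdown, docker stop is gentler: it sends SIGTERM first and only escalates to SIGKILL after a grace period (10 seconds by default):

```shell
# give the process up to 30 seconds to shut down cleanly before SIGKILL
docker stop --time 30 test-119
```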

Docker also provides a file copy mechanism, in case you need to retrieve logs.

$ docker cp test-119:/var/log/problems.log ./problems.log
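docker cp works in both directions, so you can also push a file into a running container — config.toml here is just a hypothetical example file:

```shell
# copy a file from the host into the container's working directory
docker cp ./config.toml test-119:/opt/testprog/config.toml
```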

Step 5: Mail it to your friends

Docker can export a container to a tar archive, and you can then send that tar archive to other computers, or even other people.

$ docker export test-119 | xz --compress > test-119.tar.xz
$ xz --decompress -c test-119.tar.xz | docker import - test-image

Docker also has a Docker Hub, which is a sort of central marketplace for containerized software. I didn’t feel like making a new account and password for it though so I haven’t experimented with it much.
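For completeness, pushing an image to Docker Hub (once you do have an account) looks roughly like this — yourname is a placeholder for a Docker Hub username:

```shell
docker login                          # prompts for Docker Hub credentials
docker tag test yourname/test:latest  # images pushed to Hub need a user prefix
docker push yourname/test:latest
```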

  • October 21, 2022