
Docker

docker basics:

Why You Should Stop Installing Your WebDev Environment Locally - Smashing Magazine

General

Overview

  • docker has a client-server architecture;
  • client and server can be on the same system or different systems;
  • client and server communicate via sockets or a RESTful API;

Docker Architecture Overview

a more detailed view of the workflow

Docker Workflow

Commands

# show installation info
docker info

# search images
docker search <query>

# monitoring (show container events, such as: start, network connect, stop, die, attach)
docker events

# it provides various sub-commands to help manage different entities

# image, container
docker [image|container|volume|network] <COMMAND>

# service, stack, swarm
docker [service|stack|swarm|node|config] <COMMAND>

# other
docker [plugin|secret|system|trust] <COMMAND>

Images vs. Containers

A container is a running instance of an image: when you start an image, you get a running container of that image, and you can have many running containers of the same image;
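For example (using the public nginx image), one image can back several containers:

# two containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker ps    # both containers list 'nginx' as their image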

Images

  • created with docker build;

  • can be stored in a registry, like Docker Hub;

  • images can't be modified;

  • an image is composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network;

    for example, in the following Dockerfile, each line creates a new layer above the previous layer

    FROM ubuntu             # This has its own number of layers say "X"
    MAINTAINER FOO          # This is one layer
    RUN mkdir /tmp/foo      # This is one layer
    RUN apt-get install vim # This is one layer
    

    Docker layers

  • commands

    # list images
    docker images
    
    # list images, including intermediate ones
    docker images -a
    
    # build an image, from the Dockerfile in the current directory
    docker build -t einstein:v1 .
    
    # show history (building layers) of an image
    docker history node:slim
    
    # inspect an image
    docker inspect node:slim
    
    # remove image
    docker rmi [IMAGE_ID]
    
    # remove dangling images
    docker image prune
    
    # start the image in daemon mode, expose 80, bind it to 8080 on host
    # '--expose' is optional here, 80 is exposed automatically when you specify the ports mapping
    docker run [--expose 80] -p 8080:80 -itd my-image echo 'hello'
    
    # bind only to the 127.0.0.1 network interface on host
    docker run -p 127.0.0.1:8080:80 -itd my-image echo 'hello'
    
    # give the container a meaningful name
    docker run --name my-hello-container -itd my-image echo 'hello'
    
    # access the shell of an image
    docker run -it node:slim bash
  • about image tags

    docker images
    
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    node                9.7.1               993f38da6c6c        4 months ago        677MB
    node                8.5.0               de1099630c13        10 months ago       673MB

    an image's full tag is of this format: [REGISTRYHOST/][USERNAME/]NAME[:TAG]. The REPOSITORY column above is just the NAME part. You specify a tag with the -t option when building an image; the version tag defaults to latest
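    For example (the registry host and image names below are hypothetical):

    # build with a full tag
    docker build -t registry.example.com/einstein/relativity:v1 .

    # or add a full tag to an existing image
    docker tag relativity:v1 registry.example.com/einstein/relativity:v1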

Containers

  • containers can be started and stopped; the filesystem changes are persisted in a stopped container, and are still there when the container restarts;

  • you can create a new image from a container's changes with docker commit;

  • commands:

    # list running containers
    docker ps
    
    # list all containers
    docker ps -a
    
    # inspect a container
    docker inspect <container>
    
    # get a specific value using a Go template string
    docker inspect testweb --format="{{.NetworkSettings.IPAddress}}"
    
    # start/stop/restart/pause/unpause/attach
    # attach: connecting local stdin, stdout, stderr to a running container
    # pause|unpause: pause or unpause running processes in a container
    docker start|stop|restart|pause|unpause|attach <container>
    
    # show the output logs of a container
    docker logs <container>
    
    # convert a container to an image file
    docker commit -a 'Albert Einstein <[email protected]>' -m 'theory of relativity' <container> einstein/relativity:v1
    
    # execute a command in a running container
    docker exec node-box "node" "myapp.js"
    
    # access the command line of a running container
    docker exec -it [CONTAINER] bash
    
    # remove a container
    docker rm [CONTAINER_ID]
    
    # force remove a running container
    docker rm -f [CONTAINER_ID]

Dockerfile

Example

FROM ubuntu:xenial
MAINTAINER einstein <[email protected]>

# add a user to the image and use it, otherwise root is used for everything
RUN useradd -ms /bin/bash einstein
USER einstein

# install required tools, it's a good practice to chain shell commands together, reducing intermediate images created
RUN apt-get update && \
    apt-get install --yes openssh-server
To run a container as a specific user (`0` for `root`):

docker run -u 0 -it <image> /bin/bash

RUN

  • RUN will execute commands in a new layer on top of the current image and commit the results; the resulting image will be used for the next step in the Dockerfile;
  • the command is run by the root user by default; if a USER directive is present, the following RUN commands are run by that user;

it has two forms:

  • RUN <command> (shell form)

    • use /bin/sh -c by default on Linux;

    • the default shell can be changed using the SHELL command;

    • you can use a \ to continue a single instruction on the next line

      RUN /bin/bash -c 'source $HOME/.bashrc; \
      echo $HOME'
      
      # equivalent to
      RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'
      
  • RUN ["executable", "param1", "param2"] (exec form)

    • makes it possible to avoid shell string munging, and to RUN commands using a base image that does not contain the specified shell executable;
    • it's parsed as a JSON array, so you must use double quotes (") around each element;

the cache for RUN instructions isn't invalidated automatically during the next build, use the --no-cache flag to invalidate it
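For example:

# force every RUN instruction to re-execute instead of reusing cached layers
docker build --no-cache -t einstein:v1 .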

CMD

  • CMD sets the command to be executed when running the image; it is not executed at build time;
  • arguments to docker run will override CMD;

has three forms:

  • CMD ["executable", "param1", "param2"] (exec from, preferred)

    • must use double quotes;
    • the "executable" must be in full path;
  • CMD ["param1", "param2"] (as default params to ENTRYPOINT)

    • in this form, an ENTRYPOINT instruction should be specified with the JSON array format;
    • this form should be used when you want your container to run the same executable every time;
  • CMD command param1 param2 (shell form)

differences to RUN

  • RUN actually runs a command and commits the result; CMD does not execute at build time, but specifies what is run when instantiating a container from the image;
  • there can be multiple RUN commands in one Dockerfile, but there should only be one CMD;

ENTRYPOINT

like CMD, it specifies an application to run when instantiating a container; the difference is that the ENTRYPOINT always runs (like always running apache for an apache image) and is not overridden by command-line arguments to docker run (though it can be replaced with the --entrypoint flag)

...

# CMD and ENTRYPOINT can be used together, when both are in exec form
CMD ["CMD is running"]
ENTRYPOINT ["echo", "ENTRYPOINT is running"]

# CMD params are appended to the ENTRYPOINT exec
docker run <imagename>
# ENTRYPOINT is running CMD is running
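Arguments passed to docker run replace CMD, but are still appended to the ENTRYPOINT:

docker run <imagename> 'CMD is overridden'
# ENTRYPOINT is running CMD is overridden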

ENV

ENV MY_NAME gary

adds an environment variable in the container; it's system-wide, not specific to any user
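The value can be overridden at run time with -e (the image name here is a placeholder):

docker run -e MY_NAME=albert <image> env
# ...
# MY_NAME=albert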

ADD vs. COPY

they are basically the same; the only difference is that ADD can also extract tar files and fetch files from remote URLs

ADD example.tar.gz /add     # Will untar the file into the /add directory
COPY example.tar.gz /copy   # Will copy the file directly

files pulled in by ADD and COPY are owned by root by default, they DON'T honor USER; use a --chown flag to specify the user:

ADD --chown=someuser:somegroup /foo /bar
COPY --chown=someuser:somegroup /foo /bar

Syntax:

For either <src> or <dest>, if it's a directory, add a trailing slash to avoid any confusion:

# copy a file to a folder
COPY package.json /app/

# only copy the files in src/ to /var/www/, not src/ folder itself
COPY src/ /var/www/

# this will create /var/www/src/ folder in the image
COPY src/ /var/www/src/

.dockerignore

configures which files and directories are excluded when sending the build context to the Docker daemon, and thus ignored by ADD and COPY

the syntax is similar to .gitignore:

# ignore any .md file
*.md

# but include .md files with a name starts with 'README'
!README*.md

# ignore any .temp file that's in a one-level deep subfolder
*/*.temp

# any file with a '.cache' extension
**/.cache

Build images

BuildKit

Improvements in performance, storage management, feature functionality and security.

To enable BuildKit: DOCKER_BUILDKIT=1 docker build . or set it in /etc/docker/daemon.json:

{ "features": { "buildkit": true } }

Use secrets in docker build

  • Syntax is similar to using Docker Secrets at runtime
  • This is a secure way to use secrets while building an image, because the secrets won't be saved in the history of the final image
  • the # syntax = docker/dockerfile:1.0-experimental comment is required to enable this feature
# syntax = docker/dockerfile:1.0-experimental
FROM alpine

# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar

# build, supplying the secret from a local file
docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt .

Data Storage

By default, all files created inside a container are stored on a writable container layer:

  • That data doesn't persist, and it's hard to move it out of the container or to another host;
  • A storage driver is required to manage the filesystem, which is slower than writing to the host filesystem using data volumes;
  • Docker can store files in the host, using volumes, bind mounts or tmpfs (Linux only);

Storage types

Volumes

  • Created and managed by Docker;
  • You can create/manage them explicitly using docker volume commands (see the sketch after this list);
  • Docker creates a volume during container or service creation if the volume does not exist yet;
  • Stored in a Docker managed area on the host filesystem (/var/lib/docker/volumes on Linux), non-Docker processes should not modify it;
  • A volume can be mounted into multiple containers simultaneously, and it doesn't get removed automatically even when no running container is using it (so volumes can be mounted into a container, but they do not depend on any container);
  • Volumes can be named or anonymous; an anonymous volume gets a randomly generated unique name, otherwise they behave in the same ways;
  • They support volume drivers, which allow you to store data on remote hosts or cloud providers;
  • Best way to persist data in Docker;
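A minimal sketch of managing volumes explicitly:

# create a named volume
docker volume create mydata

# list volumes / show driver, mount point, etc.
docker volume ls
docker volume inspect mydata

# remove it
docker volume rm mydata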

Data Sharing

data sharing

If you want to configure multiple replicas of the same service to access the same files, there are two ways:

  • Add logic to your application to store data in cloud (e.g. AWS S3);
  • Create volumes with a driver that supports writing files to an external storage system like NFS or AWS S3 (in this way, you can abstract the storage system away from the application logic);

Use volumes

# `awesome` is a named volume, which is NOT removed
# `/foo` is an anonymous volume
# use `--rm` to let Docker remove the anonymous volume when the container is removed
#   named volumes are not removed
docker run --rm -v /foo -v awesome:/bar busybox top

# remove all unused volumes
docker volume prune

-v, --mount and volume driver

# start a container using `-v`
docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

# start a service using `--mount`
docker service create -d \
  --replicas=4 \
  --name devtest-service \
  --mount source=myvol2,target=/app \
  nginx:latest

# install a volume driver plugin
docker plugin install --grant-all-permissions vieux/sshfs

# use a volume driver
docker run -d \
  --name sshfs-container \
  --volume-driver vieux/sshfs \
  --mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
  nginx:latest

Backup, restore, or migrate data volumes

You can use --volumes-from to create a container that mounts volumes from another container;

  • The volume from dbstore is mounted at /dbdata, the current folder is mounted at /backup; use tar to pack /dbdata into /backup/backup.tar

    docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
  • Restore backup.tar in current directory to a new container dbstore2

    # create dbstore2 and a new volume with it
    docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
    
    # restore the backup file to the volume
    docker run --rm --volumes-from dbstore2 \
                    -v $(pwd):/backup \
                    ubuntu \
                    bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

Bind mounts

  • Available since early days of Docker;
  • A file or directory on the host machine is mounted into a container;
  • It does not need to exist on the host already; Docker will create it if it doesn't exist;
  • Can be anywhere on the host system;
  • May be important system files or directories;
  • Both non-Docker processes on the host and the Docker container can modify them at any time, so it has security implications;
  • Can't be managed by Docker CLI directly;
  • Consider using named volumes instead;

Commands

docker run -it -v /home/gary/code/super-app:/app ubuntu

tmpfs

  • Not persisted on disk;
  • Can be used by a container to store non-persistent state or sensitive info;
    • Swarm services use tmpfs to mount secrets into a service's containers;
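A short sketch of mounting a tmpfs (Linux hosts only):

# short syntax
docker run -d --name tmptest --tmpfs /app busybox top

# equivalent with the more verbose `--mount` syntax
docker run -d --name tmptest2 --mount type=tmpfs,destination=/app busybox top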

Usage

  • Use -v or --volume to mount volumes or bind mounts;

  • In Docker 17.06+, --mount is recommended; its syntax is more verbose, and it's required for creating services;

  • Volumes are good for:

    • Sharing data among multiple running containers;
    • When the Docker host is not guaranteed to have a given directory or file structure;
    • Store a container's data on a remote host or a cloud provider;
    • You need to back up, restore or migrate data from one Docker host to another: you can stop the containers using the volume, then back up the volume's directory (such as /var/lib/docker/volumes/<volume-name>);
  • Bind mounts are good for:

    • Sharing config files from the host machine to containers, by default Docker mounts /etc/resolv.conf from the host into each container for DNS resolution;
    • Sharing source code or build artifacts between a development env on the host and a container (Your production Dockerfile should copy the production-ready artifacts into the image directly, instead of relying on a bind mount);
  • If you mount an empty volume into a directory in the container in which files or directories exist, these files or directories are copied into the volume; if you start a container and specify a volume which does not already exist, an empty volume is created (see the example below);
  • If you mount a bind mount or non-empty volume into a directory in which some files or directories exist, these files or directories are obscured by the mount;
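A quick way to see the copy-into-empty-volume behavior (assuming the public nginx image):

# the image's existing html files get copied into the newly created volume
docker run --rm -v demo-vol:/usr/share/nginx/html nginx ls /usr/share/nginx/html
docker volume inspect demo-vol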

Logging

  • Docker comes with a few logging drivers, such as json-file, syslog, journald, fluentd, awslogs, ...
  • json-file is the default driver, the log is saved in /var/lib/docker/containers/<containerId>/<containerId>-json.log, and you can use docker logs <containerId> to see the logs
  • For Docker CE, you can only use docker logs with the json-file and journald drivers, not with other drivers
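With json-file you may want to cap the log size per container, e.g.:

docker run -d --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 nginx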

journald

# use journald driver, and include a container label in the log
docker run \
    --log-driver=journald \
    --log-opt labels=label.garyli.rocks \
    --label label.garyli.rocks=mylabel \
    -p 8888:80 \
    --name=garyTest \
    nginx

# retrieve the container log, filter by container name
journalctl -o json-pretty -f CONTAINER_NAME=garyTest

# or filter by label
journalctl -o json-pretty -f LABEL_GARYLI_ROCKS=mylabel

A journald log entry in json format includes some system properties, such as timestamp, hostname, etc, and the container's id, name, image name and the specified label LABEL_GARYLI_ROCKS are all captured:

{
  "__CURSOR" : ...
  "__REALTIME_TIMESTAMP" : "1600256254650198",
  "_BOOT_ID" : "b3ee38a4933a49b4964691b093630125",
  "PRIORITY" : "6",
  "_HOSTNAME" : "gary-tpx1",
  "_COMM" : "dockerd",
  "_EXE" : "/usr/bin/dockerd",
  "_SYSTEMD_UNIT" : "docker.service",
  "_TRANSPORT" : "journal",

  ...

  "LABEL_GARYLI_ROCKS" : "mylabel",
  "IMAGE_NAME" : "nginx",
  "CONTAINER_NAME" : "garyTest",
  "CONTAINER_TAG" : "eb0738878674",
  "SYSLOG_IDENTIFIER" : "eb0738878674",
  "CONTAINER_ID" : "eb0738878674",
  "CONTAINER_ID_FULL" : "eb0738878674ffe8879...",
  "MESSAGE" : "172.17.0.1 - - [16/Sep/2020:11:37:34 +0000] \"HEAD / HTTP/1.1\" 200 0 \"-\" \"curl/7.58.0\" \"-\"",
  "_SOURCE_REALTIME_TIMESTAMP" : "1600256254650178"
}

fluentd

  • Fluentd is an open source log processor; it can collect logs from multiple sources, process them and send them to multiple destinations;
  • Fluent-bit is a lightweight version of Fluentd

See an example here: https://programmaticponderings.com/2017/04/10/streaming-docker-logs-to-the-elastic-stack-using-fluentd/

The following docker compose files specify fluentbit and nginx services; the nginx service's logs are sent to fluentbit, from where you can output them to a file, Elasticsearch, Datadog, etc.

docker-compose.fluent.yml

version: "3.7"

services:
  fluentbit:
    image: fluent/fluent-bit

    deploy:
      mode: global      # one container per host

    ports:
      - target: 24224
        published: 24224
        protocol: tcp   # fluent-bit only supports TCP for forward input
        mode: host      # use host mode, no need to go through ingress routing mesh

    volumes:
      - /mnt/path:/log  # persist logs to a NAS drive

    configs:            # load config, so no need to build custom image
      - source: FLUENTBIT_CONFIG
        target: /fluent-bit/etc/fluent-bit.conf

configs:
  FLUENTBIT_CONFIG:
    external: true
    name: FLUENTBIT_CONFIG

docker-compose.nginx.yml

version: "3.7"

services:
  nginx:
    image: nginx

    ports:
      - 8080:80

    environment:
      FOO: foo
      BAR: bar

    labels:
      com.example.service: web

    logging:
      driver: fluentd
      options:
        fluentd-address: 'localhost:24224' # !IMPORTANT you need an endpoint accessible from the docker host, not inside the container
        fluentd-async-connect: 'true'      # async connection
        mode: 'non-blocking'               # non blocking
        tag: 'docker.{{.Name}}'            # the tag/name of a log entry
        env: 'FOO,BAR'                     # any env variable you want to add to the log
        labels: 'com.example.service'      # any label you want to add to the log

For fluentd-address, you need an endpoint accessible from the docker host, NOT inside the container

Example fluent-bit config:

[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224

[FILTER]
    Name              grep
    Match             *
    Exclude           log "IGNORE ME"

[OUTPUT]
    Name              stdout
    Match             *

[OUTPUT]
    Name              file
    Match             *
    Path              /log

[OUTPUT]
    Name              datadog
    Match             *
    Host              http-intake.logs.datadoghq.com
    TLS               on
    compress          gzip
    apikey            <apikey>
    dd_source         nginx
    dd_message_key    log
    dd_tags           env:local

Network

Basic commands

# list networks
docker network ls

# create a subnet (looks like this will create a virtual network adapter on a Linux host, but not a Mac)
docker network create \
                --subnet 10.1.0.0/16 \
                --gateway 10.1.0.1 \
                --ip-range=10.1.0.0/28 \
                --driver=bridge \
                bridge04

# start a container with a network, specifying a static IP for it
docker run \
        -it \
        --name test \
        --net bridge04 \
        --ip 10.1.0.2 \
        ubuntu:xenial /bin/bash

# OR
# connect a running container to a network,
# you can specify an IP address, and a network scoped alias for the container
docker network connect \
                --ip 10.1.0.2 \
                --alias ACoolName \
                bridge04 <container_name>

# show network settings and containers in this network
docker network inspect bridge04

DNS

  • By default, Docker passes the host's DNS config (/etc/resolv.conf) to a container;

  • You can specify DNS servers by

    • Adding command line option --dns
    # specify DNS servers
    docker run -d \
            --dns=8.8.8.8 \
            --dns=8.8.4.4 \
            --name testweb \
            -p 80:80 \
            httpd
    • Adding configs in /etc/docker/daemon.json (affects all containers);
    // in /etc/docker/daemon.json
    {
        ...
        "dns": ["8.8.8.8", "8.8.4.4"]
        ...
    }

Swarm

After docker swarm init, Docker will create an overlay network called ingress and a bridge network called docker_gwbridge on every node.

docker network ls
# NETWORK ID          NAME                DRIVER              SCOPE
# ...
# hzmie3wc2krb        ingress             overlay             swarm
# a08d8933c9cf        docker_gwbridge     bridge              local

# show nodes in the ingress network
docker network inspect ingress

You can create your own overlay network:

# create another overlay network, this network will be available to all nodes
docker network create \
                --driver=overlay \
                --subnet 192.168.1.0/24 \
                overlay0

# start a service using the above overlay network
docker service create \
                --name testweb \
                -p 80:80 \
                --network=overlay0 \
                --replicas 3 \
                httpd

# start another service using the same overlay network
docker service create \
                --name myservice \
                --network=overlay0 \
                --replicas 3 \
                <image>

# inspect the network, it will show containers in this network
#   including a 'overlay0-endpoint' container, serves as a load balancer
docker network inspect overlay0
  • An overlay network will be available to all nodes in a swarm;
  • If the above service testweb runs on node1.example.com and node2.example.com, you can access it from either host;
  • Any service in an overlay network can connect to other services in the same network using the service name; docker handles the DNS resolution, so in the above example, in a myservice container you can ping testweb, and any request to testweb is load balanced by the virtual endpoint container overlay0-endpoint;
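A quick way to verify this service discovery (tool availability depends on the image):

# from inside a `myservice` task container, the service name resolves and is reachable
docker exec -it <myservice-container> ping -c 2 testweb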

Network Driver Types

  • bridge

    • default on stand-alone Docker hosts;
      • The default bridge network is docker0 on the host, which has config:
        {
            "Subnet": "172.17.0.0/16",
            "Gateway": "172.17.0.1"
        }
        
      • The host is the gateway, has ip 172.17.0.1;
    • all containers on this host will use this network by default, getting IPs within 172.17.0.0/16;
    • external access is granted by publishing ports of the container's services, which are then accessed via the host;
  • none

    • when absolutely no networking is needed;
    • can only be accessed on the host;
    • can docker attach <container-id> or docker exec -it <container-id>;
  • gateway bridge

    • automatically created when initing or joining a swarm;
    • special bridge network that allows overlay networks access to an individual Docker daemon's physical network;
    • all service containers running on a node are in this network;
    • Not a Docker device, it exists in the kernel of the Docker host, you can see it with ifconfig on the host;
  • overlay

    • it is a 'swarm' scope driver: it extends itself to all daemons in the swarm (building on workers if needed);
    • Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the -p or --publish flag on docker service create;

ingress

  • A special overlay network that load balances network traffic amongst swarm worker nodes;
  • Every worker node gets an ingress-endpoint container;
  • If a service exposes any ports, then its containers are in this network;
  • Maintains a list of all IP addresses from nodes of a service; when a request comes in, it routes to one of them;
  • Provides the 'routing mesh', allowing services to be exposed to the external network without having a replica running on every node in the Swarm;
  • When you start a Swarm service and do not connect it to a user-defined overlay network, it connects to ingress by default;
  • You can customize the subnet IP range, MTU, etc (see the sketch below);
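A sketch of customizing it: the default ingress network has to be removed and recreated (do this while no service is using it):

docker network rm ingress

docker network create \
                --driver overlay \
                --ingress \
                --subnet 10.11.0.0/16 \
                --opt com.docker.network.driver.mtu=1200 \
                ingress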

Swarm networking

  • When init/join a swarm, docker_gwbridge and ingress networks are created on each node, and there is a virtual ingress-endpoint container, which is part of both networks;
  • When creating a service web, its containers are attached to both the docker_gwbridge and ingress network;
  • When deploying a stack xStack, which has two services s1 (2 replicas) and s2 (1 replica), all three containers are in the ingress network (because they publish ports), and in the docker_gwbridge network of their respective host;
    • There is an additional overlay network xStack_default, which is non-ingress;
    • xStack_default handles DNS resolution, services are accessible by name s1 and s2, so in xStack_s1.1 you can ping s2;
    • Ingress network doesn't handle DNS resolution, so in web.1, you can't ping s2;
  • Let's say web has a port binding 9000:80, then when you visit 192.168.0.1:9000, thru the docker_gwbridge network, it reaches ingress-endpoint, which keeps record of all ips of the web service, thru the ingress network, it routes the request to either web.1 (10.0.0.6) or web.2 (10.0.0.5);

Port Publishing Mode

  • Host

    • mode=host in deployment;

    • you can have at most ONE container of the service on each host;

    • used in single host environment or in environment where you need complete control over routing;

    • ports for containers are only available on the underlying host system and are NOT available for services which don't have a replica on this host;

    • in docker-compose.yml :

      ports:
        - target: 80
          published: 8080
          protocol: tcp
          mode: host      # specify mode here
  • Ingress

    • provides the 'routing mesh', making all published ports available on all hosts, so that the service is accessible from every node regardless of whether there is a replica running on it or not;

    • in docker-compose.yml :

      ports:
        - target: 80
          published: 8080
          protocol: tcp
          mode: ingress   # specify mode here

Endpoint mode

  • vip

    This is the default mode:

    docker service create \
                    --name myWeb \
                    --network myOverlay \
                    --endpoint-mode vip \
                    --replicas 2 \
                    nginx

    When you query the Docker internal DNS (127.0.0.11:53), you get one virtual IP of the service, it's the IP of an endpoint, not any specific container.

  • dnsrr

    If you have your own load balancer, you can bypass the routing mesh by setting endpoint-mode to dnsrr

    docker service create \
                    --name testweb \
                    --endpoint-mode dnsrr \
                    --replicas 2 \
                    nginx

    When you query the Docker internal DNS server 127.0.0.11:53, it gives you a list of each container's IP address.
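    You can see the difference from any container attached to the same network (DNS tooling varies by image):

    # vip mode returns one virtual IP; dnsrr mode returns one A record per task
    docker exec -it <container> nslookup testweb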

Docker Compose

A tool for defining and running multi-container Docker applications. Officially supported by Docker.

Steps for using Compose:

  1. Define your app's environment with a Dockerfile;
  2. Define the services that make up your app in docker-compose.yml so they can be run together (you may need to run docker-compose build as well);
  3. Run docker-compose up to start your app;
# rebuild and update containers
docker-compose up -d --build

# the same as
docker-compose build

docker-compose up -d

# make sure no cached images are used and all intermediate images are removed
#  use this when you updated package.json, see '--renew-anon-volumes' below as well
docker-compose build --force-rm --no-cache

# specify a project name
docker-compose -p myproject up

# you can specify multiple config files, this allows you extending base configs in different environments
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

docker-compose.yml

  • docker-compose up|run uses this file to create containers;

  • docker stack deploy uses this file to deploy stacks to a swarm as well (the old way is to use docker service create, adding all options on the command line);

  • Options specified in the Dockerfile, such as CMD, EXPOSE, VOLUME, ENV are respected;

  • Network and volume definitions are analogous to docker network create and docker volume create;

  • Options for docker-compose up|run only:

    • build: options applied at build time, if image is specified, it will be used as the name of the built image;
  • Options for docker stack deploy only:

    • deploy: config the deployment and running of services;

volumes

version: '2'

services:
  app:
    build:
    #...

    volumes:
      - .:/app # mount current directory to the container at /app
      - /app/node_modules # for "node_modules", use the existing one in the image, don't mount from the host

    #...

in the above example,

  • the mounted volumes will override any existing files in the image: the current directory . is mounted to /app, and will override the existing /app in the image;
  • but the image's /app/node_modules is preserved, not mounted from the host machine;

see details here: Lessons from Building a Node App in Docker

There is a problem with this config

see here: "docker-compose up" not rebuilding container that has an underlying updated image

  • after you update package.json on your local machine and run docker-compose up --build, the underlying images do get updated, but Docker Compose keeps using the old anonymous volume for /app/node_modules from the old container, so the newly installed package is absent from the new container;
  • adding the --renew-anon-volumes flag to docker-compose up --build solves this issue;
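So the rebuild command becomes:

docker-compose up -d --build --renew-anon-volumes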

deploy

  • restart_policy
    • condition
      • none - never restart containers;
      • on-failure - when container exited with error;
      • any - always restart container, even when the host rebooted;
    • max_attempts
    • delay
    • window

env_file

# api.env
NODE_ENV=test

version: '3'
services:
  api:
    image: 'node:6-alpine'

    env_file:
     - ./api.env

    environment:
     - NODE_ENV=production
     - APP_VERSION          # get this value from shell env

This allows you to provide a set of environment variables to the container; the precedence order of env variables:

  1. Compose file;
  2. Shell environment variable;
  3. env_file;
  4. Dockerfile;

In the above example, inside the container, NODE_ENV will be 'production', and APP_VERSION will be whatever value in the shell when you start the container;
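A quick way to verify what actually lands in the container (api is the service from the example above):

docker-compose run --rm api env | grep -E 'NODE_ENV|APP_VERSION'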

Variable substitution

db:
  image: 'postgres:${POSTGRES_VERSION}'
  extra_hosts:
    sql: ${SQL}

# .env

POSTGRES_VERSION=10.2
SQL=1.2.3.4

Variables in a compose file get their value from either the running shell, or .env file.

Please note:

  • Values in .env are used for variable substitution automatically, but they don't get set in the container's environment if you don't specify it with env_file in the compose file;
  • In later versions of docker-compose, there is a new CLI option --env-file, which allows you to specify another file instead of .env; it's not the same as the env_file option in the compose file (see the example below);
  • .env file doesn't work with docker stack deploy;
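An example of the --env-file CLI option mentioned above (prod.env is a hypothetical file name):

docker-compose --env-file ./prod.env up -d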

Using placeholders

Create services using templates - Docker doc

Some docker service create flags (thus corresponding compose file fields) support placeholders: --hostname, --mount, --env

| Placeholder         | Description    |
| ------------------- | -------------- |
| {{.Service.ID}}     | Service ID     |
| {{.Service.Name}}   | Service name   |
| {{.Service.Labels}} | Service labels |
| {{.Node.ID}}        | Node ID        |
| {{.Node.Hostname}}  | Node hostname  |
| {{.Task.ID}}        | Task ID        |
| {{.Task.Name}}      | Task name      |
| {{.Task.Slot}}      | Task slot      |

Example:

docker service create --name myWeb --hostname '{{.Node.Hostname}}' nginx

version: '3.4'

services:
  test:
    image: 'node'

    environment:
      # each container gets its unique env var
      - myTask='{{.Service.Name}} - {{.Node.Hostname}}'

    deploy:
      replicas: 3

    volumes:
      - logs:/logs/

volumes:
  logs:
    # this makes each task/container get its own volume
    # /var/lib/docker/volumes/mystack_test_taskId
    name: '{{.Service.Name}}_{{.Task.ID}}'

Multiple compose files

https://docs.docker.com/compose/extends/#multiple-compose-files

You can deploy a stack using multiple compose files, so there is a base compose file, and each env can have its own compose file containing its special settings.

docker-compose.base.yml

version: '3.6'

services:
  nginx:
    image: 'nginx'

    deploy:
      replicas: 2

docker-compose.prod.yml

version: '3.6'

services:
  nginx:
    ports:
      - 80:80

    deploy:
      replicas: 4

Deploy a Swarm stack using multiple compose files:

docker stack deploy \
              -c docker-compose.base.yml \
              -c docker-compose.prod.yml \
              garystack

Networking

By default:

  • Compose sets up a single network, every service is reachable by other services, using the service name as the hostname;

  • In this example

    # /path/to/myapp/docker-compose.yml
    
    version: '3'
    services:
      web:
        build: .
        ports:
          - '8000:8000'
        links:
          - 'db:database'
      db:
        image: mysql
        ports:
          - '8001:3306'
    • The network will be called myapp_default;
    • web can connect to the db thru db:3306;
    • Host can access the db thru <docker_ip>:8001;
    • The links directive defines an alias, so db can be accessed as database as well; it is not required (see the check below);
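A quick check of this name resolution from inside the web container (getent may not exist in every image):

docker-compose exec web getent hosts db
docker-compose exec web getent hosts database  # the alias defined by links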

Custom networks

version: '3'
services:
  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: '1'
      bar: '2'
  • Define custom networks with the top-level networks directive;
  • Each service can specify which networks to join;
  • In the example above, proxy and db are isolated from each other, app can connect to both;

See https://docs.docker.com/compose/networking/ for configuring the default network and connecting containers to external networks;

Name collision issue

for the following example:

# /path/to/MyProject/docker-compose.yml
version: '2'

services:
  app:
    build:
      #...
    #...

when you run

docker-compose up

it will create a container named MyProject_app_1; if you have another docker compose file in the same folder (or another folder with the same name) and the service is called app as well, the container names will collide, so you need to specify a --project-name option:

docker-compose --project-name <anotherName> up
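The project name can also be set via the COMPOSE_PROJECT_NAME environment variable (often put in an .env file):

COMPOSE_PROJECT_NAME=anotherName docker-compose up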

see Proposal: make project-name persistent

Docker machine

Docker Machine is a tool that lets you install Docker Engine on virtual/remote hosts, and manage the hosts with docker-machine commands.

Docker machine

# create a machine named 'default', using virtualbox as the driver
docker-machine create --driver virtualbox default

# list docker machines
docker-machine ls
# NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
# default   -        virtualbox   Running   tcp://192.168.99.100:2376           v18.06.1-ce

# one way to talk to a machine: run a command thru ssh on a machine
docker-machine ssh <machine-name> "docker images"

# another way to talk to a machine: this sets some 'DOCKER_' env variables, which make the 'docker' command talk to the specified machine
eval "$(docker-machine env <machine-name>)"

# get ip address
docker-machine ip

# stop and start machines
docker-machine stop <machine-name>
docker-machine start <machine-name>

# unset 'DOCKER_' envs
eval $(docker-machine env -u)

Swarm mode

Swarm-architecture

  • A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and/or workers;
  • Advantage over standalone containers: you can modify a service's configuration without manually restarting the service;
  • You can run one or more nodes on a single physical computer, in production, nodes are typically distributed over multiple machines;
  • A Docker host can be a manager, a worker or both;
  • You can run both swarm services and standalone containers on the same Docker host;
# init a swarm
docker swarm init

# show join tokens
docker swarm join-token [worker|manager]

# join a swarm as a node (worker or manager), you can join from any machine
docker swarm join

# show nodes in a swarm (run on a manager node)
docker node ls

# leave the swarm
docker swarm leave

Swarm on a single node

version: '3'
services:
  web:
    image: garylirocks/get-started:part2
    deploy:
      replicas: 3 # run 3 instances
      resources:
        limits:
          cpus: '0.1'
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - '4000:80'
    networks:
      - webnet
networks:
  webnet: # this is a load-balanced overlay network
  • service: A service only runs one image, but it specifies the way that image runs -- what ports it should use, how many replicas of the container should run, etc;
  • task: A single container running in a service is called a task, a service contains multiple tasks;
# init a swarm
docker swarm init

# start the service, the last argument is the app/stack name
docker stack deploy -c docker-compose.yml getstartedlab
# creates a network named 'getstartedlab_webnet'
# creates a service named 'getstartedlab_web'

# list stacks/apps
docker stack ls

# list all services
docker service ls

# list tasks for this service
docker service ps getstartedlab_web
# ID                  NAME                 IMAGE                           NODE                    DESIRED STATE       CURRENT STATE           ERROR               PORTS
# o4u5rpngt6lq        getstartedlab_web.1   garylirocks/get-started:part2   linuxkit-025000000001   Running             Running 4 minutes ago
# oqaep03q6gkf        getstartedlab_web.2   garylirocks/get-started:part2   linuxkit-025000000001   Running             Running 4 minutes ago
# tebeg1r7mb9o        getstartedlab_web.3   garylirocks/get-started:part2   linuxkit-025000000001   Running             Running 4 minutes ago

# show containers
docker ps   # container ids and names are different from task ids and names
# CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                                     NAMES
# fb1ae6433344        garylirocks/get-started:part2   "python app.py"          8 minutes ago       Up 8 minutes        80/tcp                                    getstartedlab_web.1.o4u5rpngt6lqmv44io3k269tn
# 8a1b8a50ea52        garylirocks/get-started:part2   "python app.py"          8 minutes ago       Up 8 minutes        80/tcp                                    getstartedlab_web.2.oqaep03q6gkfy3rv09vvqk2ul
# e2523c31d341        garylirocks/get-started:part2   "python app.py"          8 minutes ago       Up 8 minutes        80/tcp                                    getstartedlab_web.3.tebeg1r7mb9odm2lf9mlx217e

# scale the app: update the replicas value in the compose file, then deploy again, no need to manually stop anything
docker stack deploy -c docker-compose.yml getstartedlab

# take down the app
docker stack rm getstartedlab

# take down the swarm
docker swarm leave --force
  • Docker Swarm keeps a history of each task, so docker service ps <service> will list both running and shutdown tasks; you can add a filter option to only show running tasks: docker service ps -f "desired-state=running" <service>;
  • Or you can use docker swarm update --task-history-limit <int> to update the task history limit;

Multi-nodes swarm example

Swarm services diagram

# create docker machines
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2

# list the machines, NOTE: 2377 is the swarm management port, 2376 is the Docker daemon port
docker-machine ls

# init a swarm on myvm1, it becomes a manager
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"

# let myvm2 join as a worker to the swarm
docker-machine ssh myvm2 "docker swarm join --token <token> <ip>:2377"

# list all the nodes in the swarm
docker-machine ssh myvm1 "docker node ls"
# ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
# skcuugxvjltou1dvhzgogprs4 *   myvm1               Ready               Active              Leader              18.06.1-ce
# t57kref0g1zye30qrpabsexkk     myvm2               Ready               Active                                  18.06.1-ce

# connect to myvm1, so you can use your local `docker-compose.yml` to deploy an app without copying it
eval $(docker-machine env myvm1)

# deploy the app on the swarm
docker stack deploy -c docker-compose.yml getstartedlab

# list stacks
docker stack ls
# NAME                SERVICES            ORCHESTRATOR
# getstartedlab       1                   Swarm

# list services
docker service ls
# ID                  NAME                MODE                REPLICAS            IMAGE                           PORTS
# s6978kvj671c        getstartedlab_web   replicated          3/3                 garylirocks/get-started:part2   *:4000->80/tcp

# show tasks
docker service ps getstartedlab_web
# ID                  NAME                  IMAGE                           NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
# bt422r4gsp3p        getstartedlab_web.1   garylirocks/get-started:part2   myvm2               Running             Running 4 minutes ago
# z6q4wzex8x4z        getstartedlab_web.2   garylirocks/get-started:part2   myvm1               Running             Running 4 minutes ago
# 3805vovw1ioq        getstartedlab_web.3   garylirocks/get-started:part2   myvm2               Running             Running 4 minutes ago

# now, you can visit the app by 192.168.99.100:4000 or 192.168.99.101:4000, it's load-balanced, meaning one node may redirect a request to another node

# you can also: update the app, then rebuild and push the image;
#               or, update docker-compose.yml and deploy again;

# tear down the stack
docker stack rm getstartedlab

Swarm ingress routing

Multi-service stacks

Add visualizer and redis service to the stack,

  • visualizer doesn't depend on anything, but it should be run on a manager node;
  • redis needs data persistence, so we put it on the manager node, and add a volume mapping as well;
version: '3'
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.1'
          memory: 50M
    ports:
      - '80:80'
    networks:
      - webnet

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - '8080:8080'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet

  redis:
    image: redis
    ports:
      - '6379:6379'
    volumes:
      - '/home/docker/data:/data'
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet

networks:
  webnet:
# add the data folder on the manager node
docker-machine ssh myvm1 "mkdir ./data"

# deploy again
docker stack deploy -c docker-compose.yml getstartedlab

# list services
docker service ls
# ID                  NAME                       MODE                REPLICAS            IMAGE                             PORTS
# t3g55qxamxnv        getstartedlab_redis        replicated          1/1                 redis:latest                      *:6379->6379/tcp
# 6h3c994a1evq        getstartedlab_visualizer   replicated          1/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
# xzqj0epf49eq        getstartedlab_web          replicated          3/3                 garylirocks/get-started:part2     *:4000->80/tcp

Service placement

Constraints can be added in the compose file to put a service on specific nodes

...

    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.security==high"

...

Add labels to a node using:

docker node update --label-add security=high <node-id>
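And remove them with --label-rm:

docker node update --label-rm security <node-id>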

Configs

A good use case for config: use the same nginx image and load a different nginx.conf into it, so you don't need to build an image for each config.

  • Store non-sensitive info (e.g. config files) outside image or running containers;
  • Don't need to bind-mount;
  • Can be added to or removed from a service at any time, and services can share a config;
  • Config values can be generic strings or binary content (up to 500KB);
  • Only available to swarm services, not standalone containers;
  • Configs are managed by swarm managers; when a service has been granted access to a config, the config is mounted as a file in the container (/<config-name>), and you can set the uid, gid and mode for a config;

Basic usage using docker config commands

# create a config
echo "This is a config" | docker config create my-config -

# create a service and grant it access to the config
docker service create --name redis --config my-config redis:alpine

# inspect the config file in the container
docker container exec $(docker ps --filter name=redis -q) ls -l /my-config
# -r--r--r--    1 root     root            12 Jun  5 20:49 my-config

docker container exec $(docker ps --filter name=redis -q) cat /my-config
# This is a config

# update a service, removing access to the config
docker service update --config-rm my-config redis

# remove a config
docker config rm my-config

Use for Nginx config

You have already got two secret files: site.key, site.crt and a config file site.conf:

server {
    listen                443 ssl;
    server_name           localhost;
    ssl_certificate       /run/secrets/site.crt;
    ssl_certificate_key   /run/secrets/site.key;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

# create secrets and config
docker secret create site.key site.key
docker secret create site.crt site.crt
docker config create site.conf site.conf

# create a service using the secrets and config
docker service create \
     --name nginx \
     --secret site.key \
     --secret site.crt \
     --config source=site.conf,target=/etc/nginx/conf.d/site.conf,mode=0440 \
     --publish published=3000,target=443 \
     nginx:latest \
     sh -c "exec nginx -g 'daemon off;'"

in the running container, the following three files now exist:

  • /run/secrets/site.key
  • /run/secrets/site.crt
  • /etc/nginx/conf.d/site.conf

Rotate a config

Update site.conf:

# create a new config using the updated file
docker config create site-v2.conf site.conf

# update the service, removing old config, adding new one
docker service update \
  --config-rm site.conf \
  --config-add source=site-v2.conf,target=/etc/nginx/conf.d/site.conf,mode=0440 \
  nginx

# remove old config from the swarm
docker config rm site.conf

Usage in Compose file

Example

version: '3.3'

services:
  redis:
    image: redis:latest

    configs:
      # short syntax
      - my_config
      - my_other_config

      # long syntax
      - source: my_config
        target: /redis_config
        uid: '103'
        gid: '103'
        mode: 0440


configs:
  my_config:
    file: ./my_config.txt
  my_other_config:
    external: true
  yet_another_config:
    external: true

To put Docker configs under version control, since you can't update an existing config, you need to give it a new name for every deployment:

version: '3.6'

services:
  nginx:
    image: nginx

    configs:
      - source: x
        target: /x.txt

configs:
  x:
    name: x.$BUILD_ID   # build id in CI
    file: ./x.txt

On remote server:

BUILD_ID=12 docker stack deploy -c './docker-compose.yml' mystack
# Creating network mystack_default
# Creating config x.12
# Creating service mystack_nginx

deploy again

BUILD_ID=13 docker stack deploy -c './docker-compose.yml' mystack
# Creating config x.13
# Updating service mystack_nginx (id: nb4xk465kl1vxgxp4jq7r1djl)

These configs are scoped to the stack, when you remove the stack, they are deleted too:

docker config ls
# ID                          NAME                CREATED             UPDATED
# olih9b2a594bxryn7hz50ephg   x.12                9 minutes ago       9 minutes ago
# 3lkh69r8g704mqwmobqnyw41e   x.13                8 seconds ago       8 seconds ago

docker stack rm mystack
# Removing service mystack_nginx
# Removing config x.13
# Removing config x.12
# Removing network mystack_default

Secrets

Sensitive data that a container needs at runtime should not be stored in the image or in source control:

  • Usernames and passwords;
  • TLS certificates and keys;
  • SSH keys;
  • Name of a database or internal server;
  • Generic strings or binary content (up to 500kb);

Usage:

  • Secret is encrypted in transition and at rest, it's replicated across all managers;

  • Decrypted secret is mounted into the container in an in-memory filesystem, the mount point defaults to /run/secrets/<secret_name>;

  • Management commands:

    • docker secret create;
    • docker secret inspect;
    • docker secret ls;
    • docker secret rm;
    • --secret flag for docker service create;
    • --secret-add and --secret-rm flags for docker service update;

Secrets are persistent, they still exist after you restart the docker daemon

Example: Use secrets with a WordPress service

the mysql and wordpress images have been created in a way that you can pass in the password directly via an environment variable (MYSQL_PASSWORD) or via a secret file (MYSQL_PASSWORD_FILE).

# generate a random string as a secret 'mysql_password'
openssl rand -base64 20 | docker secret create mysql_password -

# root password, not shared with Wordpress service
openssl rand -base64 20 | docker secret create mysql_root_password -

# create a custom network, MySQL service doesn't need to be exposed
docker network create -d overlay mysql_private

# create a MySQL service using the above secrets
docker service create \
     --name mysql \
     --replicas 1 \
     --network mysql_private \
     --mount type=volume,source=mydata,destination=/var/lib/mysql \
     --secret source=mysql_root_password,target=mysql_root_password \
     --secret source=mysql_password,target=mysql_password \
     -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
     -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
     -e MYSQL_USER="wordpress" \
     -e MYSQL_DATABASE="wordpress" \
     mysql:latest

# create a Wordpress service
docker service create \
     --name wordpress \
     --replicas 1 \
     --network mysql_private \
     --publish published=30000,target=80 \
     --mount type=volume,source=wpdata,destination=/var/www/html \
     --secret source=mysql_password,target=wp_db_password,mode=0400 \
     -e WORDPRESS_DB_USER="wordpress" \
     -e WORDPRESS_DB_PASSWORD_FILE="/run/secrets/wp_db_password" \
     -e WORDPRESS_DB_HOST="mysql:3306" \
     -e WORDPRESS_DB_NAME="wordpress" \
     wordpress:latest

# verify the services are running
docker service ls

Rotate a secret

Here we rotate the password of the wordpress user, not the root password:

# create a new password and store it as a secret
openssl rand -base64 20 | docker secret create mysql_password_v2 -

# remove the old secret, mount it again under a new name, and add the new password secret; the old one is still needed for actually updating the password in MySQL
docker service update \
     --secret-rm mysql_password mysql
docker service update \
     --secret-add source=mysql_password,target=old_mysql_password \
     --secret-add source=mysql_password_v2,target=mysql_password \
     mysql

# update MySQL password using the `mysqladmin` CLI
docker container exec $(docker ps --filter name=mysql -q) \
    bash -c 'mysqladmin --user=wordpress --password="$(< /run/secrets/old_mysql_password)" password "$(< /run/secrets/mysql_password)"'

# update the WP service, this triggers a rolling restart of the WP service and makes it use the new secret
docker service update \
     --secret-rm mysql_password \
     --secret-add source=mysql_password_v2,target=wp_db_password,mode=0400 \
     wordpress

# remove old secret
docker service update \
     --secret-rm mysql_password \
     mysql
docker secret rm mysql_password

Example compose file

version: '3.1'

services:
  db:
    image: mysql:latest
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_root_password
      - db_password

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - '8000:80'
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: db_password.txt
  db_root_password:
    file: db_root_password.txt

volumes:
  db_data:

The above compose file would create secret <stack_name>_db_password in the swarm.

From Compose File v3.5,

  • if you want to use a secret that already exists in the swarm, set external: true,
  • and it allows name-mapping: in the following example, the secret is named redis_secret in the swarm and my_second_secret within the stack; this can be leveraged for secret rotation
version "3.5"

...

secrets:
  my_first_secret:
    external: true
  my_second_secret:
    external: true
    name: redis_secret

Node.js in Docker

Docker and Node.js Best Practices

  • Use init

    Node.js was not designed to run as PID 1; for example, it will not respond to SIGINT and similar signals. You should use the --init flag to wrap your Node.js process

    Using Docker run:

    docker run -it --init node

    Using docker-compose:

    version: "3.8"
    services:
      web:
        image: alpine:latest
        init: true
    
  • Use the non-privileged user node (see the check after this list)

  • CMD

    Instead of using npm start, use node directly. This reduces the number of processes, and causes exit signals such as SIGTERM and SIGINT to be received by the Node.js process instead of npm

    CMD ["node","index.js"]

Tips / Best Practices

  • On Mac, you can talk to a container through port binding, but you may NOT be able to ping the container's IP address;

  • Don't put apt-get update on a different line than apt-get install; the result of apt-get update will get cached and won't run every time. The following is a good example of how this should be done:

    # From https://github.com/docker-library/golang
    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
        g++ \
        gcc \
        libc6-dev \
        make \
        && rm -rf /var/lib/apt/lists/*
    
  • To utilize Docker's caching capability better, install dependencies before copying over everything else; this makes sure other changes don't trigger a rebuild (e.g. non-package.json changes don't trigger node package downloads)

    COPY ./my-app/package.json /home/app/package.json   # copy over dependency config first
    WORKDIR /home/app/
    RUN npm install                 # result get cached here
    
    COPY ./my-app/ /home/app/       # copy over other stuff
    
  • Serve current folder using NginX

    docker run -v "$(pwd):/usr/share/nginx/html" -p 4000:80 nginx