1. Overview
When using Docker extensively, the management of several different containers quickly becomes cumbersome.
Docker Compose is a tool that helps us overcome this problem and easily handle multiple containers at once.
In this tutorial, we’ll examine its main features and powerful mechanisms.
2. The YAML Configuration Explained
In short, Docker Compose works by applying many rules declared within a single docker-compose.yml configuration file.
These YAML rules, both human-readable and machine-optimized, provide an effective way to describe the whole project from a ten-thousand-foot view in a few lines.
Almost every rule replaces a specific Docker command, so that in the end, we just need to run:
docker-compose up
We can get dozens of configurations applied by Compose under the hood. This will save us the hassle of scripting them with Bash or something else.
In this file, we need to specify the version of the Compose file format, at least one service, and optionally volumes and networks:
version: "3.7"
services:
  ...
volumes:
  ...
networks:
  ...
Let’s see what these elements actually are.
2.1. Services
First of all, services refer to the containers’ configuration.
For example, let’s take a dockerized web application consisting of a front end, a back end, and a database. We’d likely split these components into three images, and define them as three different services in the configuration:
services:
  frontend:
    image: my-vue-app
    ...
  backend:
    image: my-springboot-app
    ...
  db:
    image: postgres
    ...
There are multiple settings that we can apply to services, and we’ll explore them in more detail later on.
2.2. Volumes & Networks
Volumes, on the other hand, are physical areas of disk space shared between the host and a container, or even between containers. In other words, a volume is a shared directory in the host, visible from some or all containers.
Similarly, networks define the communication rules between containers, and between a container and the host. Common network zones will make the containers’ services discoverable by each other, while private zones will segregate them in virtual sandboxes.
Again, we’ll learn more about them in the next section.
3. Dissecting a Service
Now let’s begin to inspect the main settings of a service.
3.1. Pulling an Image
Sometimes, the image we need for our service has already been published (by us or by others) in Docker Hub, or another Docker Registry.
If that’s the case, then we refer to it with the image attribute by specifying the image name and tag:
services:
  my-service:
    image: ubuntu:latest
    ...
3.2. Building an Image
Alternatively, we might need to build an image from the source code by reading its Dockerfile.
This time, we’ll use the build keyword, passing the path to the Dockerfile as the value:
services:
  my-custom-app:
    build: /path/to/dockerfile/
    ...
We can also use a URL instead of a path:
services:
  my-custom-app:
    build: https://github.com/my-company/my-project.git
    ...
Additionally, we can specify an image name in conjunction with the build attribute, which will name the image once created, making it available for use by other services:
services:
  my-custom-app:
    build: https://github.com/my-company/my-project.git
    image: my-project-image
    ...
3.3. Configuring the Networking
Docker containers communicate between themselves in networks created, implicitly or through configuration, by Docker Compose. A service can communicate with another service on the same network by simply referencing it by the container name and port (for example network-example-service:80), provided that we’ve made the port accessible through the expose keyword:
services:
  network-example-service:
    image: karthequian/helloworld:latest
    expose:
      - "80"
In this case, it would also work without exposing it, because the EXPOSE instruction is already present in the image’s Dockerfile.
To reach a container from the host, the ports must be exposed declaratively through the ports keyword, which also allows us to choose if we’re exposing the port differently in the host:
services:
  network-example-service:
    image: karthequian/helloworld:latest
    ports:
      - "80:80"
    ...
  my-custom-app:
    image: myapp:latest
    ports:
      - "8080:3000"
    ...
  my-custom-app-replica:
    image: myapp:latest
    ports:
      - "8081:3000"
    ...
Port 80 will now be visible from the host, while port 3000 of the other two containers will be available on ports 8080 and 8081 in the host. This powerful mechanism allows us to run different containers exposing the same ports without collisions.
Finally, we can define additional virtual networks to segregate our containers:
services:
  network-example-service:
    image: karthequian/helloworld:latest
    networks:
      - my-shared-network
    ...
  another-service-in-the-same-network:
    image: alpine:latest
    networks:
      - my-shared-network
    ...
  another-service-in-its-own-network:
    image: alpine:latest
    networks:
      - my-private-network
    ...
networks:
  my-shared-network: {}
  my-private-network: {}
In this last example, we can see that another-service-in-the-same-network will be able to ping and reach port 80 of network-example-service, while another-service-in-its-own-network won’t.
3.4. Setting Up the Volumes
There are three types of volumes: anonymous, named, and host.
Docker manages both anonymous and named volumes, automatically mounting them in self-generated directories in the host. While anonymous volumes were useful with older versions of Docker (pre 1.9), named ones are now the suggested way to go. Host volumes also allow us to specify an existing folder in the host.
We can configure host volumes at the service level, and named volumes at the outer level of the configuration, in order to make the latter visible to other containers, rather than only to the one they belong to:
services:
  volumes-example-service:
    image: alpine:latest
    volumes:
      - my-named-global-volume:/my-volumes/named-global-volume
      - /tmp:/my-volumes/host-volume
      - /home:/my-volumes/readonly-host-volume:ro
    ...
  another-volumes-example-service:
    image: alpine:latest
    volumes:
      - my-named-global-volume:/another-path/the-same-named-global-volume
    ...
volumes:
  my-named-global-volume:
Here, both containers will have read/write access to the my-named-global-volume shared folder, regardless of which path they’ve mapped it to. Instead, the two host volumes will be available only to volumes-example-service.
The /tmp folder of the host’s file system is mapped to the /my-volumes/host-volume folder of the container. This portion of the file system is writeable, which means that the container can read and also write (and delete) files in the host machine.
We can mount a volume in read-only mode by appending :ro to the rule, like for the /home folder (we don’t want a Docker container erasing our users by mistake).
3.5. Declaring the Dependencies
Often, we need to create a dependency chain between our services so that some services get loaded before (and unloaded after) other ones. We can achieve this result through the depends_on keyword:
services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    depends_on:
      - zookeeper
    ...
  zookeeper:
    image: wurstmeister/zookeeper
    ...
We should be aware, however, that Compose won’t wait for the zookeeper service to finish loading before starting the kafka service; it’ll simply wait for it to start. If we need a service to be fully loaded before starting another service, we need to get deeper control of the startup and shutdown order in Compose.
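One way to get that deeper control is the long form of depends_on combined with a healthcheck. The following is only a sketch: the condition: service_healthy syntax is supported by the Compose file format 2.1 and by recent implementations of the Compose specification (it was dropped in the 3.x format), and the probe below is hypothetical, assuming nc is available inside the zookeeper image:

```yaml
services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    depends_on:
      zookeeper:
        condition: service_healthy  # wait for the healthcheck, not just for startup
  zookeeper:
    image: wurstmeister/zookeeper
    healthcheck:
      # ZooKeeper answers "imok" to the "ruok" four-letter command
      test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
      interval: 10s
      timeout: 5s
      retries: 5
```

With this in place, kafka starts only after zookeeper’s healthcheck has passed.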
4. Managing Environment Variables
Working with environment variables is easy in Compose. We can define static environment variables, as well as dynamic variables, with the ${} notation:
services:
  database:
    image: "postgres:${POSTGRES_VERSION}"
    environment:
      DB: mydb
      USER: "${USER}"
There are different methods to provide those values to Compose.
For example, one method is setting them in a .env file in the same directory, structured like a .properties file, key=value:
POSTGRES_VERSION=alpine
USER=foo
Otherwise, we can set them in the OS before calling the command:
export POSTGRES_VERSION=alpine
export USER=foo
docker-compose up
Finally, we might find it easy to use a simple one-liner in the shell:
POSTGRES_VERSION=alpine USER=foo docker-compose up
We can mix the approaches, but let’s keep in mind that Compose uses the following priority order, overwriting the less important with the higher priorities:
- Compose file
- Shell environment variables
- Environment file
- Dockerfile
- Variable not defined
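As a quick sanity check of that order, we can render the resolved configuration with docker-compose config. A sketch, assuming the docker-compose.yml from above sits in the current directory:

```shell
# the .env file provides a default value...
echo "POSTGRES_VERSION=alpine" > .env
# ...but a shell environment variable has higher priority
export POSTGRES_VERSION=11
# prints the fully resolved file; the image should appear as postgres:11
docker-compose config
```

Since shell environment variables rank above the environment file, the exported value wins.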
5. Scaling & Replicas
In older Compose versions, we were allowed to scale the instances of a container through the docker-compose scale command. Newer versions deprecated it and replaced it with the --scale option.
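For example, assuming our file defines a service named worker, we could run:

```shell
# start (or adjust to) three instances of the worker service;
# this replaces the deprecated "docker-compose scale worker=3"
docker-compose up -d --scale worker=3
```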
We can also exploit Docker Swarm, a cluster of Docker Engines, and scale our containers declaratively through the replicas attribute of the deploy section:
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 6
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    ...
Under deploy, we can also specify many other options, like the resources thresholds. Compose, however, considers the whole deploy section only when deploying to Swarm, and ignores it otherwise.
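To actually honor the deploy section, we’d deploy the file as a Swarm stack. A minimal sketch, where my-stack is an arbitrary stack name:

```shell
# turn this Docker Engine into a single-node swarm manager
docker swarm init
# deploy the services as a stack; deploy/replicas/resources are applied here
docker stack deploy --compose-file docker-compose.yml my-stack
# verify how many replicas of each service are running
docker service ls
```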
6. A Real-World Example: Spring Cloud Data Flow
While small experiments help us understand the single gears, seeing the real-world code in action will definitely unveil the big picture.
Spring Cloud Data Flow is a complex project, but simple enough to be understandable. Let’s download its YAML file and run:
DATAFLOW_VERSION=2.1.0.RELEASE SKIPPER_VERSION=2.0.2.RELEASE docker-compose up
Compose will download, configure, and start every component, and then interleave the containers’ logs into a single flow in the current terminal.
It’ll also apply a unique color to each of them for a great user experience.
We might get the following error running a brand new Docker Compose installation:
lookup registry-1.docker.io: no such host
While there are different solutions to this common pitfall, using 8.8.8.8 as DNS is probably the simplest.
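On Linux, for instance, we can set the DNS server in the Docker daemon configuration file, /etc/docker/daemon.json, and then restart the daemon (e.g. with sudo systemctl restart docker):

```json
{
  "dns": ["8.8.8.8"]
}
```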
7. Lifecycle Management
Now let’s take a closer look at the syntax of Docker Compose:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
While there are many options and commands available, we need to at least know the ones to activate and deactivate the whole system correctly.
7.1. Startup
We’ve seen that we can create and start the containers, the networks, and the volumes defined in the configuration with up:
docker-compose up
After the first time, however, we can simply use start to start the services:
docker-compose start
If our file has a different name than the default one (docker-compose.yml), we can exploit the -f and --file flags to specify an alternate file name:
docker-compose -f custom-compose-file.yml start
Compose can also run in the background as a daemon when launched with the -d option:
docker-compose up -d
7.2. Shutdown
To safely stop the active services, we can use stop, which will preserve containers, volumes, and networks, along with every modification made to them:
docker-compose stop
To reset the status of our project, we can simply run down, which will destroy everything with the exception of external volumes:
docker-compose down
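A few related commands round out the day-to-day lifecycle; a short sketch (note that the -v flag also removes the named volumes declared in the file, so it goes further than a plain down):

```shell
# list the containers of the current Compose project
docker-compose ps
# follow the aggregated, colored logs of all services
docker-compose logs -f
# tear down containers and networks, plus the named volumes declared in the file
docker-compose down -v
```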
8. Conclusion
In this article, we learned about Docker Compose and how it works.
As usual, we can find the source docker-compose.yml file on GitHub, along with a helpful battery of tests.