1. Overview
In this tutorial, we’ll explain how to access Spring Boot logs in Docker, from local development to sustainable multi-container solutions.
2. Basic Console Output
To begin with, let’s build our Spring Boot Docker image from our previous article:
$> mvn spring-boot:build-image
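As a side note, the build-image goal also lets us pick the image name explicitly through the spring-boot.build-image.imageName parameter; the coordinates below are only an illustration:

$> mvn spring-boot:build-image -Dspring-boot.build-image.imageName=docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT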
Then, when we run our container, we can immediately see STDOUT logs in the console:
$> docker run --name=demo-container docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT
Setting Active Processor Count to 1
WARNING: Container memory limit unset. Configuring JVM for 1G container.
This command follows the logs, just like the Linux shell tail -f command.
Now, let’s configure our Spring Boot application with a log file appender by adding a line to the application.properties file:
logging.file.path=logs
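By default, this produces a file named spring.log inside the logs directory. If we want to control the file name as well, Spring Boot also offers the logging.file.name property; the value below is just an example:

logging.file.name=logs/application.log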
Then, we can obtain the same result by running the tail -f command in our running container:
$> docker exec -it demo-container tail -f /workspace/logs/spring.log
Setting Active Processor Count to 1
WARNING: Container memory limit unset. Configuring JVM for 1G container.
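If we only need a one-off copy of the log file on the host, the docker cp command is another option; the target path here is arbitrary:

$> docker cp demo-container:/workspace/logs/spring.log ./spring.log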
That’s it for single-container solutions. In the next chapters, we’ll learn how to analyze log history and log output from composed containers.
3. Docker Volume for Log Files
If we must access log files from the host filesystem, we have to create a Docker volume.
To do this, we can run our application container with the command:
$> docker run --name=demo-container -v /path-to-host:/workspace/logs docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT
Then, we can see the spring.log file in the /path-to-host directory.
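If we'd rather not depend on a specific host path, a named volume is an alternative; the volume name spring-logs is just an example:

$> docker volume create spring-logs
$> docker run -v spring-logs:/workspace/logs docker.io/library/spring-boot-docker:0.0.1-SNAPSHOT
$> docker volume inspect spring-logs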
Building on our previous article on Docker Compose, we can run multiple containers from a single Docker Compose file.
If we’re using a Docker Compose file, we should add the volumes configuration:
network-example-service-available-to-host-on-port-1337:
  image: karthequian/helloworld:latest
  container_name: network-example-service-available-to-host-on-port-1337
  volumes:
    - /path-to-host:/workspace/logs
Then, let’s run the article Compose file:
然后,让我们运行文章中的Compose文件。
$> docker-compose up
The log files are available in the /path-to-host directory.
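Since the logs now live on the host, we can follow them with standard shell tools, for example:

$> tail -f /path-to-host/spring.log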
Now that we’ve reviewed the basic solutions, let’s explore the more advanced docker logs command.
In the following chapters, we assume that our Spring Boot application is configured to print logs to STDOUT.
4. Docker Logs for Multiple Containers
As soon as we run multiple containers at once, we can no longer simply read their mixed log output from a single console.
We can find in the Docker Compose documentation that containers are set up by default with the json-file log driver, which supports the docker logs command.
Let’s see how it works with our Docker Compose example.
First, let’s find our container id:
$> docker ps
CONTAINER ID   IMAGE                           COMMAND
877bb028a143   karthequian/helloworld:latest   "/runner.sh nginx"
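If we already know the container name, we can also ask docker ps for the id directly; the filter below simply reuses the Compose service name from the example above:

$> docker ps -q --filter "name=network-example-service-available-to-host-on-port-1337"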
Then, we can display our container logs with the docker logs -f command. We can see that, despite the json-file driver, the output is still plain text — JSON is only used internally by Docker:
$> docker logs -f 877bb028a143
172.27.0.1 - - [22/Oct/2020:11:19:52 +0000] "GET / HTTP/1.1" 200 4369 "
172.27.0.1 - - [22/Oct/2020:11:19:52 +0000] "GET / HTTP/1.1" 200 4369 "
The -f option behaves like the tail -f shell command: it echoes the log output as it’s produced.
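docker logs also accepts a few options for trimming the output, such as limiting it to the last lines or to a recent time window; the values below are arbitrary:

$> docker logs --tail 100 --since 10m -t 877bb028a143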
Note that if we’re running our containers in Swarm mode, we should use the docker service ps and docker service logs commands instead.
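For example, assuming a Swarm service named helloworld, the equivalent commands would look like this:

$> docker service ps helloworld
$> docker service logs -f helloworld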
In the documentation, we can see that the docker logs command only works with a limited set of logging drivers: json-file, local, or journald.
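We can check which driver a given container uses with docker inspect, for instance:

$> docker inspect -f '{{.HostConfig.LogConfig.Type}}' 877bb028a143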
5. Docker Drivers for Log Aggregation Services
The docker logs command is especially useful for instant watching, but it doesn't provide complex filters or long-term statistics.
For that purpose, Docker supports several log aggregation service drivers. As we studied Graylog in a previous article, we’ll configure the appropriate driver for this platform.
This configuration can be set globally for the host in the daemon.json file, which is located in /etc/docker on Linux hosts or C:\ProgramData\docker\config on Windows servers.
Note that we should create the daemon.json file if it doesn’t exist:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://1.2.3.4:12201"
  }
}
The Graylog driver is called GELF — we simply specify the IP address of our Graylog instance.
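After editing daemon.json, the Docker daemon has to be restarted for the new default to take effect; on a systemd-based Linux host, that typically looks like this, after which we can verify the active default driver:

$> sudo systemctl restart docker
$> docker info --format '{{.LoggingDriver}}'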
We can also override this configuration when running a single container:
$> docker run \
--log-driver gelf --log-opt gelf-address=udp://1.2.3.4:12201 \
alpine echo hello world
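The same override can also be declared per service in a Docker Compose file through the logging section; this sketch simply reuses the address from above:

network-example-service-available-to-host-on-port-1337:
  image: karthequian/helloworld:latest
  logging:
    driver: gelf
    options:
      gelf-address: "udp://1.2.3.4:12201"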
6. Conclusion
In this article, we’ve reviewed different ways to access Spring Boot logs in Docker.
Logging to STDOUT makes log watching quite easy from a single-container execution.
However, using file appenders isn’t the best option if we want to benefit from the Docker logging features, as containers don’t have the same constraints as proper servers.