Docker Logging: How Do Logs Work With Docker Containers?

Docker containers are a great way to create lightweight, portable, and self-contained application environments.

Logging is critical for every application since it gives valuable information for troubleshooting, evaluating performance issues, and drawing an overall picture of the behavior of your architecture.

This article presents a thorough tutorial covering all you need to know to start with Docker logging. It also provides some recommended practices for optimizing the logs of your containerized apps.

What Is a Docker Container?

A Docker container is a standard unit of software that wraps up code and all its dependencies so the program can be moved from one environment to another, quickly and reliably.

Containerized software, available for both Linux and Windows-based applications, will always run the same way regardless of the infrastructure. 

Containers isolate software from its environment, ensuring that it performs consistently despite differences between environments, such as development and staging.

Docker container technology was introduced as an open-source Docker Engine in 2013.

What Is a Docker Image?

A Docker image is a lightweight, standalone, executable software package that contains everything required to run an application: code, system tools, system libraries, and settings.

In other words, an image is a read-only template with instructions for constructing a container that can operate on the Docker platform. It provides an easy method to package up programs and preset server environments that you can use privately or openly with other Docker users.

The Importance of Logging Docker Containerized Applications

Logging is unquestionably one of the most critical considerations when developing containerized applications. Log management enables teams to debug and resolve issues more quickly, making it easier to discover problems, spot bugs, and ensure they don’t resurface.

Much has changed as the software stack has evolved from hardware-centric infrastructure to Dockerized microservices-based programs, but one constant has been the significance of logging.

You can no longer get by with a few fundamental metrics like availability, latency, and failures per second. Those were adequate for classic applications that ran on a single node and required little debugging. 

When using Docker, you often have to look far and wide for root causes, and the time it takes to resolve issues is critical to providing an excellent user experience.

A containerized application's behavior should be captured in logs, and those logs should be consolidated in a convenient location. A log analysis tool can then use the log messages to build a comprehensive picture of the events throughout your application.

Why Is Docker Logging Different from Traditional Logging? 

The main challenge with logging for containerized apps is that they create various log streams. These could be structured, unstructured, and plain text messages in multiple formats. 

The complexity of maintaining and analyzing container logs grows as containers continuously produce log output in massive quantities. 

It is also difficult for development teams to identify, monitor, and map log events to the containers that generated them, which makes log processing slow and complicated.

Most standard log analysis tools do not work with containerized logging, and debugging becomes more challenging than with monolithic applications that run on a single node.

Such complexities are due to the stateless containerized architecture that Docker containers provide, which has the following two characteristics:

1. Docker Containers Are Impermanent

Docker containers are impermanent by design, meaning that a container can be stopped and destroyed. A new one can be constructed from the same image and deployed quickly with minimal setup and configuration.

This also means that any logs stored inside the container are destroyed when it is terminated.

To prevent your logs from disappearing, you need a log aggregator that collects them and saves them in a location that remains available permanently. 

Keeping logs on the Docker host is risky since they might accumulate and consume disk space. That is why it's essential to save your logs in a centralized location or a data volume.

2. Docker Containers Are Multi-Level

In Docker logging, there are at least two levels of aggregation. The first relates to logs from your Docker containers, while the second refers to host server logs, such as system logs or Docker daemon logs.

These different levels call for a specialized log aggregator that has access to the host, retrieves application log files, and can reach the file system inside the container to gather logs.

Docker Container Logs

What Are Container Logs?

Docker container logs, in a nutshell, are the console output of running containers. Specifically, they capture the stdout and stderr streams of the processes running within a container.

As previously stated, Docker logging is not the same as logging elsewhere. Everything written to the stdout and stderr streams in Docker is implicitly forwarded to a logging driver, which makes it possible to access the logs and write them to a file.

Logs can also be viewed in the console. The docker logs command displays the output of a running container, while the docker service logs command displays the output of all containers that are members of a service.
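
For example, assuming a container named web (a placeholder name), the following commands print its output, then follow it live with timestamps while limiting the history to the last 100 lines:

docker logs web
docker logs --follow --tail 100 --timestamps web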

What Is a Docker Logging Driver?

The Docker logging drivers gather data from containers and make it accessible for analysis.

If no additional log-driver option is supplied when a container is launched, Docker will use the json-file driver by default. A few important notes on this:

  • Log-rotation is not performed by default. As a result, log files kept using the json-file logging driver can consume a significant amount of disk space for containers that produce a large amount of output, potentially leading to disk space depletion.
  • Docker preserves the json-file logging driver — without log-rotation — as the default to maintain backward compatibility with older Docker versions and for instances when Docker is used as a Kubernetes runtime.
  • The local driver is preferable because it automatically rotates logs and uses a more efficient file format (see the example configuration after this list).
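
As a sketch, you could make the local driver the default and tune its rotation behavior in daemon.json; the max-size and max-file values below are illustrative, not recommendations:

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}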

Docker also includes logging drivers for sending logs to various services — for example, a logging service, a log shipper, or a log analysis platform. There are many different Docker logging drivers available. Some examples are listed below:

  • syslog — A long-standing and widely used standard for logging applications and infrastructure.
  • journald — A structured alternative to Syslog’s unstructured output.
  • fluentd — An open-source data collector for a unified logging layer.
  • awslogs — AWS CloudWatch logging driver. If you host your apps on AWS, this is a fantastic choice (see the example after this list).
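
As a minimal sketch, a single container can be pointed at one of these drivers when it is started. The example below uses awslogs and assumes the Docker host already has AWS credentials configured and that the log group my-app-logs exists; both the region and the group name are placeholders:

docker run -it --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=my-app-logs alpine echo "hello"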

You do, however, have several alternative logging driver options, which you can find in the Docker logging docs.

Docker also allows logging driver plugins, enabling you to write your own Docker logging drivers and make them available over Docker Hub. At the same time, you can use any plugins accessible on Docker Hub.

Logging Driver Configuration

To configure a Docker logging driver as the default for all containers, you can set the value of the log-driver to the name of the logging driver in the daemon.json configuration file.

This example sets the default logging driver to the local driver:

{
  "log-driver": "local"
}
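
Note that changes to daemon.json take effect only after the Docker daemon is restarted, and the new default applies only to containers created afterwards. On a systemd-based Linux host, restarting the daemon typically looks like this:

sudo systemctl restart docker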

Another option is configuring a driver on a container-by-container basis. When you start a container, you can use the --log-driver flag to specify a different logging driver than the Docker daemon's default.

The code below starts an Alpine container with the local Docker logging driver:

docker run -it --log-driver local alpine ash

The docker info command will provide you with the current default logging driver for the Docker daemon.
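
If you only want that single value rather than the full docker info output, you can use the command's format flag:

docker info --format '{{.LoggingDriver}}'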

Docker Logs With Remote Logging Drivers

Previously, the docker logs command could only be used with containers that utilized the local, json-file, or journald logging drivers. Many third-party Docker logging drivers did not support reading logs locally through docker logs.

When attempting to collect log data automatically and consistently, this caused a slew of issues. Log information could only be accessed and displayed in the format required by the third-party solution.

Starting with Docker Engine 20.10, you can use docker logs to read container logs independent of the logging driver or plugin that is enabled. 

This capability, called dual logging, requires no configuration changes: Docker Engine 20.10 and later enables it by default whenever the chosen Docker logging driver does not support reading logs.

Where Are Docker Logs Stored?

When using the default json-file driver, Docker keeps container logs in its default location, /var/lib/docker/. Each container's log file is named after its ID (the full ID, not the shortened one that is usually displayed), and you can access it as follows:

/var/lib/docker/containers/ID/ID-json.log
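
Rather than assembling that path by hand, you can ask Docker for it and tail the file directly; the container name web is a placeholder, and the path is populated when the json-file driver is in use:

docker inspect --format '{{.LogPath}}' web
sudo tail -f $(docker inspect --format '{{.LogPath}}' web)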

What Are the Docker Logging Delivery Modes?

Docker logging delivery modes refer to how the container balances or prioritizes logging against other tasks. The available Docker logging delivery modes are blocking and non-blocking. Both options can be applied regardless of which Docker logging driver you select.

Blocking Mode

When in blocking mode, the program will be interrupted whenever a message needs to be delivered to the driver.

The advantage of blocking mode is that all logs are forwarded to the logging driver, even though there may be a lag in your application's performance. In this sense, this mode prioritizes logging over performance.

Depending on the Docker logging driver you choose, your application’s latency may vary. For example, the json-file driver, which writes to the local filesystem, produces logs rapidly and is unlikely to block or create a significant delay.

By contrast, Docker logging drivers that require the container to connect to a remote location may block it for extended periods, resulting in increased latency.

Docker’s default mode is blocking.

When to Use the Blocking Mode?

The json-file logging driver in blocking mode is recommended for most use cases. As mentioned before, the driver is quick since it writes to a local file, so it is generally safe to use in blocking mode.

Blocking mode should also be used for memory-hungry applications that require the bulk of the RAM available to your containers. The reason is that if the driver cannot deliver logs to its endpoint due to a problem such as a network issue, there may not be enough memory left for the buffer that non-blocking mode would require.

Non-Blocking Mode

In non-blocking mode, logging does not prevent the application from running. Instead of waiting for logs to be delivered to their destination, the container stores them in an in-memory buffer.

Though the non-blocking Docker logging delivery mode appears to be the preferable option, it also introduces the possibility of some log entries being lost. Because the memory buffer in which the logs are saved has a limited capacity, it might fill up. 

Furthermore, if a container breaks, logs may be lost before being released from the buffer.

You can override Docker's default blocking mode for new containers by adding a log-opts entry to the daemon.json file. The max-buffer-size option, which controls the capacity of the memory buffer mentioned above, can also be changed from its 1 MB default.

{
  "log-driver": "local",
  "log-opts": {
    "mode": "non-blocking"
  }
}
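
If you also want a larger buffer than the 1 MB default, a variant of the same file could look like this; the 4m value is illustrative:

{
  "log-driver": "local",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "4m"
  }
}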

You can also provide log-opts for a single container. The following example creates an Alpine container with non-blocking log output and a 4 MB buffer:

docker run -it --log-opt mode=non-blocking --log-opt max-buffer-size=4m alpine

When to Use Non-Blocking Mode?

Consider using the json-file driver in the non-blocking mode if your application has a big I/O demand and generates a significant number of logs. 

Because writing logs locally is rapid, the buffer is unlikely to fill quickly. If your program does not create spikes in logging, this configuration should handle all of your logs without interfering with performance.

For applications where performance is more of a priority than logging but which cannot use the local file system for logs, such as mission-critical applications, you can provide enough RAM for a reliable buffer and use non-blocking mode. This setting should ensure that logging does not hamper performance while the container still handles most log data.

Docker Daemon Logs

What Are Daemon Logs?

The Docker platform generates and stores logs for its daemons. Depending on the host operating system, daemon logs are written to the system’s logging service or a log file.

Container logs alone only give you insight into the state of your services. You also need to be informed about the state of your entire Docker platform, and that is what the daemon logs are for: they provide an overview of your whole microservices architecture.

Assume a container shuts down unexpectedly. Because the container terminates before any log events can be captured, we cannot pinpoint the underlying cause using the docker logs command or an application-based logging framework. 

Instead, we may filter the daemon log for events that contain the container name or ID and sort by timestamp, which allows us to establish a chronology of the container’s life from its origin through its destruction.

The daemon log also contains helpful information about the host’s status. If the host kernel does not support a specific functionality or the host setup is suboptimal, the Docker daemon will note it during the initialization process.

Depending on the operating system settings and the Docker logging subsystem utilized, the logs may be kept in one of many locations. In Linux, you can look at the journalctl records:

sudo journalctl -xu docker.service
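
To reconstruct a container's history as described above, you can filter those records for its name or ID; the name web and the time window are placeholders here:

sudo journalctl -u docker.service --since "1 hour ago" | grep web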

Analyzing Docker Logs

Log data must be evaluated before it can be used, and analyzing it is like hunting for a needle in a haystack: you are typically looking for the one line with an error among thousands of lines of routine log entries. A solid analysis platform is required to extract the real value of logs, which makes log collection and analysis tools critical. Here are some of the options.

Fluentd

Fluentd is a popular open-source solution for logging your complete stack, including non-Docker services. It’s a data collector that allows you to integrate data gathering and consumption for improved data utilization and comprehension.
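
As a minimal sketch, a container can ship its logs to a Fluentd agent via the fluentd logging driver; this assumes an agent is already listening on localhost:24224, the default forward port, and the tag value is a placeholder:

docker run -d --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=my-app alpine echo "hello"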

ELK

ELK is the most widely used open-source log data analysis solution. It's a set of tools: Elasticsearch for storing log data, Logstash for processing log data, and Kibana for displaying data via a graphical user interface. 

ELK is an excellent solution for Docker log analysis since it provides a solid platform maintained by a big developer community and is free.
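
One common way to feed Docker logs into ELK is the gelf logging driver; the sketch below assumes Logstash is already running with a GELF input on udp://localhost:12201, which you would have to configure separately:

docker run -d --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 alpine echo "hello"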

Advanced Log Analysis Tools

With open-source alternatives, you must set up and manage your stack yourself, which entails allocating the necessary resources and ensuring that your tools are highly available and hosted on scalable infrastructure. This can require a significant amount of IT resources as well.

That’s where more advanced log analysis platforms offer tremendous advantages. For example, tools like LogicMonitor’s SaaS platform for log intelligence and aggregation can give teams quick access to contextualized and connected logs and metrics in a single, unified cloud-based platform.

These sophisticated technologies leverage the power of machine learning to enable companies to reduce troubleshooting, streamline IT operations, and increase control while lowering risk.

Docker Logging Best Practices

Now that you understand how Docker logging works, you can make the best use of the logging drivers to tailor the best solution for a specific application. 

The following are additional best practices developers should consider for optimizing the Docker logging process.

Use Data Volumes

As mentioned before in the article, containers are temporary. If they fail to work correctly, all log data and files contained within the container are lost and cannot be recovered. 

To ensure the data within a container is safe, developers should use data volumes: designated directories within containers used to store persistent or commonly shared log data. 

Data volumes make it easy to share data with other containers and reduce the likelihood of data loss.

This method involves creating a directory within your container that points to a directory on the host system, where long-term or commonly shared data is stored regardless of what happens to your container. You can then make copies of and back up the logs and access them from other containers.
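
A minimal sketch, assuming the application writes its logs to /var/log/my-app inside the container; the host path and image name are placeholders:

docker run -d -v /host/logs/my-app:/var/log/my-app my-app-image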

Use Application-Based Logging

Application-based Docker logging is advantageous for teams working in typical application contexts. It gives developers additional control over logging events. This method works by logging and analyzing data using the application’s framework. 

Application-based logging does not require the additional capability to transport logs to the host. Instead, developers can handle logging procedures through the application’s framework.

Have a Dedicated Logging Container

A dedicated logging container within a Docker container helps with log management. It collects, monitors, analyzes, and sends logs to a centralized place. 

Development teams can then handle logs and quickly manage and scale log events, and logging containers do not require any setup code to be installed to perform these activities.

This logging technique simplifies container migration between hosts and allows you to extend your Docker logging infrastructure by simply adding new Docker logging containers. Simultaneously, it will enable you to collect logs via several streams of log events, Docker API data, and stats.

Use the Sidecar Method

Using a sidecar is one of the most popular techniques for logging microservices systems for more extensive and complicated deployments.

It uses Docker logging containers the same way the dedicated container solution does. The difference is that each application container has its own dedicated logging container, allowing you to tailor the logging solution for each program. 

The first container saves log files to a volume, and then the files are labeled and transmitted to a third-party log management system by the Docker logging container.

One of the primary benefits of using sidecars is adding additional custom tags to each log, making it easier to identify their origins.

Conclusion

While Docker containerization allows developers to encapsulate an application and its file system into a single portable package, it is far from maintenance-free. Docker logging is more complicated than conventional approaches, and teams that adopt Docker should be aware of this.

Teams must become familiar with Docker logs to achieve full-stack visibility, troubleshooting, performance enhancements, root cause analysis, and so on.

As we've seen in this post, Docker includes logging drivers and commands in the platform to simplify logging, as well as mechanisms for gathering performance data and plugins for integrating with third-party logging tools. Various approaches and tactics can help you build a Docker logging infrastructure that optimizes your logging capabilities, but each has pros and cons. It is recommended that you invest in comprehensive log analysis and container monitoring tools to gain complete visibility into your microservices and containerized applications.