Docker Logging Driver
The following article describes how to implement a unified logging system for your containers. Any production application requires recording certain events or problems during runtime.
The old-fashioned way is to write these messages to a log file, but that introduces certain problems, specifically when we try to perform some analysis over the records; or, on the other hand, if the application has multiple instances running, the scenario becomes even more complex.
In Docker v1.6, the concept of logging drivers was introduced, which means that the Docker engine is aware of output interfaces that manage the application messages.
Currently, the fluentd logging driver doesn't support sub-second precision.
A basic understanding of Docker
Docker v1.8+
The first step is to prepare Fluentd to listen for the messages that it will receive from the Docker containers. For demonstration purposes, we will instruct Fluentd to write the messages to the standard output; in a later step, you will find how to accomplish the same by aggregating the logs into a MongoDB instance.
Create a simple file called in_docker.conf which contains the following entries:
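A minimal sketch of such a configuration accepts forwarded messages on TCP port 24224 (the driver's default) and prints every event to the standard output; adjust the port and match pattern as needed:

```
# Accept messages from the Docker Fluentd logging driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Print every matched event to the standard output
<match *>
  @type stdout
</match>
```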
Start an instance of Fluentd with this simple command:
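Assuming Fluentd is installed locally (e.g. via the fluentd gem) and the configuration file above is in the current directory, the launch command looks like this:

```shell
fluentd -c in_docker.conf
```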
If the service has started, you should see output like this:
By default, the Fluentd logging driver will try to find a local Fluentd instance (step #2) listening for connections on TCP port 24224. Note that the container will not start if it cannot connect to the Fluentd instance.
The following command will run a base Ubuntu container and print some messages to the standard output. Note that we have launched the container specifying the Fluentd logging driver:
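A sketch of such a command (the echoed message is illustrative; this requires a running Docker daemon and a Fluentd instance listening on localhost:24224):

```shell
docker run --log-driver=fluentd ubuntu echo "Hello Fluentd!"
```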
Now, in the Fluentd output, you will see the incoming message from the container, e.g.:
At this point you will notice something interesting: the incoming messages have a timestamp, are tagged with the container_id, and contain general information from the source container along with the message, everything in JSON format.
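As an illustration of the record shape (the values below are placeholders, not real output), the driver populates container_id, container_name, source, and log fields:

```json
{
  "container_id": "<full 64-character container ID>",
  "container_name": "<container name>",
  "source": "stdout",
  "log": "Hello Fluentd!"
}
```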
Original event:
Filtered event:
Original events:
Filtered events:
fluentd-address
tag
Specify an optional address for Fluentd; it allows you to set the host and TCP port, e.g.:
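A sketch of the option in use (the address 192.168.2.4:24225 is illustrative; point it at your own Fluentd host and port):

```shell
docker run --log-driver=fluentd --log-opt fluentd-address=192.168.2.4:24225 ubuntu echo "Hello remote Fluentd!"
```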
Additionally, this option allows you to specify some internal variables: {{.ID}}, {{.FullID}} or {{.Name}}, e.g.:
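For example, a tag built from the short container ID could be set like this (the docker. prefix is an illustrative choice):

```shell
docker run --log-driver=fluentd --log-opt tag=docker.{{.ID}} ubuntu echo "Tagged with the short container ID"
```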
In a more real-world use case, you would want to use something other than the Fluentd standard output to store Docker container messages, such as Elasticsearch, MongoDB, HDFS, S3, Google Cloud Storage, and so on.
This document describes how to set up multi-container logging environment via EFK (Elasticsearch, Fluentd, Kibana) with Docker Compose.
In a production environment, you must use one of the container orchestration tools. Currently, Kubernetes has better integration with Fluentd, and we're working on making better integrations with other tools as well.
For Docker v1.8, we have implemented a native Fluentd logging driver, so now you are able to have a unified and structured logging system with the simplicity and high performance of Fluentd.
Using the Docker logging mechanism with Fluentd is a straightforward step. To get started, make sure you have the following prerequisites:
A basic understanding of Fluentd
A basic understanding of Docker logging drivers
This article launches Fluentd as a standard process, not as a container. Please refer to the Docker Compose article for a fully containerized environment tutorial.
The application log is stored in the "log" field of the record. You can parse this log with the parser filter before sending it to the destinations.
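A sketch of such a parser filter, written in Fluentd v1 configuration syntax, assuming the events are tagged docker.* and the application writes JSON lines into the "log" field:

```
<filter docker.**>
  @type parser
  key_name log         # parse the contents of the "log" field
  reserve_data true    # keep the other fields (container_id, etc.)
  <parse>
    @type json         # assumes the application emits JSON lines
  </parse>
</filter>
```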
The application log is stored in the "log" field of the records. You can concatenate these logs with the concat filter before sending them to the destinations.
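A sketch of such a concat filter, assuming the fluent-plugin-concat plugin is installed and events are tagged docker.*; the start-of-entry regexp is illustrative and must match your application's log format:

```
<filter docker.**>
  @type concat
  key log
  stream_identity_key container_id              # group lines per container
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/   # illustrative: entries start with a date
</filter>
```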
If the logs are typical stacktraces, consider using the detect-exceptions plugin instead.
The Fluentd logging driver supports more options through the --log-opt Docker command-line argument:
Tags are a major requirement for Fluentd: they allow identifying the incoming data and taking routing decisions. By default, the Fluentd logging driver uses the container_id (the full 64-character container ID) as the tag. You can change its value with the tag option as follows:
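For example (the tag docker.my_new_tag is an illustrative choice):

```shell
docker run --log-driver=fluentd --log-opt tag=docker.my_new_tag ubuntu echo "Custom tagged message"
```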
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.