Docker Compose


This article explains how to collect logs and propagate them to an EFK (Elasticsearch + Fluentd + Kibana) stack. The example uses Docker Compose to set up multiple containers.

Elasticsearch had been an open-source search engine known for its ease of use. Kibana had been an open-source Web UI that makes Elasticsearch user-friendly for marketers, engineers and data scientists alike.

NOTE: Since v7.11, these products have been distributed under a non-open-source license (dual licensed under the Server Side Public License and the Elastic License).

By combining these three tools (EFK: Elasticsearch + Fluentd + Kibana) we get a scalable, flexible, easy-to-use log collection and analytics pipeline. In this article, we will set up four (4) containers:

  • Apache HTTP Server
  • Fluentd
  • Elasticsearch
  • Kibana

All the logs of httpd will be ingested into Elasticsearch + Kibana, via Fluentd.

Prerequisites: Docker

Please download and install Docker / Docker Compose. Well, that's it :)
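
As a quick sanity check, you can confirm that both are installed from a shell (any reasonably recent version should work for this walkthrough):

# Verify Docker and the Compose plugin are available
$ docker --version
$ docker compose version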

Step 0: Create docker-compose.yml

Create docker-compose.yml for Docker Compose. Docker Compose is a tool for defining and running multi-container Docker applications.

With the YAML file below, you can create and start all the services (in this case: Apache, Fluentd, Elasticsearch, Kibana) with a single command:

services:
  web:
    image: httpd
    ports:
      - "8080:80"
    depends_on:
      - fluentd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access

  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    depends_on:
      # Launch fluentd after elasticsearch is ready to accept connections
      elasticsearch:
        condition: service_healthy
    ports:
      - "24224:24224"
      - "24224:24224/udp"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.17.1
    container_name: elasticsearch
    hostname: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false # Disable security for testing
    healthcheck:
      # Check whether service is ready
      test: ["CMD", "curl", "-f", "http://localhost:9200/_cluster/health"]
      interval: 10s
      retries: 5
      timeout: 5s
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:8.17.1
    depends_on:
      # Launch kibana after elasticsearch is ready to accept connections
      elasticsearch:
        condition: service_healthy
    ports:
      - "5601:5601"

The logging section of the web container (see the Docker Compose documentation) specifies the Docker Fluentd Logging Driver as its default container logging driver. All the logs from the web container will automatically be forwarded to the host:port specified by fluentd-address.

Step 1: Create Fluentd Image with your Config + Plugin

Create fluentd/Dockerfile with the following content, using the Fluentd official Docker image; and then install the Elasticsearch plugin:

# fluentd/Dockerfile

FROM fluent/fluentd:edge-debian
USER root
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "5.4.3"]
USER fluent

Then, create the Fluentd configuration file fluentd/conf/fluent.conf. The forward input plugin receives logs from the Docker logging driver, and the elasticsearch output plugin forwards these logs to Elasticsearch:

# fluentd/conf/fluent.conf

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy

  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>

  <store>
    @type stdout
  </store>
</match>
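
As an optional check once the containers are up (see Step 2), you can send a hand-crafted event to this forward input with fluent-cat, a small utility bundled with local Fluentd installations. This is a sketch that assumes Fluentd (and thus fluent-cat) is installed on the host; the tag debug.test is arbitrary:

# Send one JSON event to the forward input exposed on port 24224
$ echo '{"message":"hello from fluent-cat"}' | fluent-cat -h localhost -p 24224 debug.test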

NOTE: For details on the parameters used with @type elasticsearch, see the fluent-plugin-elasticsearch documentation and its Elasticsearch parameters section.

Step 2: Start the Containers

Let's start the containers:

$ docker compose up --detach

Use the docker ps command to verify that the four (4) containers are up and running:

$ docker ps
CONTAINER ID   IMAGE                                                  COMMAND                   CREATED          STATUS                    PORTS                                                                                                    NAMES
7a489886d856   httpd                                                  "httpd-foreground"        36 seconds ago   Up 14 seconds             0.0.0.0:8080->80/tcp, [::]:8080->80/tcp                                                                  fluentd-elastic-kibana-web-1
36ded62da733   fluentd-elastic-kibana-fluentd                         "tini -- /bin/entryp…"    36 seconds ago   Up 15 seconds             5140/tcp, 0.0.0.0:24224->24224/tcp, 0.0.0.0:24224->24224/udp, :::24224->24224/tcp, :::24224->24224/udp   fluentd-elastic-kibana-fluentd-1
254b7692966f   docker.elastic.co/kibana/kibana:8.17.1                 "/bin/tini -- /usr/l…"    36 seconds ago   Up 15 seconds             0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                                fluentd-elastic-kibana-kibana-1
187d3e5c2e08   docker.elastic.co/elasticsearch/elasticsearch:8.17.1   "/bin/tini -- /usr/l…"    37 seconds ago   Up 35 seconds (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9300/tcp                                                      elasticsearch
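
Because the <match> section above also copies every record to stdout, you can watch events arrive by tailing the Fluentd service logs; this is handy for debugging before Kibana is set up:

# Follow the fluentd container's output
$ docker compose logs -f fluentd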

Step 3: Generate httpd Access Logs

Use the curl command to generate some access logs, like this:

$ curl http://localhost:8080/
<html><body><h1>It works!</h1></body></html>
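
A single request is enough, but if you want more data to look at in Kibana, a small shell loop works too (a throwaway example; adjust the count as you like):

# Generate ten access-log entries
$ for i in $(seq 1 10); do curl -s http://localhost:8080/ > /dev/null; done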

Step 4: Confirm Logs from Kibana

Browse to http://localhost:5601/app/discover#/ and create a data view.

Specify fluentd-* as the Index pattern and click Save data view to Kibana.

Then, go to the Discover tab to check the logs. As you can see, logs are properly collected into Elasticsearch + Kibana, via Fluentd.
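
If you prefer to verify from the command line first, you can ask Elasticsearch directly whether the daily fluentd-* indices exist. This works here because xpack.security.enabled=false in the Compose file above; with security enabled you would need credentials:

# List the indices created by the elasticsearch output plugin
$ curl "http://localhost:9200/_cat/indices/fluentd-*?v"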

Learn More

  • Fluentd: Architecture
  • Fluentd: Get Started
  • Downloading Fluentd

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.
