Fluentd 0.12


Performance Tuning Single Process


Last updated 5 years ago


This article describes how to optimize Fluentd's performance within a single process. If your traffic is up to 5,000 messages/sec, the following techniques should be sufficient.

With more traffic, Fluentd tends to be more CPU bound. In that case, please also see the Performance Tuning (Multi-Process) article to utilize multiple CPU cores.

Check top command

If Fluentd doesn't perform as well as expected, check the top command first. You need to identify which part of your system is the bottleneck (CPU? Memory? Disk I/O?).

Avoid extra computations

This is a general recommendation, but it is always better NOT to have extra computation inside Fluentd. Fluentd is flexible enough to do quite a bit internally, but adding too much logic to its configuration file makes the file difficult to read and maintain, and also less robust. Keep the configuration file as simple as possible.

Use num_threads parameter

If the destination of your logs is remote storage or a remote service, adding the num_threads option parallelizes your outputs (the default is 1). Using multiple threads can hide I/O and network latency. This parameter is available for all output plugins.

<match test>
  @type output_plugin
  num_threads 8
  ...
</match>

Note that this option does not improve processing performance (e.g. numerical computation or record mutation); it only overlaps the time spent waiting on I/O.
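To see why multiple flush threads help with latency-bound destinations, here is a small Ruby sketch. It is an illustration only, not Fluentd's actual flush mechanics: the sleep stands in for a network round trip, and the chunk count and latency are made-up numbers.

```ruby
require "benchmark"

# Simulate 8 buffer chunks whose "flush" is dominated by network latency.
LATENCY = 0.05 # seconds per simulated round trip
chunks = Array.new(8) { |i| "chunk-#{i}" }

# num_threads 1: each flush waits for the previous one to finish.
serial = Benchmark.realtime do
  chunks.each { sleep LATENCY }
end

# num_threads 8: all flushes wait on the "network" concurrently.
threaded = Benchmark.realtime do
  chunks.map { Thread.new { sleep LATENCY } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

Because the threads spend their time sleeping (waiting), not computing, they overlap almost perfectly; this is exactly the I/O-bound case where num_threads pays off.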

Use external gzip command for S3/TD

Ruby has a GIL (Global Interpreter Lock), which allows only one thread to execute at a time. While I/O waits can be multiplexed, CPU-intensive tasks block all other jobs. One of the CPU-intensive tasks in Fluentd is compression.

The new versions of the S3 and Treasure Data plugins allow compression to happen outside of the Fluentd process, using the external gzip command. This frees up the Ruby interpreter, allowing Fluentd to process other tasks while compression runs.

# S3
<match ...>
  @type s3
  store_as gzip_command
  num_threads 8
  ...
</match>

# Treasure Data
<match ...>
  @type tdlog
  use_gzip_command
  num_threads 8
  ...
</match>

While not a perfect solution for leveraging multiple CPU cores, this can be effective for most Fluentd deployments. As before, you can combine this with the num_threads option.
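The difference between in-process and external compression can be sketched in plain Ruby (assuming the gzip command is on your PATH; the log data here is illustrative):

```ruby
require "open3"
require "zlib"
require "stringio"

# A fake buffer chunk of repeated access-log lines (contents are illustrative).
data = "127.0.0.1 - - \"GET / HTTP/1.0\" 200 2326\n" * 1_000

# In-process compression: Zlib runs inside the Ruby interpreter and holds
# the GIL while compressing, so other Ruby threads (e.g. event routing in
# a real Fluentd process) are blocked in the meantime.
buf = StringIO.new
gz = Zlib::GzipWriter.new(buf)
gz.write(data)
gz.close
in_process = buf.string

# External command: the gzip child process spends its own CPU time, leaving
# the Ruby interpreter free to keep handling other work.
external, _status = Open3.capture2("gzip", "-c", stdin_data: data, binmode: true)

puts "in-process: #{in_process.bytesize} B, external: #{external.bytesize} B"
```

Both produce gzip-format output; the win is not the bytes but where the CPU time is spent.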

Reduce memory usage

Ruby has several GC parameters for tuning GC performance, and you can configure them via environment variables (see Ruby's GC parameter list for the full set). To reduce memory usage, set RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR to a lower value. This parameter controls when a full GC is triggered, and its default is 2.0. Quoting the Ruby documentation:

Do full GC when the number of old objects is more than R * N,
  where R is this factor and
  N is the number of old objects just after the last full GC.

So by default, Ruby does not run a full GC until the number of old objects reaches 2.0 times the count measured just after the previous full GC. This improves throughput, but it also grows total memory usage. This setting is not a good fit for low-resource environments, e.g. a small container. For such cases, try RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 or RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.2.

See "Ruby 2.1 Garbage Collection: ready for production" and "Watching and Understanding the Ruby 2.1 Garbage Collector at Work" for more detail.

Multi-process plugin

CPU is often the bottleneck for Fluentd instances that handle billions of incoming records. To utilize multiple CPU cores, we recommend the in_multiprocess plugin; see the Performance Tuning (Multi Process) article for details.
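Returning to the GC tuning above: the counters that RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR acts on can be inspected from Ruby itself via GC.stat. A minimal sketch (key names are from modern MRI; the relationship shown in the comment is the documented default behaviour):

```ruby
# Run a full GC first so the counters reflect a fresh baseline.
GC.start

stat = GC.stat
puts "old objects:           #{stat[:old_objects]}"
puts "old objects limit:     #{stat[:old_objects_limit]}"
puts "major (full) GC count: #{stat[:major_gc_count]}"

# With the default factor (2.0), old_objects_limit is roughly
# 2.0 * old_objects measured just after the last full GC.
```

In practice you would launch Fluentd with the variable set in its environment, e.g. `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 fluentd -c fluent.conf`, and watch the process RSS to confirm the effect.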

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.
