Buffer Plugins

Fluentd has 6 types of plugins: Input, Parser, Filter, Output, Formatter and Buffer. This article will provide a high-level overview of Buffer plugins.

Buffer Plugin Overview

Buffer plugins are used by output plugins. For example, out_s3 uses the buf_file plugin by default to store the incoming stream temporarily before transmitting it to S3.

Buffer plugins are, as the name suggests, pluggable, so you can choose a suitable backend based on your system requirements.

Buffer Structure

A buffer is essentially a set of "chunks". A chunk is a collection of records concatenated into a single blob. Chunks are periodically flushed to the output queue and then sent to the specified destination.
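
Conceptually, the flow looks roughly like this:

    incoming records --> [ current chunk ]          records are appended to the chunk
                               |
                               |  chunk reaches its size limit, or a flush is triggered
                               v
                        [ output queue ]  ------->  destination (e.g. S3)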

Common Parameters

Buffer plugins allow fine-grained control over buffering behaviour through config options. The most common options are listed below, with an example configuration after the list.

buffer_type

  • This option specifies which plugin to use as the backend.

  • In default installations, supported values are file and memory.

buffer_chunk_limit

  • The maximum size of a chunk allowed (default: 8MB)

  • If a chunk grows beyond the limit, it gets flushed to the output queue automatically.

buffer_queue_limit

  • The maximum length of the output queue (default: 256)

  • If the limit is exceeded, Fluentd invokes an error handling mechanism (see "Handling queue overflow" below for details).

flush_interval

  • The interval in seconds to wait before invoking the next buffer flush (default: 60)
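
Here is a minimal sketch of how these options fit together in a buffered output section (the forward output, tag pattern, host, and buffer path are placeholders, not prescriptions):

    <match myapp.**>
      @type forward
      buffer_type file                  # back the buffer with files on disk
      buffer_path /var/log/fluent/myapp.*.buffer
      buffer_chunk_limit 8m             # flush a chunk once it reaches 8 MB
      buffer_queue_limit 256            # allow at most 256 chunks in the queue
      flush_interval 60s                # flush at least once a minute
      <server>
        host 192.168.0.1
      </server>
    </match>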

Handling queue overflow

The buffer_queue_full_action option controls the behaviour when the queue becomes full. For now, three modes are supported (a configuration example follows the list):

  1. exception (default)

    • This mode raises a BufferQueueLimitError exception to the input plugin.

    • This mode is suitable for data streaming.

    • It's up to the input plugin to decide how to handle raised exceptions.

  2. block

    • This mode blocks the thread until free space becomes available.

    • This mode is good for batch-like use cases.

    • DO NOT use this option casually just to avoid BufferQueueLimitError exceptions (see the deployment tips below).

  3. drop_oldest_chunk

    • This mode drops the oldest chunks.

    • This mode might be useful for monitoring systems, since newer events are much more important than older ones in this context.
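
As a sketch, a monitoring pipeline that prefers fresh events could opt in to dropping the oldest chunks (the tag, host, and queue limit are illustrative):

    <match metrics.**>
      @type forward
      buffer_queue_limit 128
      buffer_queue_full_action drop_oldest_chunk   # alternatives: exception (default), block
      <server>
        host monitor.example.com
      </server>
    </match>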

Deployment tips

If your Fluentd daemon experiences queue overflows frequently, it means that your destination cannot keep up with your traffic. There are several measures you can take:

  1. Upgrade the destination node to provide enough data-processing capacity.

  2. Use an @ERROR label to route overflowed events to another backup destination.

  3. Use a <secondary> directive to route overflowed events to another backup destination (see the sketch after this list).
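
Here is a minimal sketch of measure 3, assuming a forward output backed up by a local file (the host and path are hypothetical):

    <match myapp.**>
      @type forward
      <server>
        host primary.example.com
      </server>
      # Events that cannot be delivered to the primary destination
      # are written here instead of being discarded.
      <secondary>
        @type file
        path /var/log/fluent/forward-backup
      </secondary>
    </match>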

Handling write failures

The chunks in the output queue are written out to the destination one by one. However, an error may occur while writing out a chunk. For such cases, buffer plugins are equipped with a "retry" mechanism that handles write failures gracefully.

Here is how it works. If Fluentd fails to write out a chunk, the chunk is not purged from the queue; after a certain interval, Fluentd will retry writing the chunk. The intervals between retry attempts are determined by an exponential backoff algorithm, and you can fine-tune the behaviour through the following options (a configuration example follows the list):

retry_limit

  • The maximum number of retries for writing a chunk (default: 17)

  • If retry_limit is exceeded, Fluentd will discard the given chunk.

disable_retry_limit

  • If set to true, this disables retry_limit and makes Fluentd retry indefinitely (default: false).

retry_wait

  • The number of seconds the first retry will wait (default: 1.0)

  • This is the seed value used by the exponential backoff algorithm; the wait interval is doubled on each retry.

max_retry_wait

  • The maximum interval in seconds to wait between retries (default: unset)

  • Once the wait interval reaches this limit, the exponentiation stops.
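
For instance, with retry_wait 1.0 the wait intervals grow roughly as 1, 2, 4, 8, ... seconds, doubling on every attempt until max_retry_wait caps them. A sketch with illustrative values (the output and host are placeholders):

    <match myapp.**>
      @type forward
      retry_wait 1.0              # first retry after ~1 second
      max_retry_wait 300          # cap the doubling at 5 minutes
      retry_limit 10              # give up on a chunk after 10 failed attempts
      # disable_retry_limit true  # uncomment to retry indefinitely instead
      <server>
        host primary.example.com
      </server>
    </match>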

Slicing Data by Time

Buffer plugins support a special mode that groups the incoming data by time frames. For example, you can group incoming access logs by date and save them to separate files. Under this mode, a buffer plugin behaves quite differently in a few key aspects (a configuration example follows the list):

  1. It supports the additional time_slice_* options (see Q1/Q2 below for details).

  2. Chunks will be flushed "lazily" based on the settings of the time_slice_format option. See Q2 for the relationship between this option and flush_interval.

  3. The default value of buffer_chunk_limit becomes 256MB.
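
For example, out_file runs in this mode. A sketch that slices access logs hourly and waits ten minutes for stragglers (the tag and path are placeholders):

    <match access.**>
      @type file
      path /var/log/fluent/access
      time_slice_format %Y%m%d%H   # one chunk per hour
      time_slice_wait 10m          # accept events up to 10 minutes late
    </match>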

FAQ

Q1. How do we specify the granularity of time chunks?

This is done through the time_slice_format option, which is set to "%Y%m%d" (daily) by default. If you want your chunks to be hourly, "%Y%m%d%H" will do the job.

Q2. What if new logs come in after the time corresponding to the current chunk?

For example, what happens if an event timestamped at 2013-01-01 02:59:45 UTC comes in at 2013-01-01 03:00:15 UTC? Would it make it into the 2013-01-01 02:00:00-02:59:59 chunk?

This issue is addressed by the time_slice_wait parameter. time_slice_wait sets, in seconds, how long Fluentd waits to accept "late" events into a chunk past the maximum time corresponding to that chunk. The default value is 600, which means Fluentd waits for 10 minutes before moving on. So, in the current example, as long as the event comes in before 2013-01-01 03:10:00, it will make it into the 2013-01-01 02:00:00-02:59:59 chunk.

Alternatively, you can also flush the chunks regularly using flush_interval. Note that flush_interval and time_slice_wait are mutually exclusive: if you set flush_interval, time_slice_wait will be ignored and Fluentd will issue a warning.

List of Buffer Plugins

Normally, the output plugin determines in which mode the buffer plugin operates. For example, out_s3 and out_file enable the time-slicing mode. For the list of output plugins which enable the time-slicing mode, see this page.

  • buf_memory

  • buf_file

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.