file

The out_file TimeSliced Output plugin writes events to files. By default, it creates files on a daily basis (around 00:10). This means that when you first import records using the plugin, no file is created immediately. The file will be created when the time_slice_format condition has been met. To change the output frequency, please modify the time_slice_format value.

Example Configuration

out_file is included in Fluentd's core. No additional installation process is required.

<match pattern>
  @type file
  path /var/log/fluent/myapp
  time_slice_format %Y%m%d
  time_slice_wait 10m
  time_format %Y%m%dT%H%M%S%z
  compress gzip
  utc
</match>

Please see the Config File article for the basic structure and syntax of the configuration file.

Parameters

type (required)

The value must be file.

path (required)

The path of the file. The actual path is path + time + ".log". The time portion is determined by the time_slice_format parameter, described below.

The path parameter is used as buffer_path in this plugin.

Initially, you may see a file which looks like "/path/to/file.20140101.log.b4eea2c8166b147a0". This is an intermediate buffer file ("b4eea2c8166b147a0" identifies the buffer). Once the content of the buffer has been completely flushed, you will see the output file without the trailing identifier.

append

Whether flushed chunks are appended to an existing file or not. The default is false. By default, out_file flushes each chunk to a different path.

# append false
log.20140608_0.log
log.20140608_1.log
log.20140609_0.log
log.20140609_1.log

This makes parallel file processing easy. To disable this behavior and write all chunks for a time slice into the same file, set append true.

# append true
log.20140608.log
log.20140609.log
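
For example, a minimal sketch of a match section that appends every chunk for a time slice to a single file (the pattern and path are illustrative):

<match pattern>
  @type file
  path /var/log/fluent/myapp
  append true
</match>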

format

The format of the file content. The default is out_file.

See the formatter article for more detail.

time_format

The format of the time written in files. The default format is ISO-8601.
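
For example, to write a more readable timestamp instead of the ISO-8601 default, you could set something like this (the format string is only an illustration):

time_format %Y-%m-%d %H:%M:%S %z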

utc

Uses UTC for path formatting. By default, local time is used.

compress

Compresses flushed files using gzip. No compression is performed by default.

symlink_path

Creates a symlink to the temporary buffered file when buffer_type is file. No symlink is created by default. This is useful for tailing the file content to check logs.
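
As a sketch, assuming a file buffer stored under /var/log/fluent (all paths here are illustrative):

<match pattern>
  @type file
  path /var/log/fluent/myapp
  buffer_type file
  buffer_path /var/log/fluent/myapp.buf
  symlink_path /var/log/fluent/myapp.current
</match>

You could then run tail -f /var/log/fluent/myapp.current to watch events as they are buffered.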

Time Sliced Output Parameters

For advanced usage, you can tune Fluentd's internal buffering mechanism with these parameters.

time_slice_format

The time format used as part of the file name. The following characters are replaced with actual values when the file is created:

  • %Y: year including the century (at least 4 digits)

  • %m: month of the year (01..12)

  • %d: day of the month (01..31)

  • %H: hour of the day, 24-hour clock (00..23)

  • %M: minute of the hour (00..59)

  • %S: second of the minute (00..60)

The default format is %Y%m%d, which creates one file per day.
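
For example, to create one file per hour instead, you could set:

time_slice_format %Y%m%d%H

With path /var/log/fluent/myapp, this produces files such as /var/log/fluent/myapp.2014060815_0.log.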

time_slice_wait

The amount of time Fluentd will wait for old logs to arrive. This is used to account for delays in logs arriving to your Fluentd node. The default wait time is 10 minutes ('10m'), where Fluentd will wait until 10 minutes past the hour for any logs that occurred within the past hour.

For example, when splitting files on an hourly basis, a log recorded at 1:59 but arriving at the Fluentd node between 2:00 and 2:10 will be uploaded together with all the other logs from 1:00 to 1:59 in one transaction, avoiding extra overhead. Larger values can be set as needed.
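
If logs routinely reach your Fluentd node more than ten minutes late, you could raise the wait accordingly (the value is illustrative):

time_slice_wait 30m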

buffer_type

The buffer type is file by default (buf_file). The memory (buf_memory) buffer type can also be chosen. If you use the file buffer type, the buffer_path parameter is required.

buffer_queue_limit, buffer_chunk_limit

The length of the chunk queue and the size of each chunk, respectively. Please see the Buffer Plugin Overview article for the basic buffer structure. The default values are 64 and 8m, respectively. The suffixes "k" (KB), "m" (MB), and "g" (GB) can be used for buffer_chunk_limit.

flush_interval

The interval between data flushes. The default is 60s. The suffixes "s" (seconds), "m" (minutes), and "h" (hours) can be used.
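
As an illustrative sketch combining these buffering parameters (the values are examples, not recommendations):

<match pattern>
  @type file
  path /var/log/fluent/myapp
  buffer_type memory
  buffer_chunk_limit 16m
  buffer_queue_limit 128
  flush_interval 10s
</match>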

flush_at_shutdown

If set to true, Fluentd waits for the buffer to flush at shutdown. By default, it is set to true for Memory Buffer and false for File Buffer.

retry_wait, max_retry_wait

The initial and maximum intervals between write retries. The default values are 1.0 and unset (no limit). The interval doubles (with +/-12.5% randomness) every retry until max_retry_wait is reached.

Since td-agent will retry 17 times before giving up by default (see the retry_limit parameter for details), the sleep interval can be up to approximately 131072 seconds (roughly 36 hours) in the default configurations.

retry_limit, disable_retry_limit

The limit on the number of retries before buffered data is discarded, and an option to disable that limit (if true, the value of retry_limit is ignored and there is no limit). The default values are 17 and false (not disabled). If the limit is reached, buffered data is discarded and the retry interval is reset to its initial value (retry_wait).
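
For example, to retry at most 10 times and cap the backoff interval at one hour (the values are illustrative):

retry_wait 1s
max_retry_wait 1h
retry_limit 10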

num_threads

The number of threads to flush the buffer. This option can be used to parallelize writes into the output(s) designated by the output plugin. The default is 1.
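
For example, to flush chunks with four parallel threads (the value is illustrative):

num_threads 4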

slow_flush_log_threshold

Same as Buffered Output, but the default value is changed to 40.0 seconds.

log_level option

The log_level option allows the user to set different levels of logging for each plugin. The supported log levels are: fatal, error, warn, info, debug, and trace.

Please see the logging article for further details.
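
As a sketch, using the @-prefixed form introduced in v0.12 (the plain log_level spelling also works; the pattern is illustrative):

<match pattern>
  @type file
  path /var/log/fluent/myapp
  @log_level debug
</match>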
