Fluentd 1.0 documentation: Output Plugins

s3

Last updated 3 years ago
The out_s3 output plugin writes records to the Amazon S3 cloud object storage service. By default, it creates files on an hourly basis, so when you first import records using the plugin, no file is created immediately.

The file is created when the timekey condition has been met. To change the output frequency, modify the timekey value in the <buffer> section. For more details, see the time chunk keys section of the buffer article.
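For example, a sketch of a <buffer> block that flushes every 10 minutes instead of hourly (the values here are illustrative, not recommendations):

```
<buffer time>
  timekey 10m       # write one object per 10-minute time slice
  timekey_wait 1m   # wait 1 minute for late-arriving events before flushing
</buffer>
```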

This document does not describe all the parameters. For the full feature set, check the Further Reading section.

Installation

out_s3 is included in td-agent by default. Fluentd gem users will need to install the fluent-plugin-s3 gem (fluent-gem install fluent-plugin-s3). See the Plugin Management article for details.

Example Configuration

<match pattern>
  @type s3

  aws_key_id YOUR_AWS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_KEY
  s3_bucket YOUR_S3_BUCKET_NAME
  s3_region ap-northeast-1
  path logs/
  # To use ${tag} or %Y/%m/%d-style placeholders in path / s3_object_key_format,
  # specify tag and time as chunk keys in the <buffer> argument.
  <buffer tag,time>
    @type file
    path /var/log/fluent/s3
    timekey 3600 # 1 hour partition
    timekey_wait 10m
    timekey_use_utc true # use utc
    chunk_limit_size 256m
  </buffer>
</match>

Parameters

@type (required)

The value must be s3.

aws_key_id

type: string, default: required/optional, version: 1.0.0

The AWS access key id. This parameter is required when your agent is not running on an EC2 instance with an IAM Instance Profile.

aws_sec_key

type: string, default: required/optional, version: 1.0.0

The AWS secret key. This parameter is required when your agent is not running on an EC2 instance with an IAM Instance Profile.

Credentials on AWS environment

The S3 plugin supports several methods of providing AWS credentials (for example, an IAM instance profile when running on EC2). See the fluent-plugin-s3 README for the full list.
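For example, on an EC2 instance with an IAM role attached, the key parameters can be omitted entirely. This is a sketch based on the fluent-plugin-s3 README, which also documents further credential directives such as <assume_role_credentials>:

```
<match pattern>
  @type s3

  # No aws_key_id / aws_sec_key here: credentials are resolved
  # automatically from the EC2 instance profile (IAM role).
  s3_bucket YOUR_S3_BUCKET_NAME
  s3_region ap-northeast-1
</match>
```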

s3_bucket

type: string, default: required, version: 1.0.0

The Amazon S3 bucket name.

buffer

The buffer of the S3 plugin. The default is the time-sliced buffer.

s3_region

type: string, default: ENV["AWS_REGION"] or us-east-1, version: 1.0.0

The Amazon S3 region name. Please select the appropriate region name and confirm that your bucket has been created in the correct region.

Here are some regions:

  • us-east-1

  • us-west-1

  • eu-central-1

  • ap-southeast-1

  • sa-east-1

<format> Directive

The format of the S3 object content. The default is the out_file formatter.

JSON example:

<format>
  @type json
</format>
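For comparison, here is the same event as rendered by the default out_file formatter and by the json formatter (illustrative values; by default, out_file emits the time, the tag, and the JSON record separated by tabs):

```
# out_file (default): time <TAB> tag <TAB> record
2014-11-11T10:00:00+00:00	s3.test	{"message":"hello"}

# json: the record only, one JSON object per line
{"message":"hello"}
```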

path

type: string, default: "", version: 1.0.0

The path prefix of the files on S3. The default is "" (no prefix).

The actual path on S3 will be: {path}{time_slice_format}_{sequential_index}.gz (see s3_object_key_format) by default.

s3_object_key_format

type: string, default: %{path}%{time_slice}_%{index}.%{file_extension}, version: 1.0.0

The format of the actual S3 object key. The following placeholders are interpolated into it (similar to Ruby's string interpolation):

  • path: the value of the path parameter above

  • time_slice: the time string as formatted by buffer configuration

  • index: the index for the given path. Incremented per buffer flush

  • file_extension: as determined by the store_as parameter.

For example, if:

  • s3_object_key_format: default

  • path: hello

  • time_slice: %Y%m%d

  • store_as: json

Then, hello20141111_0.json would be an example of an actual S3 object key.

This parameter is for advanced users; most users should NOT modify it. Also, always make sure that %{index} appears in any customized s3_object_key_format, otherwise multiple buffer flushes within the same time slice will throw an error.
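The interpolation above can be sketched in a few lines of Ruby. This is a hypothetical illustration of how the placeholders expand, not the plugin's actual implementation; the s3_key helper is invented for this example:

```ruby
# Expand the s3_object_key_format placeholders into an object key.
# Illustrative only; fluent-plugin-s3 does this internally.
def s3_key(format, path:, time_slice:, index:, file_extension:)
  { "path"           => path,
    "time_slice"     => time_slice,
    "index"          => index.to_s,
    "file_extension" => file_extension }
    .reduce(format) { |key, (name, value)| key.gsub("%{#{name}}", value) }
end

puts s3_key("%{path}%{time_slice}_%{index}.%{file_extension}",
            path: "hello", time_slice: "20141111",
            index: 0, file_extension: "json")
# => hello20141111_0.json
```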

store_as

type: string, default: "gzip", version: 1.0.0

The compression type. Besides the default gzip, the supported types are: lzo, json, txt.

proxy_uri

type: string, default: nil, version: 1.0.0

The proxy URL.

ssl_verify_peer

type: bool, default: true, version: 1.0.0

Verify the SSL certificate of the endpoint. If false, the endpoint's SSL certificate is ignored.

@log_level

The @log_level option allows the user to set a different log level for each plugin.

Supported levels: fatal, error, warn, info, debug, trace
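For example, a fragment that turns on verbose logging for this plugin only (the bucket name is a placeholder):

```
<match pattern>
  @type s3
  @log_level debug
  s3_bucket YOUR_S3_BUCKET_NAME
</match>
```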

Further Reading

This page does not describe all the possible configurations. For more details, refer to these articles:

  • fluent-plugin-s3 README
  • Plugin Management: how to install and manage plugins (out_s3 is included in td-agent by default; Fluentd gem users will need to install the fluent-plugin-s3 gem)
  • Store Apache Logs into Amazon S3: a real-world use case
  • Config File: the basic structure and syntax of the configuration file
  • Buffer Section Configuration: details of the <buffer> section (by default, this plugin uses the file buffer)
  • buffer: the buffer plugin overview
  • formatter: details of the <format> directive
  • logging: details of @log_level
  • the complete list of AWS regions

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.