elasticsearch
The out_elasticsearch Output plugin writes records into Elasticsearch. By default, it creates records using the bulk API, which performs multiple indexing operations in a single API call. This reduces overhead and can greatly increase indexing speed. It also means that when you first import records using the plugin, the records are not immediately pushed to Elasticsearch.
Records will be sent to Elasticsearch when the chunk_keys condition has been met. To change the output frequency, please specify time in chunk_keys and set a timekey value in the buffer configuration.
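For example, with the time chunk key and a short timekey (a minimal sketch; the values below are illustrative), chunks are flushed roughly every 10 seconds:

```
<buffer time>
  timekey 10s        # chunk events into 10-second time slices
  timekey_wait 1s    # flush shortly after each slice closes
</buffer>
```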
This document does not describe all the parameters. For details, refer to the Further Reading section.
Since out_elasticsearch has been included in the standard distribution of td-agent since v3.0.1, td-agent users do not need to install it manually.
If you have installed Fluentd without td-agent, please install this plugin using fluent-gem:
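```
$ fluent-gem install fluent-plugin-elasticsearch
```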
Here is a simple working configuration which should serve as a good starting point for most users:
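```
# Minimal example; the match pattern "my.logs" is illustrative
<match my.logs>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```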
@type (required)
This option must always be set to elasticsearch.
host (optional)
The hostname of your Elasticsearch node (default: localhost).
port (optional)
The port number of your Elasticsearch node (default: 9200).
hosts (optional)
If you want to connect to more than one Elasticsearch node, specify this option in the following format:
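```
# Host names and ports are illustrative
hosts host1:port1,host2:port2,host3:port3
```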
If you use this option, the host and port options are ignored.
user, password (optional)
The login credentials to connect to the Elasticsearch node (default: nil):
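```
# Illustrative credentials
user fluent
password mysecret
```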
scheme (optional)
Specify https if your Elasticsearch endpoint supports SSL (default: http).
path (optional)
The REST API endpoint of Elasticsearch to post write requests (default: nil).
index_name (optional)
The index name to write events to (default: fluentd).
This option supports the placeholder syntax of Fluentd plugin API. For example, if you want to partition the index by tags, you can specify it like this:
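```
# Write events to an index named after the event tag, e.g. "fluentd.my.logs"
index_name fluentd.${tag}
```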
Here is a more practical example which partitions the Elasticsearch index by tags and timestamps:
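```
# Partition indices by tag and date, e.g. "fluentd.my.logs.20240101"
index_name fluentd.${tag}.%Y%m%d
```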
The time placeholders require tag and time in chunk_keys. You also need to specify timekey for the time slice of the chunk:
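```
<buffer tag, time>
  timekey 1h # chunks per hour ("3600" is also allowed)
</buffer>
```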
logstash_format (optional)
If true, Fluentd uses the conventional index name format logstash-%Y.%m.%d (default: false). This option supersedes the index_name option.
@log_level option
The @log_level option allows the user to set different levels of logging for each plugin.
Supported log levels: fatal, error, warn, info, debug, trace.
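For example (a minimal sketch; the match pattern is illustrative):

```
<match my.logs>
  @type elasticsearch
  @log_level debug   # log at debug level for this plugin instance only
</match>
```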
logstash_prefix (optional)
The logstash prefix index name to write events when logstash_format is true (default: logstash).
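For example (a minimal sketch; the prefix value is illustrative):

```
# With logstash_format true, events go to indices such as "myapp-2024.01.01"
logstash_prefix myapp
```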
You can use %{} style placeholders to escape characters that need URL encoding.
Valid configuration:
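```
# Illustrative: characters that need URL encoding are wrapped in %{}
user %{demo+}
password %{@secret}
```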
Valid configuration:
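```
# Illustrative: credentials embedded in the URL are wrapped in %{} so they are URL-encoded
hosts https://%{j+hn}:%{passw@rd}@host1:443/elastic/,http://host2
```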
Invalid configuration:
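```
# Illustrative: special characters are left bare, so they are not URL-encoded
user demo+
password @secret
```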
For common output / buffer parameters, please check the following articles:
- Output Plugin Overview
- Buffer Section Configuration
For more details on each option and on buffering behavior, see the articles above.