elasticsearch
The out_elasticsearch Output plugin writes records into Elasticsearch. By default, it creates records using the bulk API, which performs multiple indexing operations in a single API call. This reduces overhead and can greatly increase indexing speed. It also means that when you first import records using the plugin, records are not immediately pushed to Elasticsearch.
Records will be sent to Elasticsearch when the chunk_keys condition has been met. To change the output frequency, please specify time in chunk_keys and set a timekey value in the configuration.
This document does not describe all the parameters. For details, refer to the Further Reading section.
out_elasticsearch has been included in the standard distribution of td-agent since v3.0.1, so td-agent users do not need to install it manually.
If you have installed Fluentd without td-agent, please install this plugin using fluent-gem:
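The plugin is distributed as the fluent-plugin-elasticsearch gem:

```
$ fluent-gem install fluent-plugin-elasticsearch
```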
Here is a simple working configuration which should serve as a good starting point for most users:
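A minimal sketch, assuming your events are tagged my.logs (a placeholder tag pattern) and Elasticsearch is running locally on the default port:

```
<match my.logs>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```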
For more details on each option, read the section on Parameters.
@type (required)
This option must always be elasticsearch.
host (optional)
The hostname of your Elasticsearch node (default: localhost).
port (optional)
The port number of your Elasticsearch node (default: 9200).
hosts (optional)
If you want to connect to more than one Elasticsearch node, specify this option in the following format:
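For example (the host names and ports below are placeholders):

```
hosts host1:port1,host2:port2,host3:port3
```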
If you use this option, the host and port options are ignored.
user, password (optional)
The login credentials to connect to the Elasticsearch node (default: nil):
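For example (demo and secret are placeholder credentials):

```
user demo
password secret
```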
scheme (optional)
Specify https if your Elasticsearch endpoint supports SSL (default: http).
path (optional)
The REST API endpoint of Elasticsearch to post write requests to (default: nil).
index_name (optional)
The index name to write events to (default: fluentd).
This option supports the placeholder syntax of the Fluentd plugin API. For example, if you want to partition the index by tags, you can specify it like this:
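A sketch that appends the event tag to the index name via the ${tag} placeholder:

```
index_name fluentd.${tag}
```

With this setting, an event tagged debug.log would be written to the index fluentd.debug.log.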
Here is a more practical example which partitions the Elasticsearch index by tags and timestamps:
The time placeholder requires both tag and time to be set in chunk_keys. It also requires timekey to define the time slice of each chunk:
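A sketch combining tag and time placeholders; the buffer section declares both chunk keys and an hourly timekey (the 1h value is an example):

```
index_name fluentd.${tag}.%Y%m%d
<buffer tag, time>
  timekey 1h
</buffer>
```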
For more information about buffer options, check out the Buffer Section Configuration.
logstash_format (optional)
If true, Fluentd uses the conventional index name format logstash-%Y.%m.%d (default: false). This option supersedes the index_name option.
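For example, to write events into daily logstash-YYYY.MM.DD indices:

```
logstash_format true # defaults to false
```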
@log_level option
The @log_level option allows the user to set a different level of logging for each plugin.
Supported log levels: fatal, error, warn, info, debug, trace.
Please see the logging article for further details.
logstash_prefix (optional)
The logstash prefix for the index name to write events to when logstash_format is true (default: logstash).
You can use %{}-style placeholders to escape characters that need URL encoding.
Valid configuration:
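For example, a value containing a + can be escaped with %{} (demo+ is a placeholder value):

```
user %{demo+}
```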
Valid configuration:
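Likewise, a value containing @ can be escaped (a placeholder value):

```
password %{@secret}
```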
Invalid configuration:
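Leaving such characters unescaped is invalid (same placeholder values as above):

```
user demo+
password @secret
```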
For common output / buffer parameters, please check the following articles:
Please refer to Elasticsearch's troubleshooting section.
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.