Stream Processing with Kinesis
Fluentd + Kinesis
Amazon Kinesis is a platform for streaming data on AWS. It offers powerful services that make it easy to load and analyze streaming data, and it also lets you build custom streaming data applications for specialized needs.
In this example, Fluentd does three things:

1. It continuously "tails" the access log file.
2. It parses the incoming log entries into meaningful fields (such as path, etc.) and buffers them.
3. It writes the buffered data to Amazon Kinesis periodically.
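The three steps above can be sketched in a few lines of Python. This is only an illustration of the flow (the regex, function names, and sample log line are all hypothetical, not Fluentd's actual implementation):

```python
import re

# A simplified pattern for Apache's common log format (illustrative).
APACHE_COMMON = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<code>\d+) (?P<size>\S+)'
)

buffer = []

def on_log_line(line):
    """Step 2: parse the line into meaningful fields and buffer it."""
    m = APACHE_COMMON.match(line)
    if m:
        buffer.append(m.groupdict())

def flush():
    """Step 3: in Fluentd, the buffered chunk is written to Kinesis."""
    chunk, buffer[:] = list(buffer), []
    return chunk

# Step 1 would be a loop tailing /var/log/apache2/access_log;
# here we feed one sample line by hand.
on_log_line('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
            '"GET /index.html HTTP/1.0" 200 2326')
print(flush()[0]["path"])  # /index.html
```

Fluentd performs all of this (plus retries, position tracking, and batching) for you; the sketch only shows where each step fits.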
For simplicity, this article will describe how to set up a one-node configuration. Please install Fluentd and the Amazon Kinesis plugin on the same node.
You can install Fluentd via major packaging systems. Since the Amazon Kinesis plugin is not bundled with the td-agent package, please install it manually:
$ sudo /usr/sbin/td-agent-gem install fluent-plugin-kinesis
Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at
/etc/td-agent/td-agent.conf. Otherwise, its location depends on your installation.
For the input source, we will set up Fluentd to track the recent Apache logs (typically found at
/var/log/apache2/access_log). The Fluentd configuration file should look like this:
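The configuration block itself is not reproduced here; based on the parameters explained below, a minimal source section would look like the following (the pos_file path is an assumption; adjust it for your system):

```
<source>
  @type tail
  path /var/log/apache2/access_log
  pos_file /var/log/td-agent/apache2.access_log.pos
  tag kinesis.apache.access
  <parse>
    @type apache2
  </parse>
</source>
```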
Let's go through the configuration line by line:
- @type tail: The tail input plugin continuously tracks the log file. This handy plugin is included in Fluentd's core.
- <parse>: Uses Fluentd's built-in Apache log parser.
- path /var/log/apache2/access_log: The location of the Apache log. This may be different for your particular system.
- tag kinesis.apache.access: kinesis.apache.access is used as the tag to route the messages within Fluentd.
That's it! You should now be able to output a JSON-formatted data stream for Fluentd to process.
The output destination will be Amazon Kinesis. The output configuration should look like the following (the stream name and credentials are placeholders, and the exact parameter names can vary with the plugin version):

<match **>
  # plugin type
  @type kinesis_streams
  # your kinesis stream name
  stream_name YOUR_STREAM_NAME
  # AWS credentials
  aws_key_id YOUR_AWS_ACCESS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_ACCESS_KEY
  # AWS region
  region us-east-1
  # Use random value for the partition key
  random_partition_key true
  <buffer>
    # Frequency of ingestion
    flush_interval 5s
    # Parallelism of ingestion
    flush_thread_count 8
  </buffer>
</match>
The match section specifies the pattern used to look for matching tags. If a matching tag is found in a log, then the config inside
<match>...</match> is used (i.e. the log is routed according to the config inside). In this example, the
kinesis.apache.access tag (generated by
tail) is always used. In a match pattern,
** matches zero or more period-delimited tag parts.
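These matching semantics can be illustrated with a small sketch that translates a match pattern into a regular expression. This is not Fluentd's real matcher, just a demonstration of the rules described above:

```python
import re

def to_regex(pattern):
    """Illustrative sketch of Fluentd-style match patterns (not the
    real implementation): translate a pattern into a regex."""
    regex = re.escape(pattern)
    # ".**" can match zero parts, so it swallows its leading dot
    regex = regex.replace(r'\.\*\*', r'(?:\..+)?')
    # a bare "**" matches any tag, including the empty one
    regex = regex.replace(r'\*\*', r'.*')
    # "*" matches exactly one period-delimited part
    regex = regex.replace(r'\*', r'[^.]+')
    return re.compile('^' + regex + '$')

print(bool(to_regex('**').match('kinesis.apache.access')))         # True
print(bool(to_regex('kinesis.**').match('kinesis')))               # True
print(bool(to_regex('kinesis.*').match('kinesis.apache.access')))  # False
```

Note how kinesis.** matches the bare tag kinesis as well, since ** may match zero parts, while kinesis.* requires exactly one additional part.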
The flush_interval parameter specifies how often the data is written to Kinesis.
The random_partition_key true option will generate the partition key via UUID. A Kinesis stream consists of
shards, and the processing power of each shard is limited. This partition key is used by Kinesis to determine which shard has been designated for a specific record.
For those who are interested in security: all communication between Fluentd and Amazon Kinesis is done via HTTPS. If you do not want to keep AWS keys in the configuration file, IAM role-based authentication is also available for EC2 nodes.
Finally, restart td-agent to make sure that the configuration change takes effect:
$ sudo /etc/init.d/td-agent restart
$ sudo systemctl restart td-agent.service
To test the configuration, just send a couple of requests to your Apache server. This example uses the
ab (Apache Bench) program:
$ ab -n 100 -c 10 http://localhost/
Fluentd with Amazon Kinesis makes real-time log collection simple, easy, and robust.