Stream Processing with Kinesis

This article explains how to use Fluentd's Amazon Kinesis Output plugin (out_kinesis) to aggregate semi-structured logs in real time. The Kinesis plugin is developed and published officially by Amazon Web Services.

Background

Fluentd is an advanced open-source log collector originally developed at Treasure Data, Inc. Because Fluentd can collect logs from various sources, Amazon Kinesis is one of the popular output destinations.

Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to make it easy to load and analyze streaming data, and also providing the ability for you to build custom streaming data applications for specialized needs.

This article will show you how to use Fluentd to import Apache logs into Amazon Kinesis.

Mechanism

In this example, Fluentd does three (3) things:

  1. It continuously "tails" the access log file.

  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.

  3. It writes the buffered data to Amazon Kinesis periodically.

Install

For simplicity, this article will describe how to set up a one-node configuration. Please install the following software on the same node:

  • Apache (with the Combined Log Format)

You can install Fluentd via major packaging systems.
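
For example, assuming a Debian/Ubuntu host where the Treasure Data (td-agent) apt repository is already configured, the agent can be installed with:

$ sudo apt-get update
$ sudo apt-get install -y td-agent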

Install Kinesis Plugin

Since the Amazon Kinesis plugin is not bundled with the td-agent package, please install it manually:

$ sudo /usr/sbin/td-agent-gem install fluent-plugin-kinesis
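
To confirm that the plugin is visible to td-agent's embedded Ruby (the exact version will vary), list it with:

$ sudo /usr/sbin/td-agent-gem list fluent-plugin-kinesis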

Configuration

Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at /etc/td-agent/td-agent.conf. Otherwise, it is located at /etc/fluentd/fluentd.conf.

Tail Input

For the input source, we will set up Fluentd to track the recent Apache logs (typically found at /var/log/apache2/access_log). The Fluentd configuration file should look like this:

<source>
  @type tail
  path /var/log/apache2/access_log
  pos_file /var/log/td-agent/apache2.access_log.pos
  <parse>
    @type apache2
  </parse>
  tag kinesis.apache.access
</source>

Please make sure that your Apache log output is in the default combined format; the apache2 parser (@type apache2) cannot parse custom log formats. Please see the in_tail article for more information.

Let's go through the configuration line by line:

  1. @type tail: The tail Input plugin continuously tracks the log file. This handy plugin is included in Fluentd's core.

  2. @type apache2 in <parse>: Uses Fluentd's built-in Apache log parser.

  3. path /var/log/apache2/access_log: The location of the Apache log. This may be different for your particular system.

  4. tag kinesis.apache.access: kinesis.apache.access is used as the tag to route the messages within Fluentd.

That's it! Fluentd will now parse each access log line into a JSON-formatted event and route it with the kinesis.apache.access tag.
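
For illustration only (the address, path, and sizes below are made up), a combined-format line like this:

192.0.2.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "curl/7.68.0"

is parsed by the apache2 parser into an event with roughly the following fields (the timestamp becomes the event time rather than a record field):

{"host":"192.0.2.1","user":null,"method":"GET","path":"/index.html","code":200,"size":2326,"referer":null,"agent":"curl/7.68.0"}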

Amazon Kinesis Output

The output destination will be Amazon Kinesis. The output configuration should look like this:

<match **>
  # plugin type
  @type kinesis_streams

  # your kinesis stream name
  stream_name <KINESIS_STREAM_NAME>

  # AWS credentials
  aws_key_id <AWS_KEY_ID>
  aws_sec_key <AWS_SECRET_KEY>

  # AWS region
  region us-east-1

  # Use random value for the partition key
  random_partition_key true

  <buffer>
    # Frequency of ingestion
    flush_interval 5s
    # Parallelism of ingestion
    flush_thread_count 8
  </buffer>
</match>

The match section specifies the glob pattern used to look for matching tags. If a matching tag is found in a log, then the config inside <match>...</match> is used (i.e. the log is routed according to the config inside). In this example, events carrying the kinesis.apache.access tag (generated by tail) always match.

A ** in a match pattern matches zero or more period-delimited tag parts; for example, match.** matches match, match.a, and match.a.b.
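
If you prefer a narrower pattern than **, you could match only the tags produced by the tail input above, for example:

<match kinesis.apache.*>
  # same kinesis_streams configuration as above
</match>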

The flush_interval parameter specifies how often the data is written to Kinesis.

The random_partition_key true option generates the partition key via UUID v3 (source). A Kinesis stream consists of shards, and the processing power of each shard is limited. The partition key is used by Kinesis to determine which shard a given record is assigned to.

For additional configurations, see Kinesis Output plugin.

For those who are interested in security, all communication between Fluentd and Amazon Kinesis is done via HTTPS. If you do not want to put AWS keys in the configuration file, IAM role-based authentication is also available for EC2 nodes.
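
As a sketch of the IAM role approach (assuming the EC2 instance has an instance profile that allows writing to the stream), simply omit the key parameters and let the AWS SDK resolve credentials from the instance profile:

<match **>
  @type kinesis_streams
  stream_name <KINESIS_STREAM_NAME>
  region us-east-1
  # aws_key_id / aws_sec_key omitted on purpose: the AWS SDK falls back to
  # the EC2 instance profile (IAM role) credentials
</match>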

Test

Restart td-agent to make sure that the configuration change is available:

# init
$ sudo /etc/init.d/td-agent restart
# systemd
$ sudo systemctl restart td-agent.service
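
If you installed the deb/rpm package, td-agent writes its own logs to /var/log/td-agent/td-agent.log; tailing that file is a quick way to confirm that the new configuration loaded without errors:

$ tail -f /var/log/td-agent/td-agent.log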

To test the configuration, just generate a few requests to your Apache server. This example uses the ab (Apache Bench) program:

$ ab -n 100 -c 10 http://localhost/
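
If everything is wired up, the records should appear in the Kinesis stream within a few flush intervals. One way to spot-check this, assuming the AWS CLI is configured and the stream has a shard named shardId-000000000000, is to read a shard directly:

$ ITER=$(aws kinesis get-shard-iterator --stream-name <KINESIS_STREAM_NAME> \
    --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON \
    --query 'ShardIterator' --output text)
$ aws kinesis get-records --shard-iterator "$ITER"

The Data field in each returned record is Base64-encoded; decoding it should reveal the JSON events emitted by Fluentd.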

FAQs

Why do we need Fluentd when Kinesis also offers client libraries?

A lot of people use Fluentd + Kinesis simply because they want more choices for inputs and outputs. For inputs, Fluentd has many more community-contributed plugins and libraries. For outputs, you can send data not only to Kinesis, but also to multiple other destinations such as Amazon S3, local file storage, etc.

Conclusion

Fluentd with Amazon Kinesis makes real-time log collection simple, easy, and robust.

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.
