Data Collection with Hadoop (HDFS)
This article explains how to use Fluentd's WebHDFS Output plugin to aggregate semi-structured logs into Hadoop HDFS.
Fluentd is an advanced open-source log collector originally developed at Treasure Data, Inc. Fluentd is specifically designed to solve the big-data log collection problem. Many users run Fluentd with MongoDB, but have found that MongoDB does not scale well as data volumes grow.
HDFS (Hadoop) is a natural alternative for storing and processing a huge amount of data. It supports an HTTP interface called WebHDFS in addition to its Java library.
This article will show you how to use Fluentd to receive data from HTTP and stream it into HDFS.
The figure below shows the high-level architecture:
For simplicity, this article will describe how to set up a one-node configuration. Please install the following software on the same node:
Fluentd
Apache HDFS
The WebHDFS Output plugin is included in the latest version of Fluentd's deb/rpm package (v1.1.10 or later). If you want to use RubyGems to install the plugin, please run gem install fluent-plugin-webhdfs.
For CDH, please refer to the downloads page.
Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at /etc/td-agent/td-agent.conf. Otherwise, it is located at /etc/fluentd/fluentd.conf.
For the input source, we will set up Fluentd to accept records from HTTP. The Fluentd configuration file should look like this:
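A minimal HTTP input section can be sketched as follows (the port number 8888 is an example; adjust it for your environment):

```
<source>
  @type http
  port 8888
</source>
```

With this source in place, any HTTP POST to Fluentd is tagged with the request path and routed to the matching output section.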
The output destination will be WebHDFS. The output configuration should look like this:
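A sketch of the WebHDFS match section, assuming a NameNode reachable at namenode.your.cluster.local on port 50070 and an illustrative tag pattern hdfs.*.* (replace the host, port, path, and match pattern with your own values):

```
<match hdfs.*.*>
  @type webhdfs
  host namenode.your.cluster.local
  port 50070
  path /log/access.%Y%m%d_%H.${hostname}.log
  flush_interval 10s
</match>
```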
The <match> section specifies the regexp used to look for matching tags. If a tag in a log is matched, the respective match configuration is used (i.e. the log is routed accordingly).
The flush_interval parameter specifies how often the data is written to HDFS. An append operation is used to append the incoming data to the file specified by the path parameter.
Placeholders for both time and hostname can be used with the path parameter. This prevents multiple Fluentd instances from appending data to the same file, which must be avoided for append operations.
Other options specify HDFS's NameNode host and port.
Append operations are not enabled by default. Please add the following configuration to your hdfs-site.xml file and restart the whole cluster:
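The required properties can be sketched as below (dfs.support.broken.append is needed on some CDH versions; please verify the exact property names against your Hadoop distribution's documentation):

```
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
</property>
```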
Please confirm that the HDFS user has write access to the path specified as the WebHDFS output.
To test the configuration, just post the JSON to Fluentd (we use the curl command in this example). Sending a USR1 signal flushes Fluentd's buffer into WebHDFS:
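For example, assuming the HTTP input listens on port 8888 and the output matches the tag pattern hdfs.*.* (the tag hdfs.access.test and the JSON payload below are illustrative; /var/run/td-agent/td-agent.pid is the default pid file for the deb/rpm package):

```
curl -X POST -d 'json={"action":"login","user":2}' \
  http://localhost:8888/hdfs.access.test
kill -USR1 `cat /var/run/td-agent/td-agent.pid`
```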
We can then access HDFS to see the stored data:
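For example, using the HDFS command-line client (the /log/ directory below assumes the path parameter shown earlier; substitute one of the listed files into the cat command):

```
hadoop fs -ls /log/
hadoop fs -cat /log/<one-of-the-listed-files>
```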
Fluentd with WebHDFS makes real-time log collection simple, robust, and scalable! @tagomoris has been using this plugin to collect 20,000 msgs/sec (1.5 TB/day) without any major problems for several months now.
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.