Send Apache Logs to S3
Fluentd does three things:

1. It continuously "tails" the access log file.
2. It parses the incoming log entries into meaningful fields (such as the request `path`) and buffers them.
3. It writes the buffered data to Amazon S3 periodically.
For simplicity, this article describes a one-node setup. You will need the following on that node:

- An Amazon Web Services account
- Apache (logging in the Combined Log Format)
The Amazon S3 output plugin is included in the latest version of Fluentd's deb/rpm package. If you want to use RubyGems to install the plugin, run `gem install fluent-plugin-s3`.
Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at `/etc/td-agent/td-agent.conf`. Otherwise, it is located at the path you specified with the `-c` option when launching Fluentd.
For the input source, we will set up Fluentd to track the recent Apache logs (typically found at `/var/log/apache2/access_log`). The Fluentd configuration file should look like this:
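A minimal `<source>` section for this setup might look like the following sketch, assuming the standard `tail` input plugin and the built-in `apache2` parser (the `pos_file` location is an illustrative choice):

```
<source>
  @type tail
  path /var/log/apache2/access_log
  # pos_file records how far the file has been read (path is illustrative)
  pos_file /var/log/td-agent/apache2.access_log.pos
  tag s3.apache.access
  <parse>
    @type apache2
  </parse>
</source>
```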
Let's go through the configuration line by line:
- `@type tail`: The `tail` input plugin continuously tracks the log file. This handy plugin is included in Fluentd's core.
- `<parse>`: Uses Fluentd's built-in Apache log parser.
- `path /var/log/apache2/access_log`: The location of the Apache log. This may be different on your particular system.
- `tag s3.apache.access`: `s3.apache.access` is used as the tag to route the messages within Fluentd.
That's it! You should now be able to output a JSON-formatted data stream for Fluentd to process.
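To make the JSON-formatted stream concrete, here is a short Python sketch (not part of the Fluentd setup itself) that parses one Combined Log Format line with a regex approximating what the `apache2` parser does; the sample log line and exact field names are illustrative:

```python
import json
import re

# Regex approximating the Combined Log Format parse performed by
# Fluentd's built-in apache2 parser (field names mirror its output keys).
APACHE_COMBINED = re.compile(
    r'^(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<code>\d+) (?P<size>\d+|-) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"$'
)

# An illustrative Combined Log Format entry.
line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
        '"http://example.com/start.html" "Mozilla/4.08"')

# Each matched group becomes one field of the JSON record.
record = APACHE_COMBINED.match(line).groupdict()
print(json.dumps(record))
```

Each access-log line becomes one structured record, which is what Fluentd buffers and eventually ships to S3.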
The output destination will be Amazon S3. The output configuration should look like this, with the buffer's `timekey` set to `3600` (1 hour):
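A minimal `<match>` section along these lines might look like the following sketch (the credential, bucket, and region values are placeholders to replace with your own, and the file-buffer path is illustrative):

```
<match s3.apache.access>
  @type s3
  aws_key_id YOUR_AWS_KEY_ID        # placeholder
  aws_sec_key YOUR_AWS_SECRET_KEY   # placeholder
  s3_bucket YOUR_S3_BUCKET_NAME     # placeholder
  s3_region us-east-1               # use your bucket's region
  path logs/
  <buffer time>
    @type file
    path /var/log/td-agent/s3       # illustrative buffer path
    timekey 3600                    # 1 hour
    timekey_wait 10m
  </buffer>
</match>
```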
The match section specifies the pattern used to look for matching tags. If a log's tag matches the pattern, then the config inside `<match>...</match>` is used (i.e. the log is routed according to that config). In this example, logs carrying the `s3.apache.access` tag (generated by `tail`) always match this section.
To test the configuration, just generate some requests against the Apache server. This example uses the `ab` (Apache Bench) program:

$ ab -n 100 -c 10 http://localhost/
WARNING: By default, files are created on an hourly basis (around xx:10). This means that when you first import records using the plugin, no file is created immediately. The file will be created once the `timekey` condition has been met. To change the output frequency, please modify the `timekey` value. For example, to write objects every minute, use `timekey 60` with a smaller `timekey_wait`.
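As a sketch, a buffer section that flushes roughly every minute might look like this (the buffer path and wait time are illustrative values):

```
<buffer time>
  @type file
  path /var/log/td-agent/s3   # illustrative buffer path
  timekey 60                  # write a chunk per minute
  timekey_wait 10s            # wait briefly for late-arriving events
</buffer>
```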
Fluentd + Amazon S3 makes real-time log archiving simple.