Send Apache Logs to S3
This article explains how to use Fluentd's Amazon S3 Output plugin (out_s3) to aggregate semi-structured logs in real time.
Fluentd is an advanced open-source log collector originally developed at Treasure Data, Inc. One of the main objectives of log aggregation is data archiving. Amazon S3, the cloud object storage service provided by Amazon, is a popular solution for data archiving.
This article will show you how to use Fluentd to import Apache logs into Amazon S3.
Fluentd does three (3) things:
It continuously "tails" the access log file.
It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.
It writes the buffered data to Amazon S3 periodically.
The following software/services are required to be set up correctly:
Fluentd
Your Amazon Web Services Account
Apache (with the Combined Log Format)
For simplicity, this article will describe how to set up a one-node configuration. Please install the above prerequisite software on the same node.
You can install Fluentd via major packaging systems.
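For example, a minimal sketch for a Debian/Ubuntu host, assuming the official fluent-package apt repository has already been added (the package name and installation method may differ for your distribution and Fluentd version):

```
sudo apt-get update
sudo apt-get install -y fluent-package
```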
Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at /etc/fluent/fluentd.conf.
For the input source, we will set up Fluentd to track the recent Apache logs (typically found at /var/log/apache2/access_log). The Fluentd configuration file should look like this:
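Here is a minimal sketch of that source section; the pos_file location is an assumption, and all paths should be adjusted for your system:

```
<source>
  @type tail
  # Path to the Apache access log being tailed
  path /var/log/apache2/access_log
  # Records the last-read position so restarts do not re-read the whole file
  pos_file /var/log/fluent/apache2.access_log.pos
  <parse>
    # Built-in parser for the Apache combined log format
    @type apache2
  </parse>
  # Tag used to route these events to the S3 output below
  tag s3.apache.access
</source>
```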
Let's go through the configuration line by line:
@type tail: The tail Input plugin continuously tracks the log file. This handy plugin is included in Fluentd's core.
@type apache2 in <parse>: Uses Fluentd's built-in Apache log parser.
path /var/log/apache2/access_log: The location of the Apache log. This may be different for your particular system.
tag s3.apache.access: s3.apache.access is used as the tag to route the messages within Fluentd.
That's it! You should now be able to output a JSON-formatted data stream for Fluentd to process.
The output destination will be Amazon S3. The output configuration should look like this:
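A sketch of the match section, assuming fluent-plugin-s3 is installed; the credentials, region, bucket name, and path below are placeholders to replace with your own values:

```
<match s3.*.*>
  @type s3

  # Placeholder credentials and bucket settings
  aws_key_id YOUR_AWS_KEY_ID
  aws_sec_key YOUR_AWS_SECRET_KEY
  s3_bucket YOUR_S3_BUCKET_NAME
  s3_region YOUR_S3_REGION
  path logs/

  <buffer time>
    # Flush one object per hour (see the timekey note below)
    timekey 3600
    timekey_wait 10m
    chunk_limit_size 256m
  </buffer>
</match>
```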
The match section specifies the pattern used to look for matching tags. If a matching tag is found in a log, then the config inside <match>...</match> is used (i.e. the log is routed according to the config inside). In this example, the s3.apache.access tag (generated by tail) is always used.
To test the configuration, just ping the Apache server. This example uses the ab (Apache Bench) program:
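For example, assuming Apache is serving on localhost (the request count and concurrency here are arbitrary):

```
ab -n 100 -c 10 http://localhost/
```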
WARNING: By default, files are created on an hourly basis (around xx:10). This means that when you first import records using the plugin, no file is created immediately. The file will be created when the timekey condition has been met. To change the output frequency, please modify the timekey value. To write objects every minute, use timekey 60 with a smaller timekey_wait such as timekey_wait 10s, as in the sketch below.
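As a sketch, the buffer section for per-minute output would look roughly like this:

```
<buffer time>
  # Write one object per minute instead of per hour
  timekey 60
  # Wait only 10 seconds for late-arriving records before flushing
  timekey_wait 10s
</buffer>
```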
Fluentd + Amazon S3 makes real-time log archiving simple.
If fluent-plugin-s3 is not installed yet, please install it manually. See the plugin management section of the documentation for how to install fluent-plugin-s3 in your environment.
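For example, on installations that ship the fluent-gem wrapper (the exact command may differ depending on how Fluentd was packaged):

```
fluent-gem install fluent-plugin-s3
```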
Please make sure that your Apache output is in the default combined format. The apache2 parser cannot parse custom log formats. Please see the in_tail article for more information.
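For reference, the combined format is typically defined in the Apache configuration like this (check your own httpd.conf or apache2.conf for the exact directives in use):

```
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/apache2/access_log combined
```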
Then, log into your AWS Management Console and look at your bucket.
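If you have the AWS CLI configured, you can also list the uploaded objects from the command line (the bucket name and path are placeholders):

```
aws s3 ls s3://YOUR_S3_BUCKET_NAME/logs/
```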