Store Apache Logs into MongoDB


Last updated 5 years ago


This article explains how to use Fluentd's MongoDB Output plugin (out_mongo) to aggregate semi-structured logs in real time.

Background

Fluentd is an advanced open-source log collector originally developed at Treasure Data, Inc. Because Fluentd handles logs as semi-structured data streams, the ideal database should have strong support for semi-structured data. Several candidates meet this criterion, but we believe MongoDB is the market leader.

MongoDB is an open-source, document-oriented database developed at 10gen, Inc. It is schema-free and uses a JSON-like format to manage semi-structured data.

This article will show you how to use Fluentd to import Apache logs into MongoDB.

Mechanism

The figure below shows how things will work.

Fluentd does 3 things:

  1. It continuously "tails" the access log file.

  2. It parses the incoming log entries into meaningful fields (such as ip, path, etc.) and buffers them.

  3. It writes the buffered data to MongoDB periodically.
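For intuition, the three steps above can be sketched in a few lines of Python. This is not how Fluentd is implemented; the regex stands in for the real Apache parser, and the print call stands in for the MongoDB writer:

```python
import re
import time

# Simplified Combined Log Format: host, user, time, request, code, size.
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<code>\d+) (?P<size>\S+)'
)

def parse_line(line):
    """Step 2: parse a raw log line into meaningful fields (or None)."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

def tail_parse_flush(path, flush_interval=10):
    buffer = []
    last_flush = time.time()
    with open(path) as f:
        f.seek(0, 2)                      # start at end of file, like `tail -f`
        while True:
            line = f.readline()
            if line:
                record = parse_line(line)
                if record:
                    buffer.append(record)  # step 2: buffer the parsed record
            else:
                time.sleep(0.5)            # step 1: wait for new lines
            if time.time() - last_flush >= flush_interval:
                # step 3: periodically write the buffered data out
                print(f"flushing {len(buffer)} records")
                buffer.clear()
                last_flush = time.time()
```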

Install

For simplicity, this article will describe how to set up a one-node configuration. Please install the following software on the same node.

  • Fluentd

  • MongoDB Output Plugin

  • MongoDB

  • Apache (with the Combined Log Format)

The MongoDB Output plugin is included in the latest version of Fluentd's deb/rpm package. If you want to use Ruby Gems to install the plugin, please use gem install fluent-plugin-mongo.

For MongoDB, please refer to the MongoDB downloads page.

Configuration

Let's start configuring Fluentd. If you used the deb/rpm package, Fluentd's config file is located at /etc/td-agent/td-agent.conf. Otherwise, it is located at /etc/fluentd/fluentd.conf.

Tail Input

For the input source, we will set up Fluentd to track the recent Apache logs (typically found at /var/log/apache2/access_log). The Fluentd configuration file should look like this:

<source>
  @type tail
  format apache2
  path /var/log/apache2/access_log
  pos_file /var/log/td-agent/apache2.access_log.pos
  tag mongo.apache.access
</source>

Please make sure that your Apache logs are written in the default 'combined' format; `format apache2` cannot parse custom log formats. Please see the in_tail article for more information.

Let's go through the configuration line by line.

  1. @type tail: The tail Input plugin continuously tracks the log file. This handy plugin is included in Fluentd's core.

  2. format apache2: Uses Fluentd's built-in Apache log parser.

  3. path /var/log/apache2/access_log: The location of the Apache log. This may be different for your particular system.

  4. pos_file /var/log/td-agent/apache2.access_log.pos: Fluentd records the last read position in this file, so it can resume where it left off after a restart.

  5. tag mongo.apache.access: mongo.apache.access is used as the tag to route the messages within Fluentd.

That's it! You should now be able to output a JSON-formatted data stream for Fluentd to process.
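As an illustration, a combined-format access-log line like the one below is parsed into a record along these lines (the exact field set depends on the parser version; the values here are illustrative):

```
# Input line in Apache's combined format:
127.0.0.1 - - [27/Nov/2011:07:56:27 +0000] "GET / HTTP/1.1" 200 44 "-" "ApacheBench/2.3"

# Resulting Fluentd event (tag: mongo.apache.access):
{"host":"127.0.0.1","user":"-","method":"GET","path":"/","code":"200","size":"44","referer":"-","agent":"ApacheBench/2.3"}
```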

MongoDB Output

The output destination will be MongoDB. The output configuration should look like this:

<match mongo.**>
  # plugin type
  @type mongo

  # mongodb db + collection
  database apache
  collection access

  # mongodb host + port
  host localhost
  port 27017

  # interval
  flush_interval 10s

  # make sure to include the time key
  include_time_key true
</match>

The match section specifies the pattern used to look for matching tags. If a log's tag matches the pattern, the configuration inside <match>...</match> is applied (i.e. the log is routed accordingly). In this example, the mongo.apache.access tag (generated by tail) always matches.

The ** in mongo.** matches zero or more period-delimited tag parts (e.g. mongo, mongo.a, mongo.a.b).
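The matching rule can be modeled in a few lines of Python. This is a simplified sketch, not Fluentd's actual matcher, and it handles only the trailing-`**` case used in this article:

```python
import re

def match_tag(pattern, tag):
    """Return True if `tag` matches `pattern`.

    Handles only the common case used here: a pattern ending in ".**",
    which matches the base tag itself plus any number of additional
    period-delimited parts. Anything else requires an exact match.
    """
    if pattern.endswith(".**"):
        base = re.escape(pattern[:-3])
        return re.fullmatch(base + r"(\.[^.]+)*", tag) is not None
    return pattern == tag
```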

flush_interval specifies how often the data is written to MongoDB. The other options specify MongoDB's host, port, db, and collection.

For additional configuration parameters, please see the MongoDB Output plugin article. If you are using ReplicaSet, please see the MongoDB ReplicaSet Output plugin article.

Test

To test the configuration, just send some requests to the Apache server. This example uses the ab (Apache Bench) program.

$ ab -n 100 -c 10 http://localhost/

Then, access MongoDB and see the stored data.

$ mongo
> use apache
> db["access"].find();
{ "_id" : ObjectId("4ed1ed3a340765ce73000001"), "host" : "127.0.0.1", "user" : "-", "method" : "GET", "path" : "/", "code" : "200", "size" : "44", "time" : ISODate("2011-11-27T07:56:27Z") }
{ "_id" : ObjectId("4ed1ed3a340765ce73000002"), "host" : "127.0.0.1", "user" : "-", "method" : "GET", "path" : "/", "code" : "200", "size" : "44", "time" : ISODate("2011-11-27T07:56:34Z") }
{ "_id" : ObjectId("4ed1ed3a340765ce73000003"), "host" : "127.0.0.1", "user" : "-", "method" : "GET", "path" : "/", "code" : "200", "size" : "44", "time" : ISODate("2011-11-27T07:56:34Z") }

Conclusion

Fluentd + MongoDB makes real-time log collection simple, easy, and robust.

Learn More

  • Fluentd Get Started

  • MongoDB Output Plugin

  • MongoDB ReplicaSet Output Plugin

If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open source project under the Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.
