Send Syslog Data to InfluxDB
This article shows how to collect syslog data into InfluxDB using Fluentd.
Syslog + Fluentd + InfluxDB

Prerequisites

    A basic understanding of Fluentd
    A running instance of rsyslogd
In this guide, we assume we are running td-agent (Fluentd package for Linux and macOS) on Ubuntu Xenial.

Step 1: Install InfluxDB

InfluxDB supports Ubuntu, Red Hat, and macOS (via brew).
For more details, see the InfluxDB installation documentation.
Since we assume Ubuntu here, the following two lines install InfluxDB:
$ wget https://dl.influxdata.com/influxdb/releases/influxdb_1.7.3_amd64.deb
$ sudo dpkg -i influxdb_1.7.3_amd64.deb
Once it is installed, you can run it with:
$ sudo systemctl start influxdb
Then, you can verify that InfluxDB is running:
$ curl "http://localhost:8086/query?q=show+databases"
If InfluxDB is running normally, you will see an object that contains the _internal database:
{"results":[{"statement_id":0,"series":[{"name":"databases","columns":["name"],"values":[["_internal"]]}]}]}
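If you want to check for a database programmatically rather than by eye, the JSON response above can be parsed with a few lines of Python (a minimal sketch; the response shape is exactly as shown above, and `database_names` is a helper name chosen here for illustration):

```python
import json

# Sample response from GET /query?q=show+databases (copied from above)
response = ('{"results":[{"statement_id":0,"series":[{"name":"databases",'
            '"columns":["name"],"values":[["_internal"]]}]}]}')

def database_names(body):
    """Extract database names from a SHOW DATABASES response body."""
    data = json.loads(body)
    series = data["results"][0].get("series", [])
    # Each row in "values" is a one-element list holding a database name.
    return [row[0] for row in series[0]["values"]] if series else []

print(database_names(response))  # ['_internal']
```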
Also, the following two lines install Chronograf:
$ wget https://dl.influxdata.com/chronograf/releases/chronograf_1.7.7_amd64.deb
$ sudo dpkg -i chronograf_1.7.7_amd64.deb
Once it is installed, you can run it with:
$ sudo systemctl start chronograf
Then, go to localhost:8888 (or wherever you are hosting Chronograf) to access Chronograf's web console, the successor to InfluxDB's web console.
Create a database called test. This is where we will be storing syslog data:
$ curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE test"
To confirm that the new database accepts writes, you can post a sample data point:
$ curl -i -X POST 'http://localhost:8086/write?db=test' --data-binary 'task,host=server01,region=us-west value=1 1434055562000000000'
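The body of that write is in InfluxDB's line protocol: a measurement name, optional comma-separated tags, one or more fields, and an optional timestamp in nanoseconds. As a rough sketch of how such a line is assembled (this helper is illustrative only; it skips the escaping rules the real protocol requires for spaces and commas in values):

```python
def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Build an InfluxDB line-protocol string:
    measurement,tag1=v1,... field1=v1,... [timestamp_ns]"""
    tag_part = "".join(",%s=%s" % (k, v) for k, v in sorted(tags.items()))
    field_part = ",".join("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    line = "%s%s %s" % (measurement, tag_part, field_part)
    if ts_ns is not None:
        line += " %d" % ts_ns
    return line

print(to_line_protocol("task",
                       {"host": "server01", "region": "us-west"},
                       {"value": 1},
                       1434055562000000000))
# task,host=server01,region=us-west value=1 1434055562000000000
```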
We are done for now.

Step 2: Install Fluentd and the InfluxDB plugin

On your aggregator server, set up Fluentd.
For more details, see the Fluentd installation documentation.
$ curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-xenial-td-agent3.sh | sh
Next, install the InfluxDB output plugin:
/usr/sbin/td-agent-gem install fluent-plugin-influxdb
For vanilla Fluentd, run:
fluent-gem install fluent-plugin-influxdb
You might need sudo to install the plugin.
Finally, configure /etc/td-agent/td-agent.conf as follows:
<source>
  @type syslog
  port 42185
  tag system
</source>

<match system.*.*>
  @type influxdb
  dbname test
  flush_interval 10s # for testing
  host YOUR_INFLUXDB_HOST # default: localhost
  port YOUR_INFLUXDB_PORT # default: 8086
</match>
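The syslog input tags each record as system.FACILITY.PRIORITY (for example, system.daemon.info), and the match pattern system.*.* routes those tags to the InfluxDB output: each * matches exactly one dot-separated tag part. A rough sketch of that per-part matching (this simplified matcher is for illustration only, not Fluentd's actual implementation, and skips features like ** and {a,b}):

```python
from fnmatch import fnmatchcase

def tag_matches(pattern, tag):
    """Match a Fluentd-style tag pattern where '*' matches exactly one
    dot-separated tag part (simplified sketch)."""
    p_parts, t_parts = pattern.split("."), tag.split(".")
    if len(p_parts) != len(t_parts):
        return False
    # Each pattern part must match the corresponding tag part.
    return all(fnmatchcase(t, p) for p, t in zip(p_parts, t_parts))

print(tag_matches("system.*.*", "system.daemon.info"))  # True
print(tag_matches("system.*.*", "apache.access"))       # False
```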
Restart td-agent with sudo service td-agent restart.

Step 3: Configure rsyslogd

If remote rsyslogd instances are already forwarding data to the aggregator rsyslogd, the rsyslog settings can remain unchanged. However, if this is a brand new setup, start forwarding syslog output by adding the following line to /etc/rsyslog.conf:
*.* @182.39.20.2:42185
You should replace 182.39.20.2 with the IP address of your aggregator server. Also, there is nothing special about port 42185 (do make sure this port is open though).
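In rsyslog's forwarding syntax, a single @ sends over UDP and @@ sends over TCP. If you prefer TCP, the Fluentd syslog source must listen on TCP as well; a sketch of both sides, assuming td-agent 3 (Fluentd v1), where the in_syslog plugin accepts a protocol_type parameter:

```
# /etc/rsyslog.conf — forward over TCP instead of UDP
*.* @@182.39.20.2:42185
```

```
# /etc/td-agent/td-agent.conf — accept syslog over TCP
<source>
  @type syslog
  port 42185
  protocol_type tcp
  tag system
</source>
```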
Now, restart rsyslogd:
$ sudo systemctl restart rsyslog

Step 4: Confirm Data Flow

Your syslog data should be flowing into InfluxDB every 10 seconds (this is configured by flush_interval).
In Chronograf, clicking Explore brings up the query interface that lets you write InfluxQL (SQL-like) queries against your log data.
Then, click Visualization and select the line chart:
Chronograf: Explore Data
Now, to count the number of lines of syslog messages per facility/priority:
SELECT COUNT(ident) FROM test.autogen./^system\./ GROUP BY time(1s)
Click on Submit Query to get a graph like this:
Chronograf: Query
Here is another screenshot for the system.daemon.info series:
Chronograf: Query
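The query above uses a regular expression to match every measurement starting with system. in the test database's autogen retention policy. You can also target a single facility/priority measurement directly; for example, a variation on the same query (assuming the system.daemon.info series exists, and filling empty intervals with zero):

```sql
SELECT COUNT(ident) FROM "test"."autogen"."system.daemon.info" GROUP BY time(1m) fill(0)
```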
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is an open-source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License.