The following document focuses on how to deploy Fluentd in Kubernetes and how to route your logs to different destinations.
The following document assumes that you have a Kubernetes cluster running or at least a local (single) node that can be used for testing purposes.
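If you do not yet have a cluster, a local single-node cluster for testing can be created with a tool such as Minikube (this assumes Minikube is already installed; any equivalent local Kubernetes distribution works as well):

```shell
# Start a local single-node Kubernetes cluster
minikube start

# Confirm the node is ready
kubectl get nodes
```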
Before getting started, make sure you understand or have a basic idea about the following concepts from Kubernetes:
- Node: A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run pods and is managed by the master components...
- Pod: A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context...
- DaemonSet: A DaemonSet ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to them. As nodes are removed from the cluster, those pods are garbage collected. Deleting a DaemonSet will clean up the pods it created...
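To illustrate how these concepts fit together, here is a minimal sketch of a DaemonSet manifest (the names and image below are placeholders for illustration only, not the Fluentd manifest used later in this guide):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset        # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-daemonset
  template:
    metadata:
      labels:
        app: example-daemonset
    spec:
      containers:
        - name: example
          image: busybox         # placeholder image
          command: ["sh", "-c", "sleep 3600"]
```

Kubernetes will schedule one copy of this pod on every eligible node, which is exactly the behavior needed for a node-level log collector like Fluentd.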
Fluentd is flexible enough, and has the proper plugins, to distribute logs to different third-party applications such as databases or cloud services, so the principal question is: where will the logs be stored? Once we answer this question, we can move forward to configuring our DaemonSet.
The steps below focus on sending the logs to an Elasticsearch Pod.
We have created a Fluentd DaemonSet with the proper rules and container image, ready to get started:
Please grab a copy of the repository from the command line using Git:
$ git clone https://github.com/fluent/fluentd-kubernetes-daemonset
The cloned repository contains several configurations that allow you to deploy Fluentd as a DaemonSet. The Docker container image distributed in the repository also comes pre-configured, so Fluentd can gather all logs from the Kubernetes node environment and append the proper metadata to them.
This repository has several presets for Alpine- and Debian-based images with popular outputs.
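To see which presets are available, you can list the manifest files in the cloned directory (the filename pattern below assumes the repository's current layout):

```shell
# List the available DaemonSet manifests for the different outputs
ls fluentd-kubernetes-daemonset/fluentd-daemonset-*.yaml
```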
From the fluentd-kubernetes-daemonset/ directory, find the YAML configuration file:
As an example, let's look at part of the file content (the values shown are the defaults shipped in the repository's Elasticsearch preset):

```yaml
containers:
  - name: fluentd
    env:
      - name: FLUENT_ELASTICSEARCH_HOST
        value: "elasticsearch-logging"
      - name: FLUENT_ELASTICSEARCH_PORT
        value: "9200"
```
The YAML file defines two relevant environment variables, FLUENT_ELASTICSEARCH_HOST and FLUENT_ELASTICSEARCH_PORT, which Fluentd uses when the container starts to locate the Elasticsearch service:
Any relevant change must be made to the YAML file before deployment. Using the default values assumes that at least one Elasticsearch Pod named elasticsearch-logging exists in the cluster.
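Once the manifest has been adjusted, the DaemonSet can be deployed and verified with kubectl (the filename below assumes the Elasticsearch preset from the repository, and the k8s-app=fluentd-logging label is the one used by its manifests; adjust both if you changed them):

```shell
# Deploy the Fluentd DaemonSet to the cluster
kubectl apply -f fluentd-daemonset-elasticsearch.yaml

# Verify that one Fluentd pod is running per node
kubectl get pods -n kube-system -l k8s-app=fluentd-logging -o wide
```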