Kubernetes Logging with Filebeat and Elasticsearch Part 2

January 9, 2020

Table of Contents

1. Introduction 

2. Deployment Architecture

          2.1 Creating Filebeat ServiceAccount and ClusterRole

          2.2 Creating Filebeat ConfigMap

          2.3 Deploying Filebeat DaemonSet

3. Conclusion 


1. Introduction

In this tutorial we will learn how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster and ship logs to the Elasticsearch backend. We are using Filebeat instead of Fluentd or Fluent Bit because it is an extremely lightweight utility with first-class Kubernetes support, which makes it a good fit for production setups. This blog post is the second in a two-part series. The first post covers the deployment architecture for the nodes and deploying Kibana and ES-HQ.

 

2. Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster:

  • It will be deployed in a separate namespace called Logging.
  • Pods will be scheduled on both master nodes and worker nodes.
  • Master node pods will forward api-server logs for audit and cluster administration purposes.
  • Worker node pods will forward workload-related logs for application observability.
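
If the logging namespace does not already exist (it may have been created in part 1), a minimal manifest for it could look like the sketch below. The lowercase name logging is an assumption; use whatever name your cluster convention dictates:

```yaml
# Namespace that isolates the logging components (Filebeat here,
# plus Kibana/Elasticsearch from part 1) from application workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
```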


2.1 Creating Filebeat ServiceAccount and ClusterRole

Deploy the following manifest to create the required permissions for Filebeat pods.

<p>CODE:https://gist.github.com/denshirenji/c93c9a85d96ca0d6bb7f13e4bb295e10.js</p>


We should keep the ClusterRole permissions as limited as possible from a security point of view. If any of the pods associated with this service account is compromised, the attacker would not be able to gain access to the entire cluster or the applications running in it.
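
As a rough sketch of what "as limited as possible" can look like (not the exact contents of the gist above), a ClusterRole for Filebeat typically only needs read-only access to pods, namespaces and nodes for Kubernetes metadata enrichment and autodiscover:

```yaml
# Read-only permissions: enough for Filebeat's Kubernetes metadata
# enrichment and autodiscover, and nothing more.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""]  # "" indicates the core API group
    resources: ["pods", "namespaces", "nodes"]
    verbs: ["get", "watch", "list"]
```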

2.2 Creating Filebeat ConfigMap

Use the following manifest to create a ConfigMap which will be used by Filebeat pods. 

<p>CODE:https://gist.github.com/denshirenji/f2593e3c3c1f19a2b800ab10bfe18a94.js</p>


Important concepts for the Filebeat ConfigMap:

  • hints.enabled: This activates Filebeat’s hints-based autodiscover for Kubernetes. With this enabled, we can use pod annotations to pass configuration directly to Filebeat, such as multiline patterns and various other settings. More about this can be read here.
  • include_annotations: Setting this to true makes Filebeat retain the pod annotations on each log entry. These annotations can later be used to filter logs in the Kibana console.
  • include_labels: Setting this to true makes Filebeat retain the pod labels on each log entry. These labels can later be used to filter logs in the Kibana console.
  • We can also filter logs for a particular namespace and process the log entries accordingly. Here, the docker log processor is used. We can also apply different multiline patterns for different namespaces.
  • The output is set to Elasticsearch because we are using Elasticsearch as the storage backend. Alternatively, it can point to Redis, Logstash, Kafka or even a file. More about this can be read here.
  • The cloud metadata processor adds some host-specific fields to each log entry. This is helpful when we want to filter logs for a particular worker node.
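
To make the hints mechanism concrete: with hints.enabled on, an application pod can steer Filebeat through co.elastic.logs/* annotations. A sketch (the pod name, image, and the specific multiline pattern are illustrative, not from the gist above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-java-app                  # illustrative name
  annotations:
    # Treat lines starting with whitespace (e.g. Java stack trace
    # frames) as continuations of the preceding log line.
    co.elastic.logs/multiline.pattern: '^[[:space:]]'
    co.elastic.logs/multiline.negate: "false"
    co.elastic.logs/multiline.match: "after"
spec:
  containers:
    - name: app
      image: my-java-app:latest      # illustrative image
```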


2.3 Deploying Filebeat DaemonSet

Use the manifest below to deploy the Filebeat DaemonSet. 


<p>CODE:https://gist.github.com/denshirenji/48485ee81543678c0df8335d2e2ed732.js</p>


Let’s see what is going on here:

  • Logs for each pod are written to /var/lib/docker/containers on the host. We mount this directory from the host into the Filebeat pod, and Filebeat then processes the logs according to the provided configuration.
  • We have set the env var ELASTICSEARCH_HOST to elasticsearch.elasticsearch to refer to the Elasticsearch client service created in part 1 of this series. If you already have an Elasticsearch cluster running, set the env var to point to it instead.
  • Please note the following setting in the manifest:

<p>CODE:https://gist.github.com/denshirenji/557e6c820c85a32028fe953ce94353bb.js</p>

This makes sure that our Filebeat DaemonSet schedules a pod on the master node as well. Once the Filebeat DaemonSet is deployed, we can check that our pods got scheduled properly.

<p>CODE:https://gist.github.com/denshirenji/74a6a22a0607054ba7c5697b9e7adb33.js</p>

If we tail the logs of one of the pods, we can clearly see that it connected to Elasticsearch and has started the harvester for the log files. The snippet below shows this:

<p>CODE:https://gist.github.com/denshirenji/c5fb964f3f72d3488d62472ade04c8e9.js</p>

Once all our pods are running, we can create an index pattern of the type filebeat-* in Kibana. Filebeat indexes are timestamped by default. As soon as we create the index pattern, all the available searchable fields can be seen and imported. Lastly, we can search through our application logs and create dashboards if needed. It is highly recommended to use a JSON logger in your applications, because it makes log processing and message parsing much easier.


3. Conclusion

This concludes our logging setup. All of the provided configuration files have been tested in production environments and are readily deployable. Feel free to reach out should you have any questions around it. While Elasticsearch dominates the logs monitoring space, MetricFire is best for monitoring time-series data. Try out the MetricFire product with our free trial and start monitoring your time-series data, or book a demo and talk to us directly about the monitoring solution that works for you.

This article was written by our guest blogger Vaibhav Thakur. If you liked this article, check out his LinkedIn for more.

