How to run Multiple Filebeat Instances in Linux

In this tutorial, you will learn how to run multiple Filebeat instances on a Linux system. Filebeat is one of the Elastic Beats, a lightweight shipper for collecting, forwarding and centralizing event log data. It is installed as an agent on the servers you are collecting logs from, and it can forward the logs it collects either directly to Elasticsearch for indexing or to Logstash for further processing before the data is sent to Elasticsearch.

Running Multiple Filebeat Instances in Linux

Install Filebeat on Linux Systems

In our previous tutorials, we have discussed how to install and run a single Filebeat instance:

Install and Configure Filebeat on Ubuntu 20.04

Install and Configure Filebeat on CentOS 8

Install Filebeat on Fedora 30/Fedora 29/CentOS 7

Install and Configure Filebeat 7 on Ubuntu 18.04/Debian 9.8

In all the above tutorials, we had a single Filebeat instance running on a system, sending log data either directly to Elasticsearch or to Logstash for further processing before it is sent to Elasticsearch.

There comes a time when you may want to send system logs to multiple outputs; for example, sending some logs directly to Elasticsearch while at the same time sending other logs to Logstash for processing. A single Filebeat instance, however, supports only one output at a time.

Thus, in this tutorial, let us see how to install and run multiple Filebeat instances on a Linux system in order to be able to send data to multiple outputs.

Run Multiple Filebeat Instances in Linux

There are multiple ways in which you can install and run multiple Filebeat instances in Linux. Some of these include;

Run Multiple Filebeat Instances in Linux using Filebeat-god

Go daemon (or just god) is a utility used to “daemonize” Go programs that would otherwise run only in the foreground and write their logs to the console. Filebeat-god (Filebeat Go daemon) is therefore a utility used to daemonize Filebeat processes that would otherwise run in the foreground.

Just to note that this guide has been tested on Ubuntu 20.04 and CentOS 8. It should, however, work on other distributions as well.

Installing Filebeat-god

Whenever Filebeat is installed, Filebeat-god is installed along with it and usually resides in the same place as the Filebeat binary, under the /usr/share/filebeat/bin directory.

To locate the Filebeat-god binary, simply use the find command as follows;

find / -iname filebeat-god
/usr/share/filebeat/bin/filebeat-god
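
Alternatively, assuming Filebeat was installed from the Elastic DEB or RPM packages, you can list the package contents to locate the binary;

dpkg -L filebeat | grep filebeat-god	# Debian/Ubuntu
rpm -ql filebeat | grep filebeat-god	# CentOS/RHEL/Fedora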

Now, by default, Filebeat uses the following paths;

Type     Description                                       Location
home     Home of the Filebeat installation.                /usr/share/filebeat
bin      The location for the binary files.                /usr/share/filebeat/bin
config   The location for configuration files.             /etc/filebeat
data     The location for persistent data files.           /var/lib/filebeat
logs     The location for the logs created by Filebeat.    /var/log/filebeat

Assuming you would like to forward syslog and system authentication events directly to Elasticsearch using the system module, while sending other custom logs to Logstash for further processing before they reach Elasticsearch, you have to run the Filebeat processes using Filebeat-god, /usr/share/filebeat/bin/filebeat-god, as follows. Note that each instance needs its own configuration, data and logs directories; in particular, two instances cannot share the same data path, since that is where Filebeat keeps its registry.

Create a copy of the Filebeat configuration directory.

cp -r /etc/filebeat{,-elasticsearch}

This command creates an /etc/filebeat-elasticsearch directory.

So we now have these Filebeat configuration directories;

ls /etc/filebeat*
/etc/filebeat:
fields.yml  filebeat.reference.yml  filebeat.yml  modules.d

/etc/filebeat-elasticsearch:
fields.yml  filebeat.reference.yml  filebeat.yml  modules.d

Let us collect syslog and authentication logs using the /etc/filebeat-elasticsearch configs and forward them to Elasticsearch. Enable the system module;

cp /etc/filebeat-elasticsearch/modules.d/system.yml{.disabled,}
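
You can optionally confirm that the module is now enabled for this instance by pointing the modules command at the custom configuration directory (assuming the default modules.d layout copied above);

filebeat modules list --path.config /etc/filebeat-elasticsearch
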
Configure Elasticsearch Output
vim /etc/filebeat-elasticsearch/filebeat.yml
...
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.57.30:9200"]
...

Save and exit the config.
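
Optionally, before starting this instance, you can run Filebeat's built-in checks against the new configuration and output (a quick sanity check, assuming the same paths used in this guide);

filebeat test config \
	--path.config /etc/filebeat-elasticsearch \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-elasticsearch
filebeat test output \
	--path.config /etc/filebeat-elasticsearch \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-elasticsearch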

Verify that a connection to Elasticsearch can be established by running Filebeat in debug mode;

filebeat -e \
	--path.config /etc/filebeat-elasticsearch \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-elasticsearch

If all is well, you should see Elasticsearch connection being established.

...
2020-11-24T19:47:12.332Z	INFO	[index-management]	idxmgmt/std.go:298	Loaded index template.
2020-11-24T19:47:12.333Z	INFO	[index-management]	idxmgmt/std.go:309	Write alias successfully generated.
2020-11-24T19:47:12.333Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(elasticsearch(http://192.168.57.30:9200)) established

Next, stop Filebeat running in debug mode and start it as a daemon using Filebeat-god as follows;

/usr/share/filebeat/bin/filebeat-god -n \
	-p /var/run/filebeat-elasticsearch.pid \
	-r / \
	-- /usr/share/filebeat/bin/filebeat \
	--path.config /etc/filebeat-elasticsearch \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-elasticsearch \
	--path.logs /var/log/filebeat-elasticsearch

Verify that Filebeat is now running as a daemon;

ps aux | grep filebeat | grep -v grep
root      121377  0.0  0.0   9300   636 pts/1    S    20:14   0:00 /usr/share/filebeat/bin/filebeat-god -n -p /var/run/filebeat-elasticsearch.pid -r / -- /usr/share/filebeat/bin/filebeat --path.config /etc/filebeat-elasticsearch --path.home /usr/share/filebeat --path.data /var/lib/filebeat-elasticsearch --path.logs /var/log/filebeat-elasticsearch
root      121378  4.8  2.0 1609528 127224 pts/1  SLl  20:14   0:02 /usr/share/filebeat/bin/filebeat --path.config /etc/filebeat-elasticsearch --path.home /usr/share/filebeat --path.data /var/lib/filebeat-elasticsearch --path.logs /var/log/filebeat-elasticsearch

Tailing the logs;

tail -f /var/log/filebeat-elasticsearch/filebeat
...
2020-11-25T17:22:30.532Z	INFO	template/load.go:97	Template filebeat-7.10.0 already exists and will not be overwritten.
2020-11-25T17:22:30.532Z	INFO	[index-management]	idxmgmt/std.go:298	Loaded index template.
2020-11-25T17:22:30.533Z	INFO	[index-management]	idxmgmt/std.go:309	Write alias successfully generated.
2020-11-25T17:22:30.536Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(elasticsearch(http://192.168.57.30:9200)) established
Configure Logstash Output

Now, if you want to collect some other logs on the same system and send them directly to Logstash for further processing instead of sending them to Elasticsearch, simply create a separate Filebeat configuration as follows;

cp -r /etc/filebeat{,-logstash}
vim /etc/filebeat-logstash/filebeat.yml

Specify the log files to read and send the data to Logstash. In this example, we are collecting WordPress logs and forwarding them to Logstash for processing.

# ============================== Filebeat inputs ===============================

filebeat.inputs:
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/wordpress/kifarunix-demo.com.log
...
...
# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.57.30:9200"]
...
...
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.57.30:5044"]
...

Save and exit the configuration file.
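
As before, you can optionally verify the configuration and the connection to Logstash before starting this instance (assuming the paths used above);

filebeat test output \
	--path.config /etc/filebeat-logstash \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-logstash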

Run Filebeat in debug mode using the custom Filebeat configuration for Logstash created above.

filebeat -e \
	--path.config /etc/filebeat-logstash/ \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-logstash
...
2020-11-25T18:06:30.298Z	INFO	log/harvester.go:302	Harvester started for file: /var/log/wordpress/kifarunix-demo.com.log
2020-11-25T18:06:33.300Z	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:89	add_cloud_metadata: hosting provider type not detected.
2020-11-25T18:06:34.302Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://192.168.57.30:5044))
2020-11-25T18:06:34.303Z	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2020-11-25T18:06:34.304Z	INFO	[publisher]	pipeline/retry.go:223	  done
2020-11-25T18:06:34.305Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://192.168.57.30:5044)) established

If all is well, run the Filebeat instance that sends data to Logstash as a daemon;

/usr/share/filebeat/bin/filebeat-god -n \
	-p /var/run/filebeat-logstash.pid \
	-r / \
	-- /usr/share/filebeat/bin/filebeat \
	--path.config /etc/filebeat-logstash/ \
	--path.home /usr/share/filebeat \
	--path.data /var/lib/filebeat-logstash \
	--path.logs /var/log/filebeat-logstash

Similarly, you can check the process using the ps command.
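
To stop a daemonized instance later, one option is to signal the process recorded in the PID file passed to filebeat-god with -p (a minimal sketch; confirm with ps which process the PID file refers to on your system);

kill $(cat /var/run/filebeat-elasticsearch.pid)
kill $(cat /var/run/filebeat-logstash.pid)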

Run Multiple Filebeat Instances in Linux using Systemd

For Linux systems that use Systemd, there is no need to even bother running multiple Filebeat instances using Filebeat-god.

Running multiple Filebeat instances in Linux using Systemd is as easy as follows.

Assuming you want to collect various logs and send them to different outputs, as in the example above, all you need to do is create a separate Systemd service unit for each Filebeat instance.

Here is an example of a service unit for the Filebeat instance that sends logs to Elasticsearch, based on the example configuration above;

Copy the default Filebeat Systemd service unit, renaming it accordingly.

cp /lib/systemd/system/filebeat.service /etc/systemd/system/filebeat-elasticsearch.service

Edit the custom service unit and set the appropriate directories;

vim /etc/systemd/system/filebeat-elasticsearch.service
[Unit]
Description=Filebeat sends log files directly to Elasticsearch.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]

Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat-elasticsearch/filebeat.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/filebeat --path.config /etc/filebeat-elasticsearch --path.data /var/lib/filebeat-elasticsearch --path.logs /var/log/filebeat-elasticsearch"
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target

Similarly, for sending to Logstash;

cp /lib/systemd/system/filebeat.service /etc/systemd/system/filebeat-logstash.service
vim /etc/systemd/system/filebeat-logstash.service
[Unit]
Description=Filebeat sends log files to Logstash.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]

Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat-logstash/filebeat.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/filebeat --path.config /etc/filebeat-logstash --path.data /var/lib/filebeat-logstash --path.logs /var/log/filebeat-logstash"
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target
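
Alternatively, instead of copying the unit file for every new instance, you could use a single Systemd template unit and pass the instance name as %i. This is a minimal sketch, not part of the original setup, and it assumes each instance keeps its files under /etc/filebeat-<name>, /var/lib/filebeat-<name> and /var/log/filebeat-<name>;

vim /etc/systemd/system/filebeat@.service
[Unit]
Description=Filebeat instance %i
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]

Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat-%i/filebeat.yml"
Environment="BEAT_PATH_OPTS=--path.home /usr/share/filebeat --path.config /etc/filebeat-%i --path.data /var/lib/filebeat-%i --path.logs /var/log/filebeat-%i"
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target

With such a template, the instances would be started as, for example, systemctl start filebeat@elasticsearch filebeat@logstash.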

Reload Systemd configurations;

systemctl daemon-reload

Start the two services;

systemctl start filebeat-logstash filebeat-elasticsearch

Check the status;

systemctl status filebeat-logstash filebeat-elasticsearch
● filebeat-logstash.service - Filebeat sends log files to Logstash.
     Loaded: loaded (/etc/systemd/system/filebeat-logstash.service; disabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-11-25 19:29:20 UTC; 1min 21s ago
       Docs: https://www.elastic.co/products/beats/filebeat
   Main PID: 129521 (filebeat)
      Tasks: 9 (limit: 7031)
     Memory: 28.4M
     CGroup: /system.slice/filebeat-logstash.service
             └─129521 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat-logstash/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/file>
...
● filebeat-elasticsearch.service - Filebeat sends log files directly to Elasticsearch.
     Loaded: loaded (/etc/systemd/system/filebeat-elasticsearch.service; disabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-11-25 19:29:20 UTC; 1min 21s ago
       Docs: https://www.elastic.co/products/beats/filebeat
   Main PID: 129540 (filebeat)
      Tasks: 9 (limit: 7031)
     Memory: 23.9M
     CGroup: /system.slice/filebeat-elasticsearch.service
             └─129540 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat-elasticsearch/filebeat.yml --path.home /usr/share/filebeat --path.config /et...
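
The status output above also shows both units as disabled, meaning they will not start automatically after a reboot. If that is what you want, enable them, and optionally follow an instance's logs via the journal;

systemctl enable filebeat-logstash filebeat-elasticsearch
journalctl -u filebeat-elasticsearch -f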

And there you go. That is all it takes to run multiple Filebeat instances in Linux.

