Install Logstash 7 on Fedora 30/Fedora 29/CentOS 7

This guide will focus on how to install Logstash 7 on Fedora 30/Fedora 29/CentOS 7 as a continuation of our guide on how to set up Elastic Stack 7 on Fedora 30/Fedora 29/CentOS 7.

The installation of the first two components of Elastic Stack, Elasticsearch and Kibana, has been discussed in our previous guides;

Install Elasticsearch 7 on Fedora 30

Install Elasticsearch 7.x on CentOS 7/Fedora 29

Install Kibana 7 on Fedora 30/Fedora 29/CentOS 7

Once you have Elasticsearch and Kibana installed, proceed to install Logstash.

Installing Logstash 7 on Fedora 30/Fedora 29/CentOS 7

Prerequisites

As a prerequisite, Logstash requires Java 8 or Java 11. You can install Java 8 on Fedora 30/Fedora 29/CentOS 7 by running the command below;

yum install java-1.8.0-openjdk.x86_64

Once the installation is done, you can verify the installed version as shown below;

java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)

If you need to use Java 11, install it as shown below;

dnf install java-11-openjdk.x86_64
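
If you end up with both Java versions installed, you can optionally select the system default using the alternatives utility (this assumes both OpenJDK packages are present);

alternatives --config java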

Installing Logstash 7 on Fedora 30/Fedora 29/CentOS 7

As stated before, this is a continuation of our guide on how to set up Elastic Stack on Fedora 30/Fedora 29/CentOS 7. Therefore, we have already created the Elastic Stack repos on our servers. You can, however, create the Elastic 7.x repos by executing the commands below;

  • Import Elastic Repo GPG signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  • Create Elastic 7.x Repo
cat > /etc/yum.repos.d/elastic-7.x.repo << EOF
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
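
You can optionally confirm that the repo was created and is enabled;

yum repolist | grep elasticsearch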

You can now install Logstash using the YUM/DNF package manager.

yum install logstash
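
Once the installation completes, you can confirm the installed version of Logstash;

rpm -q logstash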

Testing Logstash

To test your Logstash installation, run the most basic Logstash pipeline.

cd /usr/share/logstash/bin/
./logstash -e 'input { stdin { } } output { stdout {} }'

Once you see a log message indicating that the pipeline is running, as shown below, type any string and press ENTER.

...
[INFO ] 2019-06-29 15:12:32.023 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-06-29 15:12:32.821 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Hello world
...

Logstash adds a timestamp and host information to the message.

...
{
          "host" => "elastic.example.com",
      "@version" => "1",
       "message" => "Hello world",
    "@timestamp" => 2019-06-29T12:13:06.994Z
}
...

Stop Logstash by pressing Ctrl+D.

Configuring Logstash 7 on Fedora 30/Fedora 29/CentOS 7

Once the installation is done, proceed to configure Logstash. The Logstash data processing pipeline has three sections;

  • INPUT: ingests data from different sources into Logstash.
  • FILTERS: processes and transforms the received data.
  • OUTPUT: stashes the processed data into a specified destination, such as Elasticsearch.

You can read more about Logstash Pipeline here.
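
For reference, a bare pipeline configuration skeleton therefore takes the form below; the comments are placeholders for whichever plugins you choose.

input {
  # where events come from, e.g. beats, file, syslog
}
filter {
  # optional processing, e.g. grok, mutate, date
}
output {
  # where events go, e.g. elasticsearch, stdout
}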

While configuring Logstash, you can have separate configuration files each for INPUT, FILTERS and OUTPUT. You can as well have a single configuration file for all the sections. This guide uses separate configuration files.

Configure Logstash Input plugin

Create a Logstash input configuration file. In this guide, Beats are used as the data shippers. Hence, to configure Logstash to receive data from Beats on TCP port 5044, create an input configuration file, say /etc/logstash/conf.d/beats-input.conf, with the content below;

vim /etc/logstash/conf.d/beats-input.conf
input {
  beats {
    port => 5044
  }
}

Configure Logstash Filters

Configure a filter plugin to process events received from Beats. This guide uses the grok filter plugin. You can read about other plugins here.

For demonstration purposes, we are going to configure Beats to collect SSH authentication events from Ubuntu/CentOS systems. Hence, we are going to create a filter to process such events as shown below.

vim /etc/logstash/conf.d/ssh-auth-filter.conf

The grok pattern used in this example matches the SSH authentication log lines below;

Jun 29 13:19:13 fedora29 sshd[2764]: Failed password for root from 192.168.43.17 port 40284 ssh2
Jun 29 13:13:31 fedora29 sshd[2598]: Accepted password for root from 192.168.43.17 port 40182 ssh2

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
}

The conditionals, if [fileset][module] == "system" and if [fileset][name] == "auth", would ordinarily be used to tell Logstash to apply the grok filter only to events sent by the Filebeat system module. However, when I used them, my grok pattern failed to extract the data fields. In case you have an idea around this, drop it in the comments.
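
For reference, below is a minimal sketch of how the filter above would look wrapped in those conditionals; it is untested here, given the issue just described.

filter {
  # apply the grok filter only to Filebeat system module auth events
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
        add_field => { "activity" => "SSH Logins" }
        add_tag => [ "linux_auth" ]
      }
    }
  }
}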

Kibana 7 comes bundled with a Grok Debugger similar to the herokuapp Grok Debugger. You can access the Kibana Grok Debugger under Dev Tools > Grok Debugger. You can utilize it to build and verify your grok patterns. You can as well check common Logstash grok patterns here.

Configure Logstash Output

There are different output plugins that enable Logstash to send event data to particular destinations. This guide uses the elasticsearch output plugin, which sends event data to Elasticsearch.

Create a Logstash output configuration file with the content below. This configuration sends data to Elasticsearch running on the same host.

The index option defines the index to write events to; logstash-%{+YYYY.MM.dd} is the default index.

If Elasticsearch is listening on a non-loopback interface, replace localhost in hosts => ["localhost:9200"] with an interface IP, for example, hosts => ["192.168.0.101:9200"].

vim /etc/logstash/conf.d/elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["192.168.0.101:9200"]
    manage_template => false
    index => "ssh_auth-%{+YYYY.MM}"
  }
}
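
Once events start flowing in (Filebeat is set up in the next guide), you can confirm that the index is being created on Elasticsearch. Adjust the host below to match your hosts setting;

curl -X GET "192.168.0.101:9200/_cat/indices/ssh_auth-*?v"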

All in one Logstash configuration file

So far, we have used a separate configuration file for each Logstash section.

If you need to put them in one file, then create a configuration file as shown below;

vim /etc/logstash/conf.d/ssh-authentication.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "ssh_auth-%{+YYYY.MM}"
  }
}

If you need to send the event data to standard output as well, for the purposes of debugging plugin configurations, add the line stdout { codec => rubydebug } to the output configuration section.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ssh_auth-%{+YYYY.MM}"
  }
  stdout { codec => rubydebug }
}

You can also check sample Logstash pipelines here.

Learn how to debug Logstash Grok Filters by following the link below;

How to Debug Logstash Grok Filters

Test Logstash Configuration

Once you are done with the configuration, run the command below to verify it before starting Logstash.

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
...

Well, if you get Configuration OK then you are good to go.

To run Logstash and load a specific configuration file for debugging, you can execute the command below;

sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/configuration-file.conf --path.settings /etc/logstash/

Running Logstash

You can now start and enable Logstash to run on system boot.

systemctl start logstash
systemctl enable logstash
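
You can verify that Logstash is running and listening on the Beats input port defined earlier; it may take a few seconds for the port to be bound.

systemctl status logstash
ss -ltnp | grep 5044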

On CentOS 7, if you try to start Logstash and get the error, Unit logstash.service could not be found, run the command below to generate the systemd unit file.

/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Check the /var/log/logstash/logstash-plain.log log file for any Logstash configuration errors.

If Firewalld is running and you want to receive event data from remote systems, ensure that TCP port 5044 is opened;

firewall-cmd --add-port=5044/tcp --permanent
firewall-cmd --reload
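
Confirm that the port is now open;

firewall-cmd --list-ports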

Once you are done with the configuration, proceed to install and configure Filebeat data shippers. See our next guide on how to install Filebeat on Fedora 30/Fedora 29/CentOS 7.

Install Filebeat on Fedora 30/Fedora 29/CentOS 7

That is all on how to install and configure Logstash 7 on Fedora 30/Fedora 29/CentOS 7.

Reference:

Getting Started with Logstash

Related Tutorials;

Install and Configure Logstash 7 on Ubuntu 18/Debian 9.8

Install and Configure Filebeat 7 on Ubuntu 18.04/Debian 9.8

Install Elastic Stack 7 on Ubuntu 18.04/Debian 9.8

Install Elasticsearch 7.x on Ubuntu 18.04/Debian 9.8
