Install and Configure Logstash 7 on Ubuntu 18.04/Debian 9.8

This guide will focus on how to install and configure Logstash 7 on Ubuntu 18.04/Debian 9.8 as a continuation of our guide on how to setup Elastic Stack 7 on Ubuntu 18.04/Debian 9.8. We have already covered the installation of Elasticsearch and Kibana.

Install Elasticsearch 7.x on Ubuntu 18.04/Debian 9.8

Install Kibana 7 on Ubuntu 18.04/Debian 9.8

Going by the installation order, Logstash is the third component in line. Note that for the Elastic Stack to function well, the versions of all the components must match.

Prerequisites

As a prerequisite, Logstash requires Java 8 or Java 11. However, according to Elastic, there are known issues with the Debian and RPM install packages when running Java 11, which are being tracked in Issue #10593. Therefore, we will be using Java 8 in this guide.

java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.18.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

If Java is not installed yet, you can check how to install Java 8 on Ubuntu 18.04/Debian 9.8 by following the link below;

Install Java 8 on Ubuntu 18.04
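
Alternatively, OpenJDK 8 is available on the default Ubuntu 18.04 and Debian 9 repositories and can usually be installed directly with APT, as shown below;

apt update
apt install openjdk-8-jdk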

Install Logstash

As stated before, this is a continuation of our guide on how to setup Elastic Stack on Ubuntu/Debian servers. Therefore, we have already created the Elastic Stack repos in our servers and thus we can just install Logstash using the APT package manager.
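
In case the Elastic Stack repos are not yet in place on your server, they can typically be created as shown below before installing Logstash;

apt install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list
apt update

With the repos in place, install Logstash;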

apt install logstash

Configuring Logstash

Once the installation is done, proceed to configure Logstash. The Logstash data processing pipeline has three sections;

  • INPUT: ingests data from different endpoints into Logstash.
  • FILTERS: processes and transforms the data received.
  • OUTPUT: stashes the processed data into a specified destination, which can be Elasticsearch.

You can read more about Logstash Pipeline here.
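
Putting the three sections together, a Logstash pipeline configuration file takes the general skeleton below. This is just an illustrative outline, not a working pipeline;

input {
  # plugins that ingest events into Logstash, for example beats
}
filter {
  # plugins that process and transform the events, for example grok
}
output {
  # plugins that ship the events to a destination, for example elasticsearch
}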

While configuring Logstash, you can have a separate configuration file for each of the INPUT, FILTERS and OUTPUT sections, or a single configuration file for all the sections. This guide uses separate configuration files.

Configure Logstash Input plugin

To begin with, create a configuration file to define how data will be ingested into Logstash. For example, to configure Logstash to receive data from Beats on TCP port 5044, create an input configuration file, say /etc/logstash/conf.d/beats-input.conf, with the content below;

vim /etc/logstash/conf.d/beats-input.conf
input {
  beats {
    port => 5044
  }
}
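
Note that if a firewall is running on the Logstash server, you need to open port 5044 to allow the Beats to connect. For example, on a server running UFW;

ufw allow 5044/tcp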

Configure Logstash Filters

Now that we have defined Beats as the Logstash input plugin, proceed to configure a filter plugin to process the events received from the Beats. This guide uses the grok filter plugin. You can read about other filter plugins here.

For demonstration purposes, we are going to configure Beats to collect SSH authentication events from Ubuntu/CentOS systems. Hence, we are going to create a filter to process such events as shown below.

vim /etc/logstash/conf.d/ssh-auth-filter.conf

The grok pattern used in this example matches the SSH authentication log lines below;

May  1 13:15:23 elk sshd[1387]: Failed password for testuser from 192.168.0.102 port 60004 ssh2
May 1 13:08:30 elk sshd[1338]: Accepted password for testuser from 192.168.0.102 port 59958 ssh2

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
}

Kibana 7 comes bundled with a Grok Debugger which is similar to the herokuapp grokdebugger. You can access the Kibana Grok Debugger under Dev Tools > Grok Debugger. You can utilize it to build and test your grok patterns. You can as well check common Logstash grok patterns here.

Also, for the purpose of keeping this demo simple, we will add a conditional to drop all the events that do not match the grok pattern for the SSH authentication events used above;

if "_grokparsefailure" in [tags] { drop {} }

Such that our filter looks like;

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
  if "_grokparsefailure" in [tags] { drop {} }
}
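
Optionally, you could also use the date filter plugin to parse the syslog timestamp extracted by the grok pattern above into the event @timestamp field, so that events are indexed with the time they were logged rather than the time Logstash received them. A minimal sketch, assuming the timestamp field captured above;

filter {
  date {
    # syslog timestamps come with either one or two spaces before the day
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM d HH:mm:ss" ]
  }
}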

Configure Logstash Output

There are different output plugins that enable Logstash to send event data to particular destinations. This guide uses the elasticsearch output plugin, which enables Logstash to send data to Elasticsearch.

Create the Logstash output configuration file with the content below. This configuration sends data to Elasticsearch running on localhost. The index option defines the index to write events to; logstash-%{+YYYY.MM.dd} is the default index.

If Elasticsearch is listening on a non-loopback interface, replace localhost in hosts => ["localhost:9200"] with the interface IP, for example, hosts => ["192.168.0.101:9200"].

vim /etc/logstash/conf.d/elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["192.168.0.101:9200"]
    manage_template => false
    index => "ssh_auth-%{+YYYY.MM}"
  }
}

So far we have used a separate configuration file for each Logstash section. If you need to put them all in one file, then create a single configuration file as shown below;

vim /etc/logstash/conf.d/ssh-authentication.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
  if "_grokparsefailure" in [tags] { drop {} }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ssh_auth-%{+YYYY.MM}"
  }
}

If you need to send the event data to standard output as well, for the purposes of debugging plugin configurations, then add the line stdout { codec => rubydebug } to the output configuration section.

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ssh_auth-%{+YYYY.MM}"
  }
  stdout { codec => rubydebug }
}

You can also check sample Logstash pipelines here.

If you need to debug Logstash Grok Filters to confirm that they can actually parse your logs into the required fields, see the link below on how to debug Logstash Grok filters.

How to Debug Logstash Grok Filters

Test Logstash Configuration

Once you are done with configurations, run the command below to verify the Logstash configuration before you can start it.

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
[2019-05-01T15:26:34,908][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Well, if you get Configuration OK then you are good to go.

To run Logstash and load a specific configuration file for debugging, you can execute the command below;

sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/config-file.conf
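
While debugging, you can as well pass the --config.reload.automatic option so that Logstash reloads the pipeline whenever the configuration file changes, saving you a restart after every edit;

sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ssh-authentication.conf --config.reload.automatic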

Running Logstash

You can now start and enable Logstash to run on system boot.

systemctl start logstash
systemctl enable logstash
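
You can then verify that Logstash is running and listening on the Beats input port, 5044/tcp;

systemctl status logstash
ss -altnp | grep 5044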

You can also check the Logstash log file, /var/log/logstash/logstash-plain.log, for any errors.
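
Once events start streaming in from the Beats, you can also confirm that the index defined in the output section is being written to. A quick check, assuming Elasticsearch is reachable on localhost;

curl -X GET "localhost:9200/_cat/indices/ssh_auth-*?v"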

Once you are done with configuration, proceed to install and configure Filebeat data shippers.

Install and Configure Filebeat 7 on Ubuntu 18.04/Debian 9.8

That is all on how to install and configure Logstash 7 on Ubuntu 18.04/Debian 9.8.

Reference:

Getting Started with Logstash

Related Guides;

Install Logstash 7 on Fedora 30/Fedora 29/CentOS 7

Install Elastic Stack 7 on Fedora 30/Fedora 29/CentOS 7
