Install Logstash 9 on Ubuntu/Debian

This guide shows you how to install Logstash 9 on Ubuntu/Debian as a continuation of our guide on how to set up Elastic Stack 9. We have already covered the installation of Elasticsearch and Kibana.

Install Elastic/ELK Stack on Ubuntu/Debian

According to the installation order, Logstash is the third component in line. Note that for the Elastic Stack to function well, the versions of all the components must match.
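
Before proceeding, you can quickly confirm the version of Elasticsearch you are running. The check below assumes Elasticsearch is reachable on localhost with security enabled and that you have the elastic user password at hand; adjust the host to match your setup.

curl -sk -u elastic https://localhost:9200 | grep '"number"'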

Note that you can install Logstash 9 on the same node where Elasticsearch is running or on a different node.

Install Logstash 9 on Ubuntu/Debian

Install Elastic 9.x Repositories

Info
Skip this step if you are installing Logstash 9 on the same node as Kibana and Elasticsearch; the repositories should already be in place.
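
If you are not sure whether the Elastic repositories are already configured, a quick way to check is to list the APT sources (the file name elastic-9.x.list matches what we create later in this step);

ls /etc/apt/sources.list.d/ | grep -i elastic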

You can install Logstash 9 either using the DEB binary or directly from the APT repositories.

We prefer the installation from the APT repositories, as this ensures seamless upgrades whenever new package releases are available.

Thus, to install Logstash 9 via the APT repos;

Elevate your privileges;

sudo su -

Install the repository public signing key

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor > /etc/apt/trusted.gpg.d/elasticsearch.gpg
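
You can optionally confirm that the signing key was saved correctly; gpg --show-keys simply prints the key details from the file without importing anything.

gpg --show-keys /etc/apt/trusted.gpg.d/elasticsearch.gpg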

Install Elastic Stack APT repositories

apt install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/9.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-9.x.list

Install Logstash 9 on Ubuntu/Debian

Install Logstash 9;

apt update
apt install logstash
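
Once the installation completes, you can optionally confirm the installed Logstash version;

/usr/share/logstash/bin/logstash --version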

Configure Logstash 9

Once the installation is done, proceed to configure Logstash.

Logstash data processing pipeline has three sections;

  • INPUT: the input section is used to ingest data from different endpoints into Logstash.
  • FILTERS: which process and transform the data received.
  • OUTPUT: which stashes the processed data into a specified destination, such as Elasticsearch.

You can read more about Logstash Pipeline here.

While configuring Logstash, you can have separate configuration files, one each for INPUT, FILTERS and OUTPUT, or a single configuration file for all the sections. This guide uses a single configuration file.

Configure Logstash Input plugin

To begin with, you need to define how data will be ingested into Logstash. For example, to configure Logstash to receive data from Elastic Beats on TCP port 5044, the input configuration may look like;

input {
  beats {
    port => 5044
  }
}
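
If a firewall is running on the Logstash node, remember to allow the Beats port so that remote agents can reach it. The command below assumes UFW is in use; adjust accordingly for your firewall of choice.

ufw allow 5044/tcp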

Configure Logstash Filters

Now that we have defined Beats as the Logstash input plugin, proceed to configure a filter plugin to process the events received from the Beats agents. This guide uses the grok filter plugin. You can read about other plugins here.

For demonstration purposes, we are going to configure Beats to collect SSH authentication events from Linux nodes. Hence, we are going to create a filter to process such events as shown below.

The grok pattern used in this example matches SSH authentication log lines such as the ones below;

May  1 13:15:23 elk sshd[1387]: Failed password for testuser from 192.168.0.102 port 60004 ssh2
May  1 13:08:30 elk sshd[1338]: Accepted password for testuser from 192.168.0.102 port 59958 ssh2

Sample grok filter:

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
}

In summary, this grok filter is specifically created to:

  • Identify SSH login events (both successful and failed authentication attempts).
  • Extract relevant metadata:
    • timestamp: When the event occurred.
    • dst_host: Host receiving the SSH connection.
    • syslog_program: Logging service (e.g., sshd).
    • status: Whether the attempt was “Accepted password” or “Failed password”.
    • auth_user: Username used for authentication.
    • src_host: Source IP address of the SSH client.
  • Add a custom field:
    • "activity" = "SSH Logins"
  • Add a tag:
    • "linux_auth"

Kibana comes bundled with a Grok Debugger, similar to the herokuapp grokdebugger. You can access it in Kibana under Dev Tools > Grok Debugger and use it to build and verify your grok patterns.

You can as well check common Logstash grok patterns here.

Also, for the purposes of keeping the demo simple, we will add a filter to drop all the events that do not match our grok filter for the SSH authentication events used above;

if "_grokparsefailure" in [tags] { drop {} }

Such that our filter looks like;

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
  if "_grokparsefailure" in [tags] { drop {} }
}

Configure Logstash Output

There are different output plugins that enable Logstash to send event data to particular destinations. This guide uses the elasticsearch output plugin, which enables Logstash to send data directly to Elasticsearch.

Note that with Elastic Stack 9, Elasticsearch is configured with SSL/TLS and authentication enabled by default.

This means that if you are using Elasticsearch output;

  • You need to have the Elastic Stack SSL/TLS CA certificates to be able to connect to Elasticsearch.
  • You also need to have the right credentials to be able to write to an index on Elasticsearch.

Logstash output configuration may look like;


output {
  elasticsearch {
    hosts => ["https://elk-node-01.kifarunix-demo.com:9200"]
    ssl_certificate_authorities => '/etc/logstash/elasticsearch-ca.crt'
    user => 'elastic'
    password => '<elastic_user_password>'
  }
}

This configuration;

  • sends data to Elasticsearch running on the host elk-node-01.kifarunix-demo.com. The name should be resolvable.
  • Ensure port 9200/TCP is opened on the firewall.
  • uses the Elasticsearch superuser (elastic), which can write to any index. Consider creating a different user and granting it permissions to write only to a specific index. We use the elastic user password from our guide on installing Elastic Stack 9.
  • will create and write data to the default destination on Elasticsearch (since we used the superuser, no extra index privileges are needed). Note that with Logstash 9 the elasticsearch output resolves data_stream => auto to true, as the startup logs later in this guide show, so events are written to the default logs-* data stream rather than the classic logstash-* index.

Check how to create publishing roles for specific user on a specific index.
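
For example, below is a minimal sketch of how a dedicated role and user could be created via the Elasticsearch security API. The role name logstash_writer, the user logstash_internal, the password placeholder and the logstash-* index pattern are illustrative only; adjust them to your environment.

curl -k -u elastic -X POST "https://elk-node-01.kifarunix-demo.com:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' -d '
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    { "names": ["logstash-*"], "privileges": ["create_index", "write", "create"] }
  ]
}'

curl -k -u elastic -X POST "https://elk-node-01.kifarunix-demo.com:9200/_security/user/logstash_internal" \
  -H 'Content-Type: application/json' -d '
{
  "password": "<strong_password_here>",
  "roles": ["logstash_writer"]
}'

You would then use the new credentials in the Logstash elasticsearch output instead of the elastic superuser.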

  • Download the Elasticsearch CA certificate and save it to the file specified by the ssl_certificate_authorities parameter above, /etc/logstash/elasticsearch-ca.crt.
openssl s_client -showcerts -connect elk-node-01.kifarunix-demo.com:9200 </dev/null 2>/dev/null \
| openssl x509 > /etc/logstash/elasticsearch-ca.crt
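
You can then verify that the certificate works against Elasticsearch (an optional check; you will be prompted for the elastic user password);

curl --cacert /etc/logstash/elasticsearch-ca.crt -u elastic https://elk-node-01.kifarunix-demo.com:9200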

Putting together the INPUT, FILTER and OUTPUT configs in one file, create a configuration file as shown below;

vim /etc/logstash/conf.d/ssh-authentication.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp}\s+%{IPORHOST:dst_host}\s+%{WORD:syslog_program}\[\d+\]:\s+(?<status>\w+\s+password)\s+for\s+%{USER:auth_user}\s+from\s+%{SYSLOGHOST:src_host}.*" }
    add_field => { "activity" => "SSH Logins" }
    add_tag => [ "linux_auth" ]
  }
  if "_grokparsefailure" in [tags] { drop {} }
}
output {
  elasticsearch {
    hosts => ["https://elk-node-01.kifarunix-demo.com:9200"]
    ssl_certificate_authorities => '/etc/logstash/elasticsearch-ca.crt'
    user => 'elastic'
    password => '<elastic_user_password>'
  }
}

You can also store the password in the Logstash keystore instead of placing it in the configuration file in plaintext.
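
Below is a quick sketch of how this could be done with the logstash-keystore utility; the key name ES_PWD is just an illustrative choice.

/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
/usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD

You would then reference the stored value in the output section with password => "${ES_PWD}".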

If you need to send the event data to standard output for the purposes of debugging plugin configurations, add the line stdout { codec => rubydebug } to the output configuration section;


output {
  elasticsearch {
    hosts => ["https://elk-node-01.kifarunix-demo.com:9200"]
    ssl_certificate_authorities => '/etc/logstash/elasticsearch-ca.crt'
    user => 'elastic'
    password => '<elastic_user_password>'
  }
  stdout { codec => rubydebug }
}

You can also check sample Logstash pipelines here.

If you need to debug Logstash Grok Filters to confirm that they can actually parse your logs into the required fields, see the link below on how to debug Logstash Grok filters.

How to Debug Logstash Grok Filters

Test Logstash Configuration

Once you are done with configurations, run the command below to verify the Logstash configuration before you can start it.

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
...
Configuration OK
[2025-05-18T10:15:01,408][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Well, if you get Configuration OK then you are good to go.

To run Logstash and load a specific configuration file for debugging, you can execute the command below;

sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/CONFIG-FILE.conf

For example, to use our SSH config file above;

sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ssh-authentication.conf

If you get the error “Your settings are invalid. Reason: Path "/usr/share/logstash/data" must be a writable directory. It is not writable”;

Then, you need to ensure that the data directory is writable by the logstash user. You can simply change the ownership of this directory to the logstash user. Remember, Logstash is configured to run under the user/group logstash.

Hence;

chown -R logstash: /usr/share/logstash/data

Then, re-run the command.

Sample debugging command output:

Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2025-05-18 10:19:08.964 [main] runner - Starting Logstash {"logstash.version"=>"9.0.1", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.7+6-LTS on 21.0.7+6-LTS +indy +jit [x86_64-linux]"}
[INFO ] 2025-05-18 10:19:08.970 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[INFO ] 2025-05-18 10:19:09.071 [main] StreamReadConstraintsUtil - Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000` (logstash default)
[INFO ] 2025-05-18 10:19:09.071 [main] StreamReadConstraintsUtil - Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000` (logstash default)
[INFO ] 2025-05-18 10:19:09.071 [main] StreamReadConstraintsUtil - Jackson default value override `logstash.jackson.stream-read-constraints.max-nesting-depth` configured to `1000` (logstash default)
[INFO ] 2025-05-18 10:19:09.076 [main] settings - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2025-05-18 10:19:09.077 [main] settings - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2025-05-18 10:19:09.177 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because command line options are specified
[INFO ] 2025-05-18 10:19:09.205 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"1ceba6ed-f01d-4505-b3a1-2cb3ff608660", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2025-05-18 10:19:09.612 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2025-05-18 10:19:09.842 [Converge PipelineAction::Create<main>] Reflections - Reflections took 64 ms to scan 1 urls, producing 149 keys and 521 values
[INFO ] 2025-05-18 10:19:10.108 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2025-05-18 10:19:10.122 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://elk-node-01.kifarunix-demo.com:9200"]}
[INFO ] 2025-05-18 10:19:10.207 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:[email protected]:9200/]}}
[INFO ] 2025-05-18 10:19:10.456 [[main]-pipeline-manager] elasticsearch - Connected to ES instance {:url=>"https://elastic:[email protected]:9200/"}
[INFO ] 2025-05-18 10:19:10.456 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (9.0.1) {:es_version=>9}
[INFO ] 2025-05-18 10:19:10.466 [[main]-pipeline-manager] elasticsearch - Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[WARN ] 2025-05-18 10:19:10.468 [[main]-pipeline-manager] grok - ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[INFO ] 2025-05-18 10:19:10.598 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/ssh-authentication.conf"], :thread=>"#<Thread:0x19e22503 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[INFO ] 2025-05-18 10:19:11.150 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.55}
[INFO ] 2025-05-18 10:19:11.153 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2025-05-18 10:19:11.160 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2025-05-18 10:19:11.165 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2025-05-18 10:19:11.233 [[main]<beats] Server - Starting server on port: 5044

Press CTRL+C to stop the command.

Running Logstash as a Service

You can now start and enable Logstash to run on system boot.

systemctl enable --now logstash

Check the status;

systemctl status logstash
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Sun 2025-05-18 10:22:40 UTC; 29s ago
   Main PID: 2594 (java)
      Tasks: 49 (limit: 9440)
     Memory: 654.2M (peak: 654.9M)
        CPU: 31.470s
     CGroup: /system.slice/logstash.service
             └─2594 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file>

May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,182][INFO ][logstash.outputs.elasticsearch][main] Connected to ES instance {:url=>"https://elastic:[email protected]>
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,183][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (9.0.1) {:es_version=>9}
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,189][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolv>
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,191][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patter>
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,263][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.ba>
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,774][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.51}
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,776][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,783][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,797][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>
May 18 10:22:49 elk-node-01.kifarunix-demo.com logstash[2594]: [2025-05-18T10:22:49,823][INFO ][org.logstash.beats.Server][main][bf748ab86c1b33051fb103f49f7e93db631a1d71a46409fda6525fd15469ca14] Starting s>

You can also check the Logstash log file, /var/log/logstash/logstash-plain.log, for any errors.

tail -f /var/log/logstash/logstash-plain.log

Sample logs:

[2025-05-18T10:22:49,182][INFO ][logstash.outputs.elasticsearch][main] Connected to ES instance {:url=>"https://elastic:[email protected]:9200/"}
[2025-05-18T10:22:49,183][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (9.0.1) {:es_version=>9}
[2025-05-18T10:22:49,189][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2025-05-18T10:22:49,191][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2025-05-18T10:22:49,263][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/ssh-authentication.conf"], :thread=>"#"}
[2025-05-18T10:22:49,774][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.51}
[2025-05-18T10:22:49,776][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2025-05-18T10:22:49,783][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-05-18T10:22:49,797][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2025-05-18T10:22:49,823][INFO ][org.logstash.beats.Server][main][bf748ab86c1b33051fb103f49f7e93db631a1d71a46409fda6525fd15469ca14] Starting server on port: 5044

Confirm that the port is now open and listening:

ss -altnp | grep 5044
LISTEN 0      4096                    *:5044            *:*    users:(("java",pid=2594,fd=122))                     
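
You can also query the Logstash node API on port 9600 to confirm that Logstash itself is up and responding (an optional check);

curl -s 'localhost:9600/?pretty'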

Conclusion

And that is how easy it is to install and configure Logstash 9 on Ubuntu/Debian.

In the next guide, we will cover how to ship event data into Logstash using Beats.

Other Tutorials

Setup Multinode Elasticsearch 9.x Cluster

Configure Logstash Elasticsearch Basic Authentication

Logstash: Write Specific Events to Specific Index
