Deploy a Single Node ELK Stack Cluster on Docker Containers


In this tutorial, we are going to learn how to deploy a single node ELK Stack cluster on Docker containers. Elastic Stack, the next evolution of the famous ELK Stack, is a group of open source software projects: Elasticsearch, Logstash, Kibana and Beats. Elasticsearch is a search and analytics engine; Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch; Kibana lets users visualize the data in Elasticsearch with charts and graphs; and Beats are the data shippers that ship system logs, network and infrastructure data, etc., to either Logstash for further processing or to Elasticsearch for indexing.


We will, therefore, deploy the ELK Stack on Docker containers using Docker and Docker Compose.

Docker is a platform that enables developers and system administrators to build, run, and share applications with containers. It provides command line tools such as docker and docker-compose for managing Docker containers: docker manages individual containers, while docker-compose defines, runs and manages multi-container applications.
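
For instance (a minimal illustration; the public nginx image here is just a stand-in), you could start a single container directly with docker;

docker run -d --name web -p 8080:80 nginx:alpine

With docker-compose, the same container would instead be described in a docker-compose.yml file and started with docker-compose up, an approach that pays off once several containers need to be managed together, as with the Elastic Stack below.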

Install Docker Engine

Depending on your host system's distribution, you need to install the Docker Engine. You can follow the links below to install Docker Engine on Ubuntu, Debian or CentOS 8.

Install and Use Docker on Debian 10 Buster

Install Docker CE on Ubuntu 20.04

Install and Use Docker CE on CentOS 8

Check the installed Docker version;

docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:45:44 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:15 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Install Docker Compose

For Docker compose to work, ensure that you have Docker Engine installed. You can follow the links above to install Docker Engine.

Check the current stable release version of Docker Compose on the GitHub releases page. As of this writing, Docker Compose 1.26.2 is the current stable release.

Download and install Docker Compose on your Linux system. Be sure to set the VER variable below to the current stable release version of Docker Compose.

VER=1.26.2
sudo curl -L "https://github.com/docker/compose/releases/download/$VER/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

This downloads the Docker Compose tool to the /usr/local/bin directory.

Make the Docker compose binary executable;

sudo chmod +x /usr/local/bin/docker-compose

You should now be able to use Docker compose (docker-compose) on the CLI.

Check the version of installed Docker compose to confirm that it is working as expected.

docker-compose version
docker-compose version 1.26.2, build eefe0d31
docker-py version: 4.2.2
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Allocate Enough RAM to the Docker Engine Host

Enough is relative here, but ensure that your Docker Engine host has at least 4GB of RAM.

free -h
              total        used        free      shared  buff/cache   available
Mem:          3.8Gi       236Mi       2.7Gi       1.0Mi       894Mi       3.4Gi
Swap:         2.0Gi          0B       2.0Gi
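
Note also that Elasticsearch commonly requires the kernel setting vm.max_map_count on the Docker host to be at least 262144. With discovery.type=single-node, as used in the Compose file below, the related bootstrap check is not enforced as a hard failure, but it is good practice to set it anyway;

sudo sysctl -w vm.max_map_count=262144

To persist the setting across reboots, add the line vm.max_map_count=262144 to /etc/sysctl.conf.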

Note: We are running both docker and docker-compose as a non-root user. To be able to do this, ensure that you add your standard user to the docker group.

For example, I am running this setup as the user koromicha. So, add the user to the docker group; replace the username accordingly.

sudo usermod -aG docker koromicha

Log out and log in again as the user that is added to the docker group and you should be able to run the docker and docker-compose CLI tools.
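
You can confirm that the group membership has taken effect by running a Docker command without sudo, which should now complete without permission errors;

docker ps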

Deploy a Single Node ELK Stack Cluster on Docker Containers Using Docker Compose

In this setup, we will deploy a single node Elastic Stack cluster with all three components, Elasticsearch, Logstash and Kibana, running as Docker containers on the same host.

To begin, create a parent directory from which you will build your stack.

mkdir $HOME/elastic-docker

Create Docker Compose file for Deploying Elastic Stack

According to the overview of Docker Compose, “Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration”.

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

In this setup, we will build everything using a Docker Compose file.

Setup Docker Compose file for Elastic Stack

It is now time to create the Docker Compose file for our deployment. The file defines the Docker images to pull, as well as how the three containers are configured, networked and persisted.

vim $HOME/elastic-docker/docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    container_name: kifarunix-demo-es
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    environment:
      - node.name=kifarunix-demo-es
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic

  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kifarunix-demo-kibana
    environment:
      ELASTICSEARCH_URL: http://kifarunix-demo-es:9200
      ELASTICSEARCH_HOSTS: http://kifarunix-demo-es:9200
    ports:
      - 5601:5601
    networks:
      - elastic
  
  logstash:
    image: docker.elastic.co/logstash/logstash:7.8.1
    container_name: kifarunix-demo-ls
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/conf.d/:/usr/share/logstash/pipeline/:ro
    networks:
      - elastic

volumes:
  es-data:
    driver: local

networks:
  elastic:
    driver: bridge

For a complete description of all the Docker compose configuration options, refer to Docker compose reference page.
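
Optionally, you can have Docker start the Kibana and Logstash containers only after the Elasticsearch container by adding a depends_on entry to each of their service definitions, for example, for Kibana (a minimal sketch; note that depends_on only orders container startup, it does not wait until Elasticsearch is actually ready to serve requests);

  kibana:
    ...
    depends_on:
      - elasticsearch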

Define Logstash Data Processing Pipeline

In this setup, we will configure Logstash to receive event data from Beats (Filebeat to be specific) for further processing and stashing onto the search analytics engine, Elasticsearch.

Note that Logstash is only necessary if you need to apply further processing to your event data. For example, extracting custom fields from the event data, mutating the event data etc. Otherwise, you can push the data directly to Elasticsearch from Beats.

In this setup, we will use a sample Logstash processing pipeline for ModSecurity audit logs;

mkdir -p $HOME/elastic-docker/logstash/conf.d
vim $HOME/elastic-docker/logstash/conf.d/modsec.conf
input {
  beats {
    port => 5044
  }
}
filter {
    # Extract event time, log severity level, source of attack (client), and the alert message.
    grok {
      match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[\:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+]\s(?<alert_message>.*)" }
    }
    # Extract Rules File from Alert Message
    grok {
      match => { "alert_message" => "(?<rulesfile>\[file \"(/.+.conf)\"\])" }
    }	
    grok {
      match => { "rulesfile" => "(?<rules_file>/.+.conf)" }
    }	
    # Extract Attack Type from Rules File
    grok {
      match => { "rulesfile" => "(?<attack_type>[A-Z]+-[A-Z][^.]+)" }
    }	
    # Extract Rule ID from Alert Message
    grok {
      match => { "alert_message" => "(?<ruleid>\[id \"(\d+)\"\])" }
    }	
    grok {
      match => { "ruleid" => "(?<rule_id>\d+)" }
    }
    # Extract Attack Message (msg) from Alert Message 	
    grok {
      match => { "alert_message" => "(?<msg>\[msg \S(.*?)\"\])" }
    }	
    grok {
      match => { "msg" => "(?<alert_msg>\"(.*?)\")" }
    }
    # Extract the User/Scanner Agent from Alert Message	
    grok {
      match => { "alert_message" => "(?<scanner>User-Agent' \SValue: `(.*?)')" }
    }	
    grok {
      match => { "scanner" => "(?<user_agent>:(.*?)\')" }
    }	
    grok {
      match => { "alert_message" => "(?<agent>User-Agent: (.*?)\')" }
    }	
    grok {
      match => { "agent" => "(?<user_agent>: (.*?)\')" }
    }	
    # Extract the Target Host
    grok {
      match => { "alert_message" => "(hostname \"%{IPORHOST:dst_host})" }
    }	
    # Extract the Request URI
    grok {
      match => { "alert_message" => "(uri \"%{URIPATH:request_uri})" }
    }
    grok {
      match => { "alert_message" => "(?<ref>referer: (.*))" }
    }	
    grok {
      match => { "ref" => "(?<referer> (.*))" }
    }
    mutate {
      # Remove unnecessary characters from the fields.
      gsub => [
        "alert_msg", "[\"]", "",
        "user_agent", "[:\"'`]", "",
        "user_agent", "^\s*", "",
        "referer", "^\s*", ""
      ]
      # Remove the Unnecessary fields so we can only remain with
      # General message, rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked), Request URI and Referer. 
      remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
    }	
}
output {
   elasticsearch {
     hosts => ["kifarunix-demo-es:9200"]
   }
}
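
Optionally, you can verify the pipeline syntax before deploying it by running a throw-away Logstash container against the configuration. This is a sketch that relies on the official image entrypoint passing the arguments through to Logstash; the --config.test_and_exit flag validates the configuration and exits;

docker run --rm -v $HOME/elastic-docker/logstash/conf.d/:/usr/share/logstash/pipeline/:ro docker.elastic.co/logstash/logstash:7.8.1 logstash -f /usr/share/logstash/pipeline/modsec.conf --config.test_and_exit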

Check the Docker Compose file syntax;

docker-compose -f docker-compose.yml config

Or if you are in the same directory where docker-compose.yml file is located, simply run;

docker-compose config

Deploy Elastic Stack Using Docker Compose file

Everything is now set up and we are ready to create and start our Elastic Stack containers using the docker-compose up command.

Navigate to the main directory where the Docker compose file is located. In my setup the directory is $HOME/elastic-docker.

cd $HOME/elastic-docker
docker-compose up

The command creates and starts the containers in the foreground.

Sample output;

...
kifarunix-demo-es | {"type": "server", "timestamp": "2020-08-10T19:35:00,516Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "es-docker-cluster", "node.name": "kifarunix-demo-es", "message": "adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]", "cluster.uuid": "DBy4Mwk-TB2Jum_AWDY0jw", "node.id": "4Fb4-CZ0QhG2KcZL79-8cw"  }
kifarunix-demo-ls | [2020-08-10T19:35:00,590][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
kifarunix-demo-ls | [2020-08-10T19:35:00,606][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
kifarunix-demo-ls | [2020-08-10T19:35:00,672][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
kifarunix-demo-ls | [2020-08-10T19:35:00,730][INFO ][org.logstash.beats.Server][main][ee92a68a4dc1b148e25ac3c899680db31f95563138e922a364e18e3dc052d084] Starting server on port: 5044
kifarunix-demo-ls | [2020-08-10T19:35:00,947][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
...
...
kifarunix-demo-kibana | {"type":"log","@timestamp":"2020-08-10T19:36:49Z","tags":["status","plugin:[email protected]","info"],"pid":8,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kifarunix-demo-kibana | {"type":"log","@timestamp":"2020-08-10T19:36:49Z","tags":["listening","info"],"pid":8,"message":"Server running at http://0:5601"}
kifarunix-demo-es | {"type": "server", "timestamp": "2020-08-10T19:36:49,957Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "es-docker-cluster", "node.name": "kifarunix-demo-es", "message": "[.kibana_task_manager_1/a3B8lwzxQjiNHtFEAExOaQ] update_mapping [_doc]", "cluster.uuid": "DBy4Mwk-TB2Jum_AWDY0jw", "node.id": "4Fb4-CZ0QhG2KcZL79-8cw"  }
kifarunix-demo-kibana | {"type":"log","@timestamp":"2020-08-10T19:36:50Z","tags":["info","http","server","Kibana"],"pid":8,"message":"http server running at http://0:5601"}
...

When you stop the docker-compose up command, all containers are stopped.

From another console, you can check the running containers. Note that you can use the docker-compose command much as you would the docker command.

docker-compose ps
        Name                       Command               State                Ports              
-------------------------------------------------------------------------------------------------
kifarunix-demo-es       /tini -- /usr/local/bin/do ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
kifarunix-demo-kibana   /usr/local/bin/dumb-init - ...   Up      0.0.0.0:5601->5601/tcp          
kifarunix-demo-ls       /usr/local/bin/docker-entr ...   Up      0.0.0.0:5044->5044/tcp, 9600/tcp

From the output, you can see that the containers are running and that their ports are published on all host interfaces (0.0.0.0) to allow external access.
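
You can also confirm, from the host, that Elasticsearch is up and reachable. A request to port 9200 should return a JSON document with the cluster name and version details;

curl http://localhost:9200

Similarly, you can check the cluster health status;

curl http://localhost:9200/_cluster/health?pretty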

If the containers are running in the foreground, you can press ctrl+c to cancel the command and stop them. To instead run, or relaunch, the stack containers in the background, use the -d option;

docker-compose up -d
Starting kifarunix-demo-ls     ... done
Starting kifarunix-demo-es     ... done
Starting kifarunix-demo-kibana ... done
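
To stop containers running in the background, or to tear the stack down completely, use;

docker-compose stop
docker-compose down

Note that docker-compose down removes the containers and the network, but the named es-data volume, and hence the Elasticsearch data, is preserved unless you also pass the -v option.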

You can as well list the running containers using the docker command;

docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
130eec8df661        docker.elastic.co/kibana/kibana:7.8.1                 "/usr/local/bin/dumb…"   38 minutes ago      Up About a minute   0.0.0.0:5601->5601/tcp             kifarunix-demo-kibana
6648df61c44b        docker.elastic.co/logstash/logstash:7.8.1             "/usr/local/bin/dock…"   41 minutes ago      Up About a minute   0.0.0.0:5044->5044/tcp, 9600/tcp   kifarunix-demo-ls
db9936abbee2        docker.elastic.co/elasticsearch/elasticsearch:7.8.1   "/tini -- /usr/local…"   41 minutes ago      Up About a minute   0.0.0.0:9200->9200/tcp, 9300/tcp   kifarunix-demo-es

To find the details of a container, use the docker inspect <container-name> command. For example;

docker inspect kifarunix-demo-es

To get the logs of a container, use the command docker logs [OPTIONS] CONTAINER. For example, to get Elasticsearch container logs;

docker logs kifarunix-demo-es

If you need to view only a specific number of log lines, use the --tail option, e.g. to get the last 50 lines;

docker logs --tail 50 kifarunix-demo-es
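
To follow the logs in real-time, similar to tail -f, use the -f option;

docker logs -f kifarunix-demo-es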

Accessing Kibana Container from Browser

Once the stack is up and running, you can access Kibana externally using the host IP address and the port on which it is exposed. In our setup, the Kibana container port 5601 is published on the same port on the host;

docker port kifarunix-demo-kibana
5601/tcp -> 0.0.0.0:5601

This means that you can reach the Kibana container on port 5601 via any interface on the host. You can similarly check any other container's port exposure using the docker port command.

Therefore, you can access Kibana using your Container host address, http://<IP-Address>:5601.

At this point, we do not have any data in our stack yet.

Sending data to Elastic Stack

Since we configured Logstash to receive event data from Beats, we will configure Filebeat to forward events to the Logstash container. A minimal sketch of the relevant Filebeat configuration is shown after the links below.

We already covered how to install and configure Filebeat to forward event data in our previous guides;

Install and Configure Filebeat on CentOS 8

Install Filebeat on Fedora 30/Fedora 29/CentOS 7

Install and Configure Filebeat 7 on Ubuntu 18.04/Debian 9.8
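
For quick reference, the relevant part of the Filebeat configuration, /etc/filebeat/filebeat.yml on the host shipping the logs, simply points the Logstash output at the Docker host IP address and the published port 5044. Below is a minimal sketch; the log path is just an example and should be replaced with the location of your ModSecurity audit logs;

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/modsec_audit.log

output.logstash:
  hosts: ["<docker-host-IP>:5044"]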

Once you are forwarding data to your Logstash container, the next thing you need to do is create a Kibana index pattern.

Open the menu, then go to Stack Management > Kibana > Index Patterns and create an index pattern. The Logstash Elasticsearch output above writes to the default logstash-* indices, so logstash-* works as the index pattern here.

Once done, head to the Discover menu to view your data. You should now be able to see your custom Logstash fields populated.


That marks the end of our tutorial on how to deploy a single node Elastic Stack cluster on Docker Containers.

