How to Setup Multinode Elasticsearch 9 Cluster

Follow through this tutorial to learn how to set up a multinode Elasticsearch 9 cluster. As of this writing, Elastic Stack 9.0 is the current release, which means that Elasticsearch 9.0.1, one of the major components of the Elastic Stack, is also the current release version.

Setup Multinode Elasticsearch 9 Cluster

In our previous tutorial, we learnt how to set up a three-node Elasticsearch 8.x cluster.

We will likewise be configuring a three-node Elasticsearch 9.x cluster in this tutorial.

My Environment:

  • Node 1: elk-node-01.kifarunix-demo.com
  • Node 2: elk-node-02.kifarunix-demo.com
  • Node 3: elk-node-03.kifarunix-demo.com

Ensure that the host names are resolvable on each node. If you do not have a DNS server, then you can use your hosts file.

Sample hosts file on one of the nodes:

cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.200.8 elk-node-01.kifarunix-demo.com elk-node-01
192.168.200.10 elk-node-02.kifarunix-demo.com elk-node-02
192.168.200.7 elk-node-03.kifarunix-demo.com elk-node-03
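If you manage several nodes, the hosts entries above can be added idempotently with a small helper. This is only a sketch (add_host is a hypothetical helper, not a standard tool); it writes to a temporary file here so it is safe to dry-run, but you would point HOSTS at /etc/hosts on each node.

```shell
# Stand-in for /etc/hosts so the sketch can be dry-run safely
HOSTS=$(mktemp)

add_host() {
  # append "IP FQDN shortname" only if the IP is not already present
  grep -q "^$1 " "$HOSTS" || echo "$1 $2 $3" >> "$HOSTS"
}

add_host 192.168.200.8  elk-node-01.kifarunix-demo.com elk-node-01
add_host 192.168.200.10 elk-node-02.kifarunix-demo.com elk-node-02
add_host 192.168.200.7  elk-node-03.kifarunix-demo.com elk-node-03
add_host 192.168.200.8  elk-node-01.kifarunix-demo.com elk-node-01  # no-op: already present

grep -c elk-node "$HOSTS"
```

Running the helper twice for the same IP does not duplicate the entry, so the snippet is safe to re-run as part of provisioning.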

Install Elasticsearch 9.x on All Cluster Nodes

You need to install the same version of Elasticsearch 9.x on all the cluster nodes.

In this tutorial, we will be using Ubuntu 24.04 systems.

I have already deployed a fully operational ELK stack on my first node via the guide below;

Install ELK Stack 9.x on Ubuntu 24.04

Thus, I will be installing Elasticsearch on the two additional nodes, elk-node-02 and elk-node-03.

To install Elasticsearch 9.x on Ubuntu, you need to add the Elastic APT repositories as follows;

Elevate your privileges:

sudo su -

Install Elastic repos:

apt install apt-transport-https \
	ca-certificates \
	curl \
	gnupg2 \
	software-properties-common -y
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
	gpg --dearmor -o /etc/apt/trusted.gpg.d/elastic.gpg
echo "deb https://artifacts.elastic.co/packages/9.x/apt stable main" > \
/etc/apt/sources.list.d/elastic-9.x.list

Run system update;

apt update

Once the repos are in place, install Elasticsearch 9.x on all the cluster nodes using the command below;

apt install elasticsearch -y

During the installation, as usual, the security features will be enabled by default;

  • Authentication and authorization are enabled.
  • TLS for the transport and HTTP layers is enabled and configured. Self-signed SSL certs are generated and used.
  • The elastic built-in superuser account and its password are created.
  • Elasticsearch is configured as a single node cluster.

Sample installation output;

...
Setting up elasticsearch (9.0.1) ...
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : wPolTfj3XN5_BpViHnyT

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token '
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with 
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with 
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with 
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
⚠️Warning
DO NOT start the Elasticsearch service yet! If one of the nodes already has ES running, that is fine; let it run, but do not start the service on the additional nodes to be added to the cluster.

Configure Elasticsearch 9.x

Now that Elasticsearch is installed, proceed to configure it.

On each of the cluster nodes, open the Elasticsearch configuration file for editing.

vim /etc/elasticsearch/elasticsearch.yml

Set the Name of the Cluster on All Cluster Nodes

Optionally set the name of the cluster on each Node;


# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: kifarunix-demo-es-prod
...

Set the Node Name on Each of the Cluster Nodes

Sample config on elk-node-01:


# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: elk-node-01
...

Do the same on other nodes.

Define the Roles of Elasticsearch Node

You can assign each node one or more roles, such as master, data, ingest, or coordinating-only. In this setup, we will configure all three nodes to act as both master and data nodes to make the cluster resilient to the loss of any single node.

...
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: elk-node-01
node.roles: [ master, data ]
...

Enable Memory Lock

To ensure good performance of Elasticsearch, you need to prevent its memory from being swapped out by enabling memory lock. Hence, uncomment the line bootstrap.memory_lock: true. This is one of several ways of disabling swapping.


...
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
...

Enable Elasticsearch Inter-node communication

By default, Elasticsearch 8.x+ sets http.host: 0.0.0.0, meaning the HTTP layer (for REST API requests, port 9200) listens on all network interfaces. This allows external access to the API but does not affect inter-node communication. You can bind this to a specific interface IP if you want.

Inter-node communication uses the transport layer (port 9300). To enable it, you must configure transport.host to bind to an appropriate interface:

  • Set transport.host: 0.0.0.0 to listen on all interfaces.
  • Alternatively, specify a private interface (e.g., transport.host: 10.0.0.10) for secure, internal communication between nodes.
  • If transport.host is not set, Elasticsearch falls back to network.host (listens on loopback interface by default) for transport binding.

My config looks like this on all the nodes.

...
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
transport.host: 0.0.0.0

Update the rest of the nodes accordingly.

In earlier versions (pre-8.x), network.host was the primary setting to simplify binding both HTTP and transport layers to the same host (e.g., network.host: 0.0.0.0 for all interfaces). In 8.x+, network.host is a fallback and convenience setting. It applies to both HTTP and transport.

Elasticsearch by default uses TCP port 9200 to expose its REST APIs, while TCP ports 9300-9400 are used for inter-node communication.

Discovery and Cluster Formation settings

There are two important discovery and cluster formation settings that should be configured before going to production so that nodes in the cluster can discover each other and elect a master node;

  • discovery.seed_hosts:
    • discovery.seed_hosts setting provides a list of master-eligible nodes in the cluster. Each value has the format host:port or host, where port defaults to the setting transport.profiles.default.port. This setting was previously known as discovery.zen.ping.unicast.hosts.

Configure this setting on all Nodes. HOWEVER, because Elasticsearch is auto-configured as a single-node cluster during installation, we need to start the Elasticsearch service and enroll the additional nodes into the cluster before this setting is updated. As a result, we will SKIP the configuration of this setting for now.

  • cluster.initial_master_nodes:
    • cluster.initial_master_nodes setting defines the initial set of master-eligible nodes. This is important when starting an Elasticsearch cluster for the very first time. After the cluster has formed, remove this setting from each node’s configuration. The value of this setting MUST match the value of node.name.

Note that during the installation, Elasticsearch is auto-configured as a single node cluster. For example on Node 01.


...
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elk-node-01.kifarunix-demo.com"]
...

The same applies to other nodes.

Update this line such that the name of the node matches the value of node.name.

For example, in this setup, this line should be;

cluster.initial_master_nodes: ["elk-node-01"]

See Discovery and Cluster Formation settings below.

Configure Elasticsearch Cluster HTTPS Connection

By default, Elasticsearch 9.x is auto-configured with self-signed SSL certificates for both the Transport layer (connections between the nodes) and the HTTP layer (HTTP API client connections, such as Kibana, Logstash, and Agents).

Later, when you add other nodes to the cluster, the security auto-configuration is removed from the nodes being enrolled, and the certificates from the first node are copied over to them.

For communication between the nodes, you should see configurations such as the following in elasticsearch.yml.


# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

For HTTP API client connections, such as Kibana, Logstash, and Agents;

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

Save changes made and exit the file.

Extract Elasticsearch CA Private Key

Kindly note that the CA certificate is generated and stored as /etc/elasticsearch/certs/http_ca.crt.

The CA key for the CA certificate is stored in the /etc/elasticsearch/certs/http.p12 file. This file is password protected, and the password used to protect it is kept in the Elasticsearch keystore.

Thus, to get the CA key, you first need to retrieve the password that was used to protect it by executing the command below. Note that we are running this command on the first Elasticsearch node.

/usr/share/elasticsearch/bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password

The command will print the Keystore password to standard output.

Once you have the Keystore password, extract the CA key;

openssl pkcs12 -in /etc/elasticsearch/certs/http.p12 -nocerts -nodes

You will be prompted to enter the password. Use the one retrieved above.

The command will print two keys, each with a friendlyName, e.g., friendlyName: http_ca and friendlyName: http.

Bag Attributes
    friendlyName: http_ca
    localKeyID: 54 69 6D 65 20 31 37 30 30 37 35 38 35 31 39 32 33 39 
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQC4y7ivLZ2UJJqp
9xKj2q5yWO6RFSXoJo92fNtaVdfu4QULNLSn540Z4nGE+pjkP1u15/H5mFzQLQQ0
...
D8B5Cf9vGrO/CAjnY87BJNRFBrehhnLNFeh1pbdEMAORibfMxtn7k9EGYRdXSdrN
Xv+Zfn4rcd4zFjtMZ8fjOZXXhansJrmBAwAX9SmFtliD96OaNhEV1+3HLScDoR/K
0FZM/3K5DrR8Ed3vWKqAtEOgi5AK
-----END PRIVATE KEY-----
Bag Attributes
    friendlyName: http
    localKeyID: 54 69 6D 65 20 31 37 30 30 37 35 38 35 31 39 32 36 32 
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQC8kJYcWvgzcjRd
qzMagpo3Op94hNDJ2AX2gKP3V5B1kX4tlbjZxWwGLknfBA/Sz5fTkle8z/P0dVCf
...
LrXnUNXS07bOVrBhYoi7pNbrvfiGrbrZ5aInn+NVSKy7Mkav7VaiwfhxMBwhD0kj
esbwv62ZEoAziXeW95iQxvprroZgEAgUsyZJ/cHilJ4c5YIkv2en21pGcGEtoWpv
Lc00BYUVRYhNU3H1h6CRQkbnHsNB5X4=
-----END PRIVATE KEY-----

The CA key will be the one under the friendlyName: http_ca.

You can copy the key, everything from -----BEGIN PRIVATE KEY----- to -----END PRIVATE KEY----- inclusive, and store it in a file of your choice, e.g. /etc/elasticsearch/certs/http_ca.key.
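After saving the key, it is worth verifying that it actually matches the CA certificate by comparing their public keys. The check below is demonstrated on a throwaway self-signed pair so it can be run anywhere; on the cluster, substitute /etc/elasticsearch/certs/http_ca.crt and the http_ca.key file you extracted.

```shell
# Generate a disposable key/certificate pair purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=demo" 2>/dev/null

# A key matches a certificate when both yield the same public key
cert_pub=$(openssl x509 -in demo.crt -noout -pubkey)
key_pub=$(openssl pkey -in demo.key -pubout 2>/dev/null)
[ "$cert_pub" = "$key_pub" ] && echo "key matches certificate"
```

If the two public keys differ, you most likely copied the key under friendlyName: http instead of friendlyName: http_ca.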

Configure Other Important Elasticsearch Systems Settings

Disable Memory Swapping on All Cluster Nodes

Enabling memory lock as done above is one of the ways of disabling swapping. You also need to ensure that memory locking is permitted at the Elasticsearch service level. This can be done as follows;

[[ -d /etc/systemd/system/elasticsearch.service.d ]] || mkdir /etc/systemd/system/elasticsearch.service.d
echo -e '[Service]\nLimitMEMLOCK=infinity' > \
/etc/systemd/system/elasticsearch.service.d/override.conf

Whenever a systemd service is modified, you need to reload the systemd configurations.

systemctl daemon-reload

One of the recommended ways to disable swapping is to completely disable swap if Elasticsearch is the only service running on the server.

swapoff -a

Edit the /etc/fstab file and comment out any lines that contain the word swap;

sed -i.bak '/swap/s/^/#/' /etc/fstab

Otherwise, disable swappiness in the kernel configuration;

echo 'vm.swappiness=1' >> /etc/sysctl.conf
sysctl -p

Set JVM Heap Size on All Cluster Nodes

Elasticsearch usually sets the heap size automatically based on the roles of the node and the available memory. However, if you want to go with manual configuration, as a rule of thumb, set Xms and Xmx to no more than 50% of your physical RAM. Any custom JVM settings should be placed under /etc/elasticsearch/jvm.options.d.

For example, to set the heap size to 1G of RAM:

echo -e "-Xms1g\n-Xmx1g" > /etc/elasticsearch/jvm.options.d/jvm.options
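If you prefer to derive the heap size rather than hard-code it, a sketch like the following (half_ram_gb is a hypothetical helper) computes half of the physical RAM from /proc/meminfo, clamped at 31 GB since heaps above roughly 32 GB lose compressed object pointers:

```shell
# Compute half of physical RAM in GB, clamped to the 1-31 GB range
half_ram_gb() {
  local kb g
  kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  g=$(( kb / 1024 / 1024 / 2 ))
  if (( g < 1 )); then g=1; fi
  if (( g > 31 )); then g=31; fi
  echo "$g"
}

HEAP_GB=$(half_ram_gb)
printf -- '-Xms%sg\n-Xmx%sg\n' "$HEAP_GB" "$HEAP_GB"
# redirect the printf output to a file under /etc/elasticsearch/jvm.options.d/ to apply
```

The snippet only prints the options; redirect its output into the jvm.options.d file as shown above once you are happy with the computed value.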

Set maximum Open File Descriptor on All Cluster Nodes

Set the maximum number of open files for the elasticsearch user to 65,535. This is already set by default in /usr/lib/systemd/system/elasticsearch.service.

less /usr/lib/systemd/system/elasticsearch.service
...
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535
...

The maximum number of processes is also already set in the same unit file.

...
# Specifies the maximum number of processes
LimitNPROC=4096
...

Update Virtual Memory Settings on All Cluster Nodes

Elasticsearch uses a mmapfs directory by default to store its indices. To ensure that you do not run out of virtual memory, edit the /etc/sysctl.conf and update the value of vm.max_map_count as shown below.

vm.max_map_count=262144

You can simply run the command below to configure virtual memory settings.

echo "vm.max_map_count=262144" >> /etc/sysctl.conf

To apply the changes;

sysctl -p
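Note that the >> append above will add a duplicate line if you run it twice. An idempotent variant, sketched here against a temporary file (set_sysctl is a hypothetical helper; use /etc/sysctl.conf on the node), only appends when the key is not already present:

```shell
CONF=$(mktemp)   # stand-in for /etc/sysctl.conf

set_sysctl() {
  # append "key=value" only if the key is not already configured
  grep -q "^$1=" "$CONF" || echo "$1=$2" >> "$CONF"
}

set_sysctl vm.max_map_count 262144
set_sysctl vm.max_map_count 262144   # second call is a no-op

grep -c '^vm.max_map_count' "$CONF"
```

This keeps /etc/sysctl.conf clean no matter how many times your provisioning script runs.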

General Elasticsearch Configurations so far

So far, below is the configuration file on each node;

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml

Node 01;

cluster.name: kifarunix-demo-es-prod
node.name: elk-node-01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
node.roles: [ master, data ]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["elk-node-01"]
http.host: 0.0.0.0
transport.host: 0.0.0.0

Node 02

cluster.name: kifarunix-demo-es-prod
node.name: elk-node-02
node.roles: [ master, data ]
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["elk-node-02"]
http.host: 0.0.0.0
transport.host: 0.0.0.0

Node 03

cluster.name: kifarunix-demo-es-prod
node.roles: [ master, data ]
node.name: elk-node-03
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["elk-node-03"]
http.host: 0.0.0.0
transport.host: 0.0.0.0

Start and Enable Elasticsearch Service on Node 01

For now, start and enable Elasticsearch service to run on system boot on Node 01 ONLY.

systemctl enable --now elasticsearch

If it was already running, restart it!

systemctl restart elasticsearch

Confirm that Elasticsearch is running;

systemctl status elasticsearch

Sample status output;

● elasticsearch.service - Elasticsearch
     Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/elasticsearch.service.d
             └─override.conf
     Active: active (running) since Sun 2025-05-18 17:19:34 UTC; 6s ago
       Docs: https://www.elastic.co
   Main PID: 5722 (java)
      Tasks: 110 (limit: 9440)
     Memory: 4.7G (peak: 4.7G)
        CPU: 56.539s
     CGroup: /system.slice/elasticsearch.service
             ├─5722 /usr/share/elasticsearch/jdk/bin/java -Xms4m -Xmx64m -XX:+UseSerialGC -Dcli.name=server -Dcli.script=/usr/share/elasticsearch/bin/elasticsearch -Dcli.libs=lib/tools/server-cli -Des.path>
             ├─5790 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF->
             └─5813 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

May 18 17:19:19 elk-node-01.kifarunix-demo.com systemd[1]: Starting elasticsearch.service - Elasticsearch...
May 18 17:19:34 elk-node-01.kifarunix-demo.com systemd[1]: Started elasticsearch.service - Elasticsearch.

Confirm that the ports are open, then test the HTTP API using the elastic password generated during installation;

ss -altnp | grep -iE '92|93'
LISTEN 0      511         192.168.200.8:5601      0.0.0.0:*    users:(("node",pid=1791,fd=96))                       
LISTEN 0      4096                    *:9300            *:*    users:(("java",pid=5790,fd=551))                      
LISTEN 0      4096                    *:9200            *:*    users:(("java",pid=5790,fd=553))
curl -k -u elastic https://elk-node-01:9200
Enter host password for user 'elastic':
{
  "name" : "elk-node-01",
  "cluster_name" : "kifarunix-demo-es-prod",
  "cluster_uuid" : "B7e6dOg6QviRnn3PkW0bSQ",
  "version" : {
    "number" : "9.0.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "73f7594ea00db50aa7e941e151a5b3985f01e364",
    "build_date" : "2025-04-30T10:07:41.393025990Z",
    "build_snapshot" : false,
    "lucene_version" : "10.1.0",
    "minimum_wire_compatibility_version" : "8.18.0",
    "minimum_index_compatibility_version" : "8.0.0"
  },
  "tagline" : "You Know, for Search"
}

Enroll Other Nodes into Elasticsearch Cluster

At this point, Elasticsearch is running on Node 01 ONLY.

Generate Elasticsearch Cluster Node Enrollment Token

Next, you need to generate an Elasticsearch cluster node enrollment token. Do this only on a node where Elasticsearch is already running.

In this setup, we will generate the enrollment token on Node 01 ONLY, since that is the node where we started the Elasticsearch service.

/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node

Sample token;

eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4yMDAuODo5MjAwIl0sImZnciI6Ijc2MTlkZGRlMTEwN2MzODA0MWU3NGJlOWQyYzVlNDdiYjc0YTNjNGMyMGQzMDlhZmU5MmJkYzBkNzlhYjFkZDQiLCJrZXkiOiI2RFZyNUpZQlVqU19CUEZYUXFrRjpvSXlVanc0Q2xvU0s0d21QUGJJSW1RIn0=
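The token itself is just base64-encoded JSON; decoding it shows what enrollment relies on: the first node's HTTP address (adr), the CA certificate fingerprint (fgr), and a temporary API key (key). Decoding the sample token above:

```shell
TOKEN=eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4yMDAuODo5MjAwIl0sImZnciI6Ijc2MTlkZGRlMTEwN2MzODA0MWU3NGJlOWQyYzVlNDdiYjc0YTNjNGMyMGQzMDlhZmU5MmJkYzBkNzlhYjFkZDQiLCJrZXkiOiI2RFZyNUpZQlVqU19CUEZYUXFrRjpvSXlVanc0Q2xvU0s0d21QUGJJSW1RIn0=

# decode the token to inspect its contents
echo "$TOKEN" | base64 -d; echo
```

Because the token embeds an API key, treat it like a credential: it is short-lived, but avoid leaving it in shell history files or scripts.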

Once you have the token, enroll other nodes.

Enroll Other Nodes into Cluster using Enrollment Token

ENSURE that the cluster transport port (9300/tcp) is open on the firewall on each node to allow cluster formation.

Login to and enroll Elasticsearch Node 02;

/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <PASTE TOKEN ABOVE> 

For example;

/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTkyLjE2OC4yMDAuODo5MjAwIl0sImZnciI6Ijc2MTlkZGRlMTEwN2MzODA0MWU3NGJlOWQyYzVlNDdiYjc0YTNjNGMyMGQzMDlhZmU5MmJkYzBkNzlhYjFkZDQiLCJrZXkiOiI2RFZyNUpZQlVqU19CUEZYUXFrRjpvSXlVanc0Q2xvU0s0d21QUGJJSW1RIn0=

The command will ask if you want to reconfigure the node. Say yes and proceed.


This node will be reconfigured to join an existing cluster, using the enrollment token that you provided.
This operation will overwrite the existing configuration. Specifically: 
  - Security auto configuration will be removed from elasticsearch.yml
  - The [certs] config directory will be removed
  - Security auto configuration related secure settings will be removed from the elasticsearch.keystore
Do you want to continue with the reconfiguration process [y/N]y

Similarly, run the same enrollment command on the other nodes.

For example, log in to Node 03 and run the same enrollment command above.

Start Elasticsearch on Other Nodes

Once the enrollment is done, start the Elasticsearch service on each enrolled node.

systemctl enable --now elasticsearch

Discovery and Cluster Formation settings

If you noticed, the enrollment command reconfigures the cluster.initial_master_nodes setting on the nodes that are enrolled into the cluster.

It also sets discovery.seed_hosts: to the address of the first node, the one where you generated the enrollment token. You will need to update both settings accordingly.

Now, we need to configure the new nodes, Node 02 and Node 03 in this setup, as we did with Node 01.

The only change you still need to make on each of the nodes is to define the full list of the cluster nodes, starting with the first node.

Therefore, on Node 01:

vim /etc/elasticsearch/elasticsearch.yml

Basically, you need to define:

  • cluster members (discovery.seed_hosts: ["elk-node-01", "elk-node-02", "elk-node-03"])
  • the initial set of master-eligible nodes (cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"])

# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.seed_hosts: ["elk-node-01", "elk-node-02", "elk-node-03"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]
#
# For more information, consult the discovery and cluster formation module documentation.

Save and exit the configuration file.

Comment out any other cluster.initial_master_nodes: lines further down the configuration file.

At the end of it all, this is how the configuration on the first Elasticsearch node, Node 01, looks;

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml
cluster.name: kifarunix-demo-es-prod
node.name: elk-node-01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
node.roles: [ master, data ]
discovery.seed_hosts: ["elk-node-01", "elk-node-02", "elk-node-03"]
cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
transport.host: 0.0.0.0

Restart Elasticsearch Service.

systemctl restart elasticsearch

Confirm that the service is running;

systemctl status elasticsearch

Configure Node 02 just like how Node 01 has been configured. Below is our sample Node 02 configuration;

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml
cluster.name: kifarunix-demo-es-prod
node.name: elk-node-02
node.roles: [ master, data ]
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]
discovery.seed_hosts: ["elk-node-01", "elk-node-02", "elk-node-03"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
transport.host: 0.0.0.0

Restart Elasticsearch Service.

systemctl restart elasticsearch

Confirm that the service is running:

systemctl status elasticsearch

Configure Node 03 as well. Below is our sample Node 03 configuration;

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml
cluster.name: kifarunix-demo-es-prod
node.roles: [ master, data ]
node.name: elk-node-03
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
discovery.seed_hosts: ["elk-node-01", "elk-node-02", "elk-node-03"]
cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
http.host: 0.0.0.0
transport.host: 0.0.0.0

Restart Elasticsearch Service.

systemctl restart elasticsearch

Verify that the service is running;

systemctl status elasticsearch

Check the Cluster Nodes Status

curl -k -XGET "https://elk-node-01:9200/_cat/nodes?v" -u elastic
Enter host password for user 'elastic':
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.200.10           19          85   7    0.28    0.16     0.06 dm        -      elk-node-02
192.168.200.8            23          95   8    0.29    0.36     0.23 dm        *      elk-node-01
192.168.200.7            22          86  25    0.52    0.17     0.05 dm        -      elk-node-03

And there you go. You now have an Elasticsearch 9.x cluster. From the output above, Node 01 is currently the cluster master.

Check cluster health status;

curl -k -XGET "https://elk-node-01:9200/_cat/health?v" -u elastic
Enter host password for user 'elastic':
epoch      timestamp cluster                status node.total node.data shards pri relo init unassign unassign.pri pending_tasks max_task_wait_time active_shards_percent
1747593982 18:46:22  kifarunix-demo-es-prod green           3         3     76  38    0    0        0            0             0                  -                100.0%

As you can see, our cluster status is GREEN!!
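For monitoring scripts, the status column can be checked directly. The sketch below parses the sample _cat/health line shown above (embedded as static data so it can be dry-run; on a live cluster, feed it the curl output instead):

```shell
# Sample line from _cat/health; replace with live output, e.g.
# HEALTH=$(curl -sk -u elastic:PASSWORD "https://elk-node-01:9200/_cat/health")
HEALTH='1747593982 18:46:22 kifarunix-demo-es-prod green 3 3 76 38 0 0 0 0 0 - 100.0%'

# The status is the fourth whitespace-separated column
STATUS=$(echo "$HEALTH" | awk '{print $4}')

if [ "$STATUS" = "green" ]; then
  echo "cluster is green"
else
  echo "cluster is $STATUS" >&2
fi
```

A yellow status usually means unassigned replica shards, while red means at least one primary shard is unassigned, so anything other than green deserves a closer look.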

Remove initial_master_nodes Configuration

Once the cluster is formed as above, remove or comment out the following line on ALL the nodes;

cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]

Run the command below on each node to update the configuration.

sed -i.bak '/cluster.initial_master_nodes/s/^/#/' /etc/elasticsearch/elasticsearch.yml
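If you want to see the effect of that sed before touching the real file, here is a dry-run against a temporary copy:

```shell
CONF=$(mktemp)   # stand-in for /etc/elasticsearch/elasticsearch.yml
echo 'cluster.initial_master_nodes: ["elk-node-01", "elk-node-02", "elk-node-03"]' > "$CONF"

# same sed as above: prefix matching lines with a comment marker,
# keeping a .bak backup of the original
sed -i.bak '/cluster.initial_master_nodes/s/^/#/' "$CONF"

head -1 "$CONF"
```

Note that running the sed a second time will prefix the line with another #, which is harmless but untidy, so run it once per node.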

Conclusion

Your multinode Elasticsearch cluster is up and running, ready to power fast, scalable search and analytics. Keep an eye on performance and tweak as needed; your cluster is set for action.

Other Tutorials

Deploy a Single Node Elastic Stack Cluster on Docker Containers

Setup Multi-node Elasticsearch Cluster


Kifarunix
Linux Certified Engineer, with a passion for open-source technology and a strong understanding of Linux systems. With experience in system administration, troubleshooting, and automation, I am skilled in maintaining and optimizing Linux infrastructure.
