Running Multiple Nodes
Even though the lab instructions are originally designed to be executed within
our virtual environment, it is possible to run the labs on your local machine.
In this tutorial we show how to run multiple nodes on your local machine and
how to start them with different settings.
We are assuming that you have downloaded and unzipped both Elasticsearch and
Kibana.
Throughout the labs you will be asked to start multiple Elasticsearch instances
on different servers. You can simulate this behavior through the
node.max_local_storage_nodes setting, which allows one Elasticsearch
distribution to share the same data path across multiple instances that are
running on the same machine. Note that this configuration should NOT be used in
production, though it is a nice one for development purposes, especially when
you have limited resources (e.g., only one machine for development and testing).
To run a cluster with 3 Elasticsearch nodes, edit your elasticsearch.yml file
(located in elasticsearch/config) using a text editor of your choice and add
the following setting:
node.max_local_storage_nodes: 3
This configuration will allow you to run up to 3 Elasticsearch instances on
the same machine.
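If you prefer to make the edit from the terminal, it can be sketched as below. The distribution path elasticsearch-7.3.1 is an assumption; adjust it to wherever you unzipped Elasticsearch.

```shell
# Append the setting to the shared config file (path is an assumption).
ES_CONFIG="elasticsearch-7.3.1/config/elasticsearch.yml"
mkdir -p "$(dirname "$ES_CONFIG")"   # only needed if you try this outside a real distribution
echo "node.max_local_storage_nodes: 3" >> "$ES_CONFIG"
tail -n 1 "$ES_CONFIG"
```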
Now we will show how to start up your first instance. To do that, open a
terminal and change to the directory where you downloaded and unzipped
Elasticsearch. Then use the following command to start up a node called
node1:
./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node1
This command uses the option -E to set the parameter node.name to node1.
Starting the other two instances is really simple: you just need to use the
same command, but with another node name.
On a new terminal, start up a node called node2 with the following command:
./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node2
On a new terminal, start up a node called node3 with the following command:
./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node3
You can check your 3-node cluster with curl and the _cat/nodes endpoint:
curl -X GET http://localhost:9200/_cat/nodes?v
After running the command above, you should see something similar to what is shown below:
ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1           36          99  16    2.58                        mdi      - node2
127.0.0.1           31          99  17    2.58                        mdi      - node3
127.0.0.1           35          99  15    2.58                        mdi      * node1
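In this output, the master column shows * for the elected master node and - for the other nodes. As a small sketch of reading it programmatically, the snippet below picks out the master's name from a captured sample (the values are illustrative; against a live cluster you would pipe curl -s "localhost:9200/_cat/nodes?h=ip,node.role,master,name" in instead):

```shell
# Sample _cat/nodes output (ip, node.role, master, name columns only).
sample='ip        node.role master name
127.0.0.1 mdi       -      node2
127.0.0.1 mdi       -      node3
127.0.0.1 mdi       *      node1'
# The row whose third column is "*" belongs to the elected master.
master=$(echo "$sample" | awk '$3 == "*" { print $4 }')
echo "$master"   # prints: node1
```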
You can also use Kibana to check your 3-node cluster. To do that, you just need
to open a new terminal, change to the directory where you downloaded and
unzipped Kibana, start it up, access Console, and then run the equivalent of
the curl command above:
GET _cat/nodes?v
Throughout the labs you will be asked to start up your nodes with different settings. Some are valid for all the nodes, some are node specific, and some will not be necessary because it is fine to use the default settings, as we are running all instances on the same machine. Let’s see how it works for each one of them.
You should set those settings that are valid for all the nodes in the
elasticsearch.yml file. A setting is valid for all the nodes when you have to
edit elasticsearch.yml with the same value on all the nodes. For instance,
cluster.name is the same for all nodes and, therefore, could be added to the
elasticsearch.yml file.
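Because all three local instances share one distribution, a cluster-wide setting only needs to be added once. A sketch, where both the cluster name labs-cluster and the path are assumptions:

```shell
ES_CONFIG="elasticsearch-7.3.1/config/elasticsearch.yml"
mkdir -p "$(dirname "$ES_CONFIG")"   # harmless if the directory already exists
echo "cluster.name: labs-cluster" >> "$ES_CONFIG"   # picked up by every local node
grep "cluster.name" "$ES_CONFIG"
```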
You should use the command line option -E to set those settings that are node
specific. A setting is node specific when you have to edit elasticsearch.yml
with a different value on each node. For instance, in this tutorial you used
-E to set node.name, though in the labs you would do this configuration inside
the elasticsearch.yml file.
This means that you can use one -E for each specific setting you need
to set while starting your instances. For example, let’s say you want to start
a node named
node3 that is not master eligible. You can do that using
the following command:
./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node3 -E node.master=false
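In the labs, where each node runs on its own server with its own config directory, those same node-specific settings would go into that server's elasticsearch.yml instead of onto the command line. A sketch of the equivalent file for the node3 example above (the file name used here is just illustrative):

```shell
# Write the yml equivalent of "-E node.name=node3 -E node.master=false".
printf '%s\n' "node.name: node3" "node.master: false" > node3-elasticsearch.yml
cat node3-elasticsearch.yml
```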
And there are also some settings that are not necessary at all while running
all the instances on the same machine. In this case it is fine to simply use
the default values by just not setting them up. For instance,
discovery.seed_hosts does not need to be set, because with the default
configuration the nodes will discover each other on localhost.
You also don’t need to set
cluster.initial_master_nodes because the
cluster is running with a completely default configuration, and in this case it
will automatically bootstrap the cluster based on the nodes that can be
discovered to be running on the same host within a short time after startup.
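For contrast, a rough sketch of what you would typically have to set in elasticsearch.yml when the nodes really do run on separate servers, as in the labs; the hostnames and node names below are placeholders, not values from this tutorial:

```yaml
# Needed on real, separate servers; unnecessary on a single machine:
discovery.seed_hosts: ["server1", "server2", "server3"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
```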
Throughout the labs you might be asked to enable Elastic Security.
You can follow the instructions to generate a certificate and private key,
but you can skip the steps for copying certificates among nodes, as you only
have one machine and all nodes run on localhost. Then you will be asked to set
the security settings to true, as well as the certificate verification mode
and path.
Note that these settings are valid for all the nodes, so you will need to add
them to the
elasticsearch.yml file. After that you can stop all your
Elasticsearch instances, and also stop Kibana. Finally, you can start up your
nodes again, create your credentials, set up your Kibana passwords, and start
up Kibana again.
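As a rough sketch, the security configuration for a single-machine setup often ends up looking like the fragment below in elasticsearch.yml. Treat every name and path here as an assumption and follow the exact values your lab instructions give; the PKCS#12 file name, for example, is just a common choice when generating certificates with elasticsearch-certutil:

```shell
ES_CONFIG="elasticsearch-7.3.1/config/elasticsearch.yml"
mkdir -p "$(dirname "$ES_CONFIG")"   # harmless outside a real distribution
cat >> "$ES_CONFIG" <<'EOF'
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
EOF
grep "xpack.security.enabled" "$ES_CONFIG"
```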