Resolve the Elasticsearch 7.x insufficient number of shards issue
Problem Analysis
Logstash sent a request to Elasticsearch (ES) to create a new index, and the following warning appeared in the Logstash log:
[2021-01-11T13:23:52,381][WARN ][logstash.outputs.elasticsearch][main][08029a8bd56dc10a64b84e502acbac75
The underlying error message in the log reads:
Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum
The cluster's shard limit is 1000 and all 1000 shards are already in use, but creating the new index would add 2 more shards. In other words, the cluster has run out of shard capacity.
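To see how close the cluster is to the limit, you can query the cluster health API (a minimal sketch, assuming ES listens on http://localhost:9200 with no authentication):

$ curl -XGET http://localhost:9200/_cluster/health?pretty

The active_shards field in the response is the number of shards currently open cluster-wide; the limit the error refers to is max_shards_per_node multiplied by the number of data nodes.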
Solution
Starting with Elasticsearch 7.x, each node is allowed to hold at most 1000 shards by default, so the error above is caused by an insufficient number of shards available in the cluster.
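You can verify which limit is in effect on your cluster by asking the settings API to include defaults (a sketch; include_defaults and flat_settings are standard query parameters of this API):

$ curl -XGET "http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep max_shards_per_node

On a 7.x cluster with no overrides this should print a line like "cluster.max_shards_per_node" : "1000".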
Modify the ES shard limit
1. Modify the node (cluster) shard limit through the configuration file elasticsearch.yml, which requires a restart of the service (permanent effect):
cluster.max_shards_per_node: 5000
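The file change only takes effect after each node is restarted, for example (a sketch assuming a systemd-managed installation; the service name may differ in your environment):

$ sudo systemctl restart elasticsearch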
2. Modify the shard limit with a curl command.
The cluster settings update API supports two modes:

- Transient (temporary): the change remains in effect only until the cluster is fully restarted; after a full cluster restart it is cleared.
- Persistent (permanent): the change persists until explicitly modified; it survives a full cluster restart and takes precedence over the corresponding option in the static configuration file.

Which mode applies is specified by the top-level key (transient or persistent) in the JSON request body.
- Temporary effect
curl -XPUT -H "Content-Type:application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster": { "max_shards_per_node": 5000 } } }'
- Permanent effect
curl -XPUT -H "Content-Type:application/json" http://localhost:9200/_cluster/settings -d '{ "persistent": { "cluster": { "max_shards_per_node": 5000 } } }'
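On success ES acknowledges the update; the response should look roughly like this (a sketch of the typical response shape, not captured from a live cluster):

{"acknowledged":true,"persistent":{"cluster":{"max_shards_per_node":"5000"}},"transient":{}}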
View the current shard limit
$ curl -XGET http://localhost:9200/_cluster/settings?pretty
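If the persistent update above was applied, the output should include the new value, roughly:

{
  "persistent" : {
    "cluster" : {
      "max_shards_per_node" : "5000"
    }
  },
  "transient" : { }
}

Note that settings still at their defaults do not appear here unless include_defaults=true is passed.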