Redis Cluster in Kubernetes doesn't write the nodes.conf file
It looks to be an issue with the shell script that is mounted from the ConfigMap. Can you update it as below?
```yaml
fix-ip.sh: |
  #!/bin/sh
  CLUSTER_CONFIG="/data/nodes.conf"
  echo "creating nodes"
  if [ -f ${CLUSTER_CONFIG} ]; then
    echo "[ INFO ]File:${CLUSTER_CONFIG} is Found"
  else
    touch ${CLUSTER_CONFIG}
  fi
  if [ -z "${POD_IP}" ]; then
    echo "Unable to determine Pod IP address!"
    exit 1
  fi
  echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
  sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
  echo "done"
  exec "$@"
```
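The key line is the `sed` substitution: on restart, it rewrites the IPv4 address on the `myself` line of an existing `nodes.conf` to the pod's new IP, so the node keeps its cluster ID. You can sanity-check that substitution locally; the node ID and the `10.40.0.9` address below are made-up example values, not from a real cluster:

```shell
#!/bin/sh
# Simulate fix-ip.sh's rewrite on a sample nodes.conf "myself" line.
# The node ID and IP addresses are illustrative placeholders.
CLUSTER_CONFIG=$(mktemp)
POD_IP="10.40.0.9"   # pretend this is the pod's newly assigned IP

# A stale entry still pointing at the pod's old IP (10.40.0.1).
echo "9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379@16379 myself,master - 0 0 1 connected 0-5460" > "${CLUSTER_CONFIG}"

# Same substitution as in fix-ip.sh: replace the first IPv4 address
# on the line containing "myself" with ${POD_IP}.
sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" "${CLUSTER_CONFIG}"

cat "${CLUSTER_CONFIG}"
# -> 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.9:6379@16379 myself,master - 0 0 1 connected 0-5460
```

Note the substitution has no `/g` flag, so only the first IP on the `myself` line is replaced, and the node ID is left untouched.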
I just deployed with the updated script and it worked. See the output below:
```
master $ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   1/1     Running   0          83s
redis-cluster-1   1/1     Running   0          54s
redis-cluster-2   1/1     Running   0          45s
redis-cluster-3   1/1     Running   0          38s
redis-cluster-4   1/1     Running   0          31s
redis-cluster-5   1/1     Running   0          25s
master $ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 ')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.40.0.4:6379 to 10.40.0.1:6379
Adding replica 10.40.0.5:6379 to 10.40.0.2:6379
Adding replica 10.40.0.6:6379 to 10.40.0.3:6379
M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379
   slots:[5461-10922] (5462 slots) master
M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379
   slots:[10923-16383] (5461 slots) master
S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379
   replicates 9984141f922bed94bfa3532ea5cce43682fa524c
S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379
   replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17
S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379
   replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 10.40.0.1:6379)
M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379
   slots: (0 slots) slave
   replicates 9984141f922bed94bfa3532ea5cce43682fa524c
S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379
   slots: (0 slots) slave
   replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a
M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379
   slots: (0 slots) slave
   replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
master $ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:61
cluster_stats_messages_pong_sent:76
cluster_stats_messages_sent:137
cluster_stats_messages_ping_received:71
cluster_stats_messages_pong_received:61
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:137
master $ for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
redis-cluster-0
master
588
10.40.0.4
6379
588

redis-cluster-1
master
602
10.40.0.5
6379
602

redis-cluster-2
master
588
10.40.0.6
6379
588

redis-cluster-3
slave
10.40.0.1
6379
connected
602

redis-cluster-4
slave
10.40.0.2
6379
connected
602

redis-cluster-5
slave
10.40.0.3
6379
connected
588
```