
Accessing AWS ElastiCache (Redis CLUSTER mode) from different AWS accounts via AWS PrivateLink


So it turns out the issue was due to how redis-py-cluster manages hosts and ports.

When a new redis-py-cluster object is created, it gets a list of host IPs from the Redis server (i.e. the Redis cluster host IPs from Account A), after which the client tries to connect to those new hosts and ports.

In normal cases this works, because the initial host and the IPs from the response are one and the same (i.e. the host and port supplied at the time of object creation).

In our case, the host and port used at object creation are obtained from the DNS name of the endpoint service in Account B.

This leads to the code trying to access the actual IPs from Account A instead of the DNS name from Account B.
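
You can see this for yourself by asking the cluster what it advertises. A minimal sketch, assuming plain redis-py and a placeholder endpoint DNS name; the CLUSTER SLOTS reply is what the cluster client uses to build its node table:

    import redis

    # Placeholder DNS name for the VPC endpoint in Account B.
    ENDPOINT = "vpce-0123-example.vpce-svc-0123.eu-west-1.vpce.amazonaws.com"

    r = redis.StrictRedis(host=ENDPOINT, port=6379)

    # CLUSTER SLOTS returns slot ranges plus the host/port of each node.
    # The host fields here are the private IPs of the nodes in Account A,
    # which the cluster client then tries to dial directly -- and fails,
    # because those IPs are not reachable from Account B.
    print(r.execute_command("CLUSTER SLOTS"))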

The issue was resolved using host-port remapping: we bound the IPs returned by the Redis server in Account A to the DNS name of Account B's endpoint service.
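
Here is a minimal sketch of what that remapping looks like, assuming redis-py-cluster 2.1.0+ (which added the host_port_remap option); the node IPs and endpoint DNS name below are placeholders for the Account A node IPs and the Account B endpoint service name:

    from rediscluster import RedisCluster

    # Placeholder DNS name of the VPC endpoint in Account B.
    ENDPOINT = "vpce-0123-example.vpce-svc-0123.eu-west-1.vpce.amazonaws.com"

    # Bind each node IP advertised by the cluster in Account A (placeholders)
    # to the Account B endpoint, so the client never dials Account A directly.
    # With an NLB in front of the cluster, each node is typically exposed on
    # its own listener port, hence the distinct to_port values (an assumption
    # about your load balancer setup).
    host_port_remap = [
        {"from_host": "10.0.1.10", "from_port": 6379, "to_host": ENDPOINT, "to_port": 6379},
        {"from_host": "10.0.1.11", "from_port": 6379, "to_host": ENDPOINT, "to_port": 6380},
    ]

    client = RedisCluster(
        startup_nodes=[{"host": ENDPOINT, "port": 6379}],
        host_port_remap=host_port_remap,
        skip_full_coverage_check=True,  # ElastiCache disables CONFIG, so skip the coverage check
        decode_responses=True,
    )

    client.set("hello", "world")
    print(client.get("hello"))

With this in place, every node address the cluster hands back is rewritten to the endpoint before the client opens a connection, so all traffic stays on the PrivateLink path.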


Based on your comment:

this was not possible because the VPCs in Account-A and Account-B had the same CIDR range. Peered VPCs can’t have the same CIDR range.

I think what you are looking for is impossible. Routing within a VPC always happens first; it happens before any of your route tables are considered at all. Said another way, if the destination of a packet lies within the sending VPC's CIDR range, it will never leave that VPC: AWS will route it within the VPC itself, even if that IP isn't in use there at the time.

So, if you are trying to communicate with another VPC that has the same IP range as yours, even if you explicitly add a route to egress that traffic elsewhere, the rule will be silently ignored and AWS will deliver the packet within the originating VPC, which is not what you are trying to accomplish.
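
To make the "local route wins" point concrete, here is a small sketch using Python's ipaddress module; the CIDR and destination address are placeholders:

    import ipaddress

    # Placeholder: the CIDR range shared by both VPCs.
    vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

    # Placeholder: an address that actually lives in the *other* VPC.
    destination = ipaddress.ip_address("10.0.5.20")

    # The VPC's implicit "local" route matches anything inside its own CIDR,
    # so this packet is delivered within the originating VPC and your custom
    # route table entries for the same range are never consulted.
    print(destination in vpc_cidr)  # True -> the packet never leaves the VPC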