Replicated caching solutions compatible with AWS


Question


My use case is as follows:

We have about 500 servers running in an autoscaling EC2 cluster that need to access the same configuration data (laid out in a key/value fashion) several million times per second.

The configuration data isn't very large (1 or 2 GBs) and doesn't change much (a few dozen updates/deletes/inserts per minute during peak time).

Latency is critical for us, so the data needs to be replicated and kept in memory on every single instance running our application.

Eventual consistency is fine. However, we need to make sure that every update is eventually propagated, knowing that the servers can be shut down at any time. Update propagation across the servers should be reliable and easy to set up (we can't have static IPs for our servers, and we don't want to go down the route of "faking" multicast on AWS, etc.).

Here are the solutions we've explored in the past:

  • Using regular Java maps and our custom-built system to propagate updates across the cluster (obviously, it doesn't scale that well).
  • Using EhCache and its replication feature, but setting it up on EC2 is very painful and somewhat unreliable.

Here are the solutions we're thinking of trying out:

  • Apache Ignite (https://ignite.apache.org/) with a REPLICATED strategy.
  • Hazelcast's Replicated Map feature. (http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#replicated-map)
  • Apache Geode on every application node. (http://geode.apache.org/)

I would like to know whether each of those solutions would work for our use case and, eventually, what issues I'm likely to face with each of them.

Here is what I found so far:

  • Hazelcast's Replicated Map is somewhat recent and still a bit unreliable (async updates can be lost when scaling down).
  • It seems like Geode became "stable" fairly recently (even though it has supposedly been in development since the early 2000s).
  • Ignite looks like it could be a good fit, but I'm not too sure how their S3-based discovery system will work out if we keep adding/removing nodes regularly.

Thanks!


Answer 1:


Geode should work for your use case. You should be able to use a Geode Replicated region on each node. You can choose to do synchronous OR asynchronous replication. In case of failures, the replicated region gets an initial copy of the data from an existing member in the system, while making sure that no in-flight operations are lost.

In terms of configuration, you will have to start a few member discovery processes (Geode locators) and point each member to these locators. (We recommend starting one locator per AZ and using 3 AZs to protect against network partitioning.)
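
For illustration, here is a minimal sketch of a peer member that joins the cluster via locators and hosts a replicated region. The locator hostnames, the port, and the region name "configData" are placeholders of mine, not something from the original answer:

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;

    public class GeodeConfigCache {
        public static void main(String[] args) {
            // Join the distributed system by pointing this member at the locators
            // (one locator per AZ, as recommended above; hostnames are placeholders).
            Cache cache = new CacheFactory()
                    .set("locators", "locator-az-a[10334],locator-az-b[10334],locator-az-c[10334]")
                    .create();

            // REPLICATE keeps a full, in-memory copy of the region on this member.
            Region<String, String> config = cache
                    .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
                    .create("configData");

            config.put("feature.flags", "{\"newCheckout\": true}");
            String value = config.get("feature.flags"); // served from local memory
        }
    }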

Geode/GemFire has been stable for a while, powering low-latency, high-scalability workloads such as the reservation systems of the Indian and Chinese railways, among other users, for a very long time.

Disclosure: I am a committer on Geode.




Answer 2:


Ignite provides native AWS integration for discovery over S3 storage: https://apacheignite-mix.readme.io/docs/amazon-aws. It solves the main issue - you don't need to change the configuration when instances are restarted. In a nutshell, every node that successfully joins the topology writes its coordinates to a bucket (and removes them when it fails or leaves). When you start a new node, it reads this bucket and connects to one of the listed addresses.
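
A minimal sketch of a node configured this way, assuming the ignite-aws module is on the classpath; the bucket name, credentials, and cache name are placeholders:

    import com.amazonaws.auth.BasicAWSCredentials;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

    public class IgniteS3DiscoveryNode {
        public static void main(String[] args) {
            // Nodes publish their addresses to this S3 bucket instead of relying on static IPs.
            TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
            ipFinder.setBucketName("my-ignite-discovery-bucket");
            ipFinder.setAwsCredentials(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
            discoverySpi.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discoverySpi);
            Ignite ignite = Ignition.start(cfg);

            // REPLICATED keeps a full copy of the cache on every server node.
            CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("configData");
            cacheCfg.setCacheMode(CacheMode.REPLICATED);
            IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);

            cache.put("feature.flags", "{\"newCheckout\": true}");
        }
    }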




Answer 3:


Hazelcast's Replicated Map will not work for your use case. Note that it is a map that is replicated across all of the cluster's nodes, not on the client nodes/servers. Also, as you said, it is not fully reliable yet.
Here is the Hazelcast solution:

  1. Create a Hazelcast cluster with a set of nodes, depending on the size of the data.
  2. Create a distributed map (IMap) and tweak the count & eviction configurations based on the size/number of key/value pairs. The data gets partitioned across all the nodes.
  3. Set the backup count based on how critical the data is and how long it takes to pull the data from the actual source (DB/files). Distributed maps have 1 backup by default.
  4. On the client side, set up a NearCache and attach it to the distributed map. The NearCache will hold the key/value pairs on the client side itself, so get operations complete in milliseconds (see the sketch after this list).
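
A minimal sketch of steps 1-4, assuming Hazelcast 3.x and a map named "configData" (both assumptions of mine):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.config.Config;
    import com.hazelcast.config.NearCacheConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class HazelcastNearCacheSketch {
        public static void main(String[] args) {
            // Cluster member: partitioned IMap with one synchronous backup (steps 1-3).
            Config memberConfig = new Config();
            memberConfig.getMapConfig("configData").setBackupCount(1);
            Hazelcast.newHazelcastInstance(memberConfig);

            // Application/client side: attach a NearCache to the same map (step 4).
            NearCacheConfig nearCache = new NearCacheConfig("configData");
            nearCache.setInvalidateOnChange(true); // receive invalidation events from the cluster
            ClientConfig clientConfig = new ClientConfig();
            clientConfig.addNearCacheConfig(nearCache);
            HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

            IMap<String, String> config = client.getMap("configData");
            config.put("feature.flags", "{\"newCheckout\": true}");
            String first = config.get("feature.flags");  // first read goes to the cluster
            String cached = config.get("feature.flags"); // later reads hit the local NearCache
        }
    }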

Things to consider with the NearCache solution:

  • The first get operation will be slower, as it has to go over the network to get the data from the cluster.
  • Cache invalidation is not fully reliable, as there will be a delay in synchronizing with the cluster, and you may end up reading stale data. Then again, this is the case with all cache solutions.
  • It is the client's responsibility to set up timeouts and invalidation of NearCache entries so that future pulls get fresh data from the cluster. How aggressive to be depends on how often the data gets refreshed or a key's value is replaced (see the expiry sketch after this list).
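
For that last point, a sketch of the kind of expiry settings meant here; the map name and the numbers are placeholders and should be tuned to how often the data actually changes:

    import com.hazelcast.config.NearCacheConfig;

    public class NearCacheExpirySketch {
        static NearCacheConfig nearCacheConfig() {
            NearCacheConfig nearCache = new NearCacheConfig("configData");
            nearCache.setTimeToLiveSeconds(300);   // drop entries 5 minutes after they were cached
            nearCache.setMaxIdleSeconds(60);       // drop entries that have not been read for 60 seconds
            nearCache.setInvalidateOnChange(true); // also honor invalidation events from the cluster
            return nearCache;
        }
    }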


Source: https://stackoverflow.com/questions/42237910/replicated-caching-solutions-compatible-with-aws
