Couchbase Client Fails to Cache Objects Less Than 20 MB: Timeout Error


Question


I am caching my serialized POJOs (4 MB to 8 MB objects) concurrently into a Couchbase server with the Couchbase Java client (couchbase-client-1.4.3).

for (int i = 0; i < 20; i++) {
    new Thread(() -> cacheObject()).start(); // this thread caches the objects
    Thread.sleep(500); // the less sleep time, the more cache failures :(
}

I have 2 replicated servers. The client can cache small objects, but when the object size increases, it throws exceptions.

Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: 192.168.0.1/192.168.0.2:11210
    at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:167)
    at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:140)

I found similar questions and answers. However, I am not in a position to add memory, because the applications that use the Couchbase client have their own memory concerns. I did try adding JVM arguments such as -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=500.
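
For clarity, these flags are passed on the client JVM's command line when the application starts (the jar name below is just a placeholder):

java -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=500 -jar my-caching-app.jar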

This is how I create the Couchbase cache client:

CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
cfb.setFailureMode(FailureMode.Retry);
cfb.setMaxReconnectDelay(5000);
cfb.setOpTimeout(15000);
cfb.setOpQueueMaxBlockTime(10000);
client = new CouchbaseClient(cfb.buildCouchbaseConnection(uris, BUCKET_TYPE, PASSWORD));
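
To show where the timeout surfaces, this is roughly what each caching thread does with that client; the key, TTL, and POJO names are just placeholders, not my exact code:

import net.spy.memcached.internal.OperationFuture;

// Roughly what each caching thread does (key, TTL and myPojo are placeholders):
OperationFuture<Boolean> future = client.set("pojo:" + id, 3600, myPojo);
try {
    // get() blocks until the node acknowledges the write or the 15s opTimeout
    // configured above expires; on timeout it throws a RuntimeException whose
    // cause is the CheckedOperationTimeoutException shown in the stack trace
    boolean stored = future.get();
} catch (Exception e) {
    // the timeouts end up here
}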

I tried with larger time gaps to make the caching succeed and avoid timeouts, but that doesn't work either. In our real live applications, 7 or 8 caches can typically happen within a second, and the applications cannot hold up their processing until a cache completes. (If they wait, caching is pointless because of the time it consumes; going straight to the database is always cheaper!)

Please, can anyone tell me how I can improve my Couchbase client to avoid such timeouts and improve performance? (Since I have hardware and JVM limitations, I am looking for a way to improve the client itself.) Can't I do the serialization and compression outside the Couchbase client and handle it myself?
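
For example, something like the following is what I have in mind. This is only a sketch; the GZIP step and storing the raw byte[] through the same client are my own assumptions, not tested code:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: serialize and gzip the POJO myself, then store the raw bytes
// (assuming the client's default transcoder stores a byte[] as-is).
static byte[] serializeAndCompress(Object pojo) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos))) {
        oos.writeObject(pojo);
    }
    return bos.toByteArray();
}

// usage inside a caching thread:
byte[] compressed = serializeAndCompress(myPojo);
client.set("pojo:" + id, 3600, compressed);

That would at least shrink the 4-8 MB values before they hit the wire, but I don't know whether it actually helps with the timeouts.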

Updated: my Couchbase setup.

- I am caching serialized objects of 5 MB to 10 MB.
- I have 2 nodes on different machines.
- Each PC has 4 GB RAM. CPUs: one PC has 2 cores, the other 4 cores. (Is that not enough?)
- The client application runs on the PC with 4 cores.
- I configured a LAN just for this testing.
- Both OSes are Ubuntu 14; one PC is 32-bit, the other 64-bit.
- The Couchbase version is the latest Community Edition, couchbase-server-community_2.2.0_x86_64. (Is this buggy? :( )
- The Couchbase client is Couchbase-Java-Client-1.4.3.
- There are 100 threads started with a 500 ms gap; each thread caches into CB.

Also, I checked the system monitoring. The PC running both the CB node and the client shows higher CPU and RAM usage, but the other replicated PC (with weaker hardware) does not show much usage and looks normal.

EDIT: Can this happen because of a client-side issue or because of the CB server? Any ideas, please?

Source: https://stackoverflow.com/questions/25181981/couchbase-client-fails-to-cache-object-less-than-20mb-time-out-error
