memcached

Do clients need to worry about multiple memcache servers?

放肆的年华 submitted on 2019-12-21 22:27:04
Question: Does the Java client need to worry about multiple servers? Meaning: I have given two servers to the memcached client, but when I set or get a key from the cache, do I need to provide any server-related info, or does memcache itself take care of it? My knowledge: memcache itself takes care of it due to consistent hashing, but does spymemcached 2.8.0 provide consistent hashing? Answer 1: Memcached servers are pooling servers. Meaning that you define a pool (a list) of servers and when the Java
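In spymemcached, server selection is the client's job: you hand it the whole server list once, and a node locator decides which server owns each key, so callers never pass server info on get/set. A minimal sketch assuming a spymemcached 2.8.x-era API; the hostnames and key are invented, and the Ketama connection factory shown here is one way to get consistent hashing, not necessarily what the answer above goes on to recommend.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.KetamaConnectionFactory;
import net.spy.memcached.MemcachedClient;

public class PooledClientExample {
    public static void main(String[] args) throws IOException {
        // Both servers are handed to the client once; the client hashes each key
        // onto one of them (Ketama = consistent hashing), the caller never chooses.
        MemcachedClient client = new MemcachedClient(
                new KetamaConnectionFactory(),
                AddrUtil.getAddresses("cache1.example.com:11211 cache2.example.com:11211"));

        client.set("greeting", 3600, "hello");   // routed to whichever server owns "greeting"
        Object value = client.get("greeting");   // same hash, same server, no server info needed
        System.out.println(value);

        client.shutdown(5, TimeUnit.SECONDS);
    }
}
```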

GAE Memcache Usage for NDB Seems Low

随声附和 submitted on 2019-12-21 21:40:22
Question: I have a Google App Engine project with a ~40 GB database, and I'm getting poor read performance with NDB. I've noticed that my memcache size (as listed on the dashboard) is only about 2 MB. I would expect NDB to implicitly make more use of memcache to improve performance. Is there a way of debugging NDB's memcache usage? Answer 1: The question is rather poorly formulated -- there are a zillion reasons for poor read performance, and most are due to a poorly written app, but you don't tell us

Django caching a large list

妖精的绣舞 submitted on 2019-12-21 17:18:29
Question: My Django application deals with 25 MB binary files. Each of them has about 100,000 "records" of 256 bytes each. It takes me about 7 seconds to read a binary file from disk and decode it using Python's struct module. I turn the data into a list of about 100,000 items, where each item is a dictionary with values of various types (float, string, etc.). My Django views need to search through this list. Clearly 7 seconds is too long. I've tried using Django's low-level caching API to cache the

Using memcache inside Google Compute Engine with PHP

陌路散爱 submitted on 2019-12-21 12:34:20
Question: I am trying to test using App Engine's Memcache with our servers running under Compute Engine. Currently we just have a couple of VM instances which run Memcache, where we call: $memcache->addServer('memcache', 11211); to reference each server. Looking at Google's sample code, it doesn't mention anything about what server we should call. I tried to test the code below from their documentation, but it errors on creating the object. I understand that I might have to include a class, but it didn't mention

Simple Java caching library or design pattern?

心不动则不痛 submitted on 2019-12-21 08:16:50
Question: I need to frequently access the result of a time-consuming calculation. The result changes infrequently, so I have to recalculate the data from time to time, but it is OK to use an outdated result for a while. What would be the easiest way to do this, and is there an existing library method or design pattern? I am thinking of something like private static List myCachedList = null; ... // refresh list once in 3600 seconds if (needsRefresh(myCachedList, 3600)) { // run the calculation
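One existing library method that matches this "recompute at most every N seconds, serve the possibly stale value in between" pattern is Guava's Suppliers.memoizeWithExpiration. The sketch below is only an illustration under that assumption, not necessarily what the accepted answer suggests; the class and method names are invented.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class CachedCalculation {
    // Recomputes at most once per hour; callers in between get the cached (possibly stale) list.
    private static final Supplier<List<String>> CACHED_LIST =
            Suppliers.memoizeWithExpiration(
                    CachedCalculation::runExpensiveCalculation, 3600, TimeUnit.SECONDS);

    private static List<String> runExpensiveCalculation() {
        // Stand-in for the time-consuming calculation from the question.
        return List.of("result-1", "result-2");
    }

    public static void main(String[] args) {
        List<String> list = CACHED_LIST.get();  // first call computes, later calls reuse until expiry
        System.out.println(list);
    }
}
```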

Learning Memcached (Part 1): The Network Model

眉间皱痕 submitted on 2019-12-21 07:12:00
1. Memcached's network model

Memcached's network model is built on the libevent library. Memcached works in a multi-threaded fashion, with the worker threads and the main thread communicating over a pipe. The network threading model mainly involves two source files: memcached.c and thread.c. The flow is roughly as follows:

1. In its main function, memcached creates the main thread's event_base and registers the listening socket with it; the main thread is the one that listens for and accepts client connections.
2. While creating the main thread, the main function also creates N worker threads. Each worker thread has its own event_base and a LIBEVENT_THREAD structure holding its state (basic thread info, the thread's queue, and the pipe file descriptors). Each worker thread registers the read end of its pipe with its own event_base.
3. When a new connection arrives, the main thread accepts it with accept(), fills a CQ_ITEM structure with the connection's details and pushes it onto the chosen worker thread's conn_queue, then writes a byte into that worker thread's pipe to trigger the worker's libevent event.
4. The main thread picks a worker thread from the pool by taking a modulo (round-robin); once notified, the worker thread takes the CQ_ITEM out of its conn_queue
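The structure described above (one acceptor thread, N workers each with a private queue, modulo selection, and a wake-up notification) is a general dispatch pattern. The Java sketch below only illustrates that pattern; it is not Memcached's actual C/libevent code, and every name in it is invented for the example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DispatchSketch {

    // Stand-in for memcached's CQ_ITEM: whatever the worker needs to know about a new connection.
    record ConnItem(int clientFd) {}

    // Stand-in for LIBEVENT_THREAD: a worker with its own queue
    // (the pipe + conn_queue pair in memcached, collapsed into one BlockingQueue here).
    static final class Worker extends Thread {
        final BlockingQueue<ConnItem> connQueue = new LinkedBlockingQueue<>();

        @Override
        public void run() {
            try {
                while (true) {
                    ConnItem item = connQueue.take();  // "pipe notification" + dequeue in one call
                    System.out.println(getName() + " handles client fd " + item.clientFd());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        Worker[] workers = new Worker[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Worker();
            workers[i].start();
        }

        // Main "acceptor" loop: pretend fds 0..9 are freshly accepted connections,
        // and pick a worker by modulo, as memcached's main thread does.
        for (int fd = 0; fd < 10; fd++) {
            workers[fd % workers.length].connQueue.add(new ConnItem(fd));
        }
    }
}
```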

NHibernate.Caches.MemCache web.config for cache expiration time

℡╲_俬逩灬. submitted on 2019-12-21 06:28:19
Question: I'm migrating to NHibernate 2.0 GA but am having some trouble setting cache expirations in the memcached provider. I see in the NHibernate.Caches.MemCache sources that there is a property for expiration and a default value of 300 seconds. There are also properties for cache regions, but the config section handler does not seem to map them. Is there some other way cache expiration times are set that is not provider-specific? Here is a functional web.config section (without an expiration settings

Cache consistency when using memcached and a rdbms like MySQL

匆匆过客 submitted on 2019-12-21 04:13:17
Question: I have taken a database class this semester and we are studying how to maintain cache consistency between an RDBMS and a cache server such as memcached. The consistency issues arise when there are race conditions. For example: suppose I do a get(key) from the cache and there is a cache miss. Because I got a cache miss, I fetch the data from the database and then do a put(key, value) into the cache. But a race condition might happen, where some other user might delete the data I fetched
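One primitive memcached offers for detecting exactly this kind of interleaving is check-and-set (gets/cas). It does not by itself solve the miss-then-put race (real deployments typically also rely on add semantics or leases), but it is the building block most answers start from. A minimal, hypothetical spymemcached sketch; the host, key, and value are made up.

```java
import java.io.IOException;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;

public class CasExample {
    public static void main(String[] args) throws IOException {
        MemcachedClient client =
                new MemcachedClient(AddrUtil.getAddresses("localhost:11211"));

        // gets() returns the value together with a CAS token (a per-item version number).
        CASValue<Object> current = client.gets("user:42");
        if (current != null) {
            // cas() only succeeds if nobody has modified or deleted the key since gets().
            CASResponse response =
                    client.cas("user:42", current.getCas(), "updated-value");
            if (response != CASResponse.OK) {
                // EXISTS: someone changed it; NOT_FOUND: someone deleted it.
                // Re-read from the database and retry instead of clobbering.
                System.out.println("CAS failed: " + response);
            }
        }
        client.shutdown();
    }
}
```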

How do you work around memcached's key/value limitations?

拜拜、爱过 submitted on 2019-12-21 03:31:36
Question: Memcached has length limitations for keys (250?) and values (roughly 1 MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached. What I do currently is store a special string for the main key's value if the original value was too big ("parts:<number>"), and in that case I store <number> parts with keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but
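For illustration only, here is the asker's chunking scheme transcribed into a rough Java/spymemcached sketch rather than the original Perl. The 900 KB chunk size, the helper name, and the client setup are assumptions made for the example; the read side would simply reverse these steps (fetch the main key, parse "parts:<n>", fetch and concatenate the numbered parts).

```java
import java.util.Arrays;

import net.spy.memcached.MemcachedClient;

public class ChunkedSet {
    private static final int CHUNK = 900_000;  // stay safely under memcached's ~1 MB item limit

    // Store a large value as "parts:<n>" under the main key plus numbered part keys,
    // mirroring the scheme described in the question (1+<main key>, 2+<main key>, ...).
    static void setLarge(MemcachedClient client, String key, byte[] value, int ttl) {
        if (value.length <= CHUNK) {
            client.set(key, ttl, value);
            return;
        }
        int parts = (value.length + CHUNK - 1) / CHUNK;
        for (int i = 0; i < parts; i++) {
            int from = i * CHUNK;
            int to = Math.min(from + CHUNK, value.length);
            client.set((i + 1) + "+" + key, ttl, Arrays.copyOfRange(value, from, to));
        }
        client.set(key, ttl, "parts:" + parts);
    }
}
```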

Rails + Dalli memcache gem: DalliError: No server available

六眼飞鱼酱① submitted on 2019-12-21 03:19:20
Question: Hi, I'm having trouble setting up my Rails project on my server because Apache keeps complaining DalliError: No server available. I installed memcached on my Ubuntu machine, but it still doesn't work. My Rails project also has config.cache_store = :dalli_store, 'localhost:11211', { :namespace => "production" } in environments/production.rb. How would I debug this? My log shows before each request: localhost:11211 failed (count: 6) DalliError: No server available telnet to 11211: root@s2:/usr