apc

APC Hits/Misses and configuration

守給你的承諾、 submitted on 2019-12-07 04:56:11
Question: What are "hits" and "misses" in reference to APC opcode caching? I've installed APC and it's running great, but I've got "some" misses and I'm wondering whether that's "bad". Also, I am running OpenX and, as such, am filling up the "cache full count(s)" pretty quickly. What do I need to change in the configuration to minimize that? Any recommended configurations? Answer 1: Some misses are to be expected. Hits mean entries were served from the cache; misses mean entries were not (yet) in the cache. New or less-used things will always be a…
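The usual first step against a growing "cache full count" is to give APC more shared memory and to control how entries expire. A minimal php.ini sketch; the values are illustrative, not recommendations for every workload:

```ini
; php.ini — illustrative APC settings; tune shm_size to your code base
apc.shm_size = 128M  ; enlarge until the "cache full count" stops growing
apc.ttl = 7200       ; allow stale entries to be reclaimed on a full cache
apc.gc_ttl = 3600    ; grace period for entries owned by dead processes
```

After changing these, restart Apache and watch the APC info page: misses should settle to near zero once the working set is cached.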

“Call to undefined method” after apache reload using apc opcode cache

若如初见. submitted on 2019-12-06 15:47:52
We are using PHP 5.4 and APC 3.1.13 with Apache 2 and mod_php, and every now and then the APC opcode cache seems to forget cached data after an Apache reload (for example after a logrotate). This sometimes results in fatal errors with "Call to undefined method". We do not change the files in any way before the error occurs, and after an Apache restart the problem is gone. When we release a new version of our code, we have both the next and the current version of our application in the filesystem and switch them by unlinking the current and symlinking the new version. I think this way APC adds…
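For symlink-switched releases, a commonly discussed workaround (a sketch, not a verified fix for this exact setup) is to control how APC resolves paths, so that entries cached under the old symlink target are not served for the new one:

```ini
; php.ini — path-resolution settings often discussed for symlink deployments
; (APC 3.1.x; whether they cure reload-time "undefined method" errors is
; workload-dependent)
apc.stat = 1           ; re-stat files so a switched symlink is noticed
apc.canonicalize = 0   ; key the cache by the symlinked path, not realpath
```

Note that APC forces canonicalization back on when apc.stat is 0, so the two settings have to be considered together; and a full restart, unlike a graceful reload, rebuilds the shared-memory segment from scratch.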

Multiple Symfony2 Sites using APC Cache

我们两清 submitted on 2019-12-06 12:18:05
I have two sites using the Symfony2 framework that have similar files. The sites are both on the same server and use the APC cache. I have noticed that some items from one site are then used on the other site. All APC entries in use have distinct, unique keys. Is there some way of splitting the APC cache for each site, or setting a prefix or something? Have you tried using ApcUniversalClassLoader()? According to the manual, in your autoload.php you can boot APC with different keys: require __DIR__.'/../vendor/symfony/src/Symfony/Component/ClassLoader/ApcUniversalClassLoader.php'; $loader =…
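ApcUniversalClassLoader takes an APC key prefix as its constructor argument, so each site can namespace its autoloader entries. A sketch of an app/autoload.php along those lines; the 'site1.' prefix and the namespace path are hypothetical:

```php
<?php
require __DIR__.'/../vendor/symfony/src/Symfony/Component/ClassLoader/ApcUniversalClassLoader.php';

use Symfony\Component\ClassLoader\ApcUniversalClassLoader;

// Each site passes its own prefix, so APC keys never collide between sites.
$loader = new ApcUniversalClassLoader('site1.');
$loader->registerNamespaces(array(
    'Symfony' => __DIR__.'/../vendor/symfony/src',
));
$loader->register();
```

The second site would use a different prefix (e.g. 'site2.'), giving each application its own slice of the shared APC user cache.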

APC User-Cache suitable for high load environments?

試著忘記壹切 submitted on 2019-12-06 12:12:41
We are trying to deploy the APC user cache in a high-load environment as a local second-tier cache on each server, in front of our central caching service (Redis), for caching database queries with rarely changing results, and configuration. We basically looked at what Facebook did (years ago): http://www.slideshare.net/guoqing75/4069180-caching-performance-lessons-from-facebook http://www.slideshare.net/shire/php-tek-2007-apc-facebook It works pretty well for some time, but after some hours under high load APC runs into problems, to the point that mod_php as a whole stops executing any PHP. Even a simple PHP script with…
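Under heavy concurrency, the APC user cache is known to suffer from lock contention and "slamming" (many processes writing the same key at once). Two APC 3.1.x settings that frequently come up in this context, as a starting point rather than a guaranteed fix:

```ini
; php.ini — contention-related APC settings (APC 3.1.x)
apc.slam_defense = 0   ; do not randomly drop writes under load
apc.write_lock = 1     ; only one process writes a given entry at a time
```

If lock-ups persist even with these, the underlying locking implementation (compile-time choice of file, sysv, or pthread mutex locks) is usually the next thing to examine.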

APC Cache fragmentation on WordPress site

女生的网名这么多〃 submitted on 2019-12-06 08:44:30
Question: I have recently installed and activated the APC cache on a web server (CentOS 5.7, PHP 5.3, 1.5 GB RAM) which is primarily dedicated to a medium-traffic (30k unique visitors/month) WordPress site running W3 Total Cache, which is set to use APC for database and object caching (the page and minify caches use disk). The APC info page for the server shows consistently heavy fragmentation. For example, after restarting httpd, fragmentation is up to 75% after 11 hours, and I have seen it at 100% after a…
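Fragmentation in the APC user cache typically comes from many differently sized entries expiring and being replaced piecemeal. Two mitigations that are commonly suggested for this kind of W3 Total Cache setup (values illustrative):

```ini
; php.ini — fragmentation mitigations for an APC user cache
apc.shm_size = 256M  ; more headroom means fewer evictions and re-inserts
apc.ttl = 0          ; with ttl=0, a full cache is wiped in one go rather
                     ; than reclaimed entry by entry, which avoids holes
```

The trade-off of apc.ttl = 0 is a periodic cold cache instead of a fragmented warm one; for a medium-traffic site that is often the better deal.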

Weird 500 Internal Server Error (firebug, php, display_errors, ajax)

╄→尐↘猪︶ㄣ submitted on 2019-12-06 06:08:02
On one page I am doing multiple AJAX calls. All calls return responses successfully except the last one (not related to the other AJAX calls), which returns a 500 Internal Server Error response code (according to Firebug). However, in spite of the error code, the correct content is returned from that AJAX call. To my amazement, when I set the display_errors option in php.ini to On, the error disappears and the response is rendered on the page. I have set up error logging to a file, but no error is logged corresponding to the above-mentioned internal server error. By the way, I am using Apache, jQuery, PHP 5, and APC (if it is…
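Since the 500 disappears once display_errors is on, the first thing to verify is that errors actually reach the log file at all. A minimal php.ini fragment for that check; the log path is hypothetical:

```ini
; php.ini — capture errors in a file even with display_errors off
display_errors = Off
log_errors = On
error_report­ing = E_ALL
error_log = /var/log/php_errors.log  ; hypothetical path, must be writable
                                     ; by the Apache user
```

If the log stays empty while Firebug still shows the 500, the error is likely emitted before PHP's logging is initialized (e.g. by the SAPI or an opcode-cache extension), which narrows the search considerably.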

APC and child pid XXXXX exit signal Segmentation fault [closed]

有些话、适合烂在心里 submitted on 2019-12-06 05:47:35
Question: Closed. This question is off-topic. It is not currently accepting answers. Want to improve this question? Update the question so it's on-topic for Stack Overflow. Closed 6 years ago. First I had xCache installed on the server; I have xCache on a lot of my servers, but on this one, after a couple of days you got nothing except a blank page, with this error in the Apache error log: child pid XXXXX exit signal Segmentation fault As far as I know this means some sort of memory corruption. So I removed xCache from…

PHP apc/apcu cache do not keep intermediate result while shmop do, why?

天大地大妈咪最大 submitted on 2019-12-06 04:31:38
I've encountered a problem with PHP storing intermediate results locally. With APC: apc_store("foo", "bar"); $ret = apc_fetch("foo"); With APCu: apcu_store("foo", "bar", 0); $ret = apcu_fetch("foo"); I store with apc_store/apcu_store under php_cli in one PHP script, and fetch with apc_fetch/apcu_fetch in another PHP script, and find $ret to be empty. Whereas with shmop: $shmKey = ftok(__FILE__, 't'); $shmId = shmop_open($shmKey, "c", 0644, 1024); $dataArray = array("foo" => "bar"); shmop_write($shmId, serialize($dataArray), 0); $retArray = unserialize(shmop_read($shmId, 0, shmop_size($shmId)…
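This behavior is expected: APC/APCu keep their store in shared memory that is private to one process tree, and every CLI invocation creates its own segment and destroys it at exit. The relevant setting only turns the cache on inside that short-lived process; it does not bridge CLI and a second script:

```ini
; php.ini — enables APC/APCu under php-cli, but the cache still lives and
; dies with the single CLI process; nothing is shared across invocations
apc.enable_cli = 1
```

shmop behaves differently because ftok() derives a System V IPC key that any later process can reopen, so the segment outlives the writer. That is why the serialized array survives between the two scripts while the apc_fetch/apcu_fetch result is empty.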

No performance gain with APC on WampServer

北城余情 submitted on 2019-12-06 01:55:16
I'm working on a Windows workstation, on which I use WampServer as my development platform, to write PHP applications which are then run on Linux. I'm pretty used to APC on Linux, which is blazing fast and a must-have for me. However, I'm always surprised to get no performance gain when I use it on Windows. This leads to generation times close to 1 second per page on applications relying heavily on the Zend Framework, for example. Most of this time is spent parsing PHP files (I verified that by benchmarking include() calls). The very same application can run 10x faster on Linux or macOS. The…

Log visits in shared memory

我只是一个虾纸丫 submitted on 2019-12-05 22:40:46
I'm trying to find the best way to log visits using PHP. Right now I have about 3000 requests per second and I write each visit to a CSV file. I was wondering: would it be faster to log each visit in memory somehow and then dump it to the CSV file after 100,000 records? I've checked shmop, APC and memcache so far but can't find a proper solution. The best way is to use Lua with shared memory to store log entries, then create a timer which checks the size of the logged entries every X seconds and uses a co-socket to dump the cache to a file or SQL database. This should all be non-blocking. And yes, you can pass requests to…
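If you stay in PHP rather than Lua, the buffer-then-flush idea can be sketched with the APCu user cache. The function names are the real APCu API; the 100,000 threshold and the CSV path are assumptions taken from the question, and the sketch deliberately ignores the race between concurrent flushers:

```php
<?php
// Buffer one visit per request in APCu; flush to CSV once the buffer
// reaches the threshold. Sketch only — not production-hardened.

define('FLUSH_THRESHOLD', 100000);

function log_visit(array $row)
{
    apcu_add('visit_count', 0);          // no-op if the counter exists
    $n = apcu_inc('visit_count');        // atomic per-visit sequence number
    apcu_store('visit_' . $n, $row);

    if ($n >= FLUSH_THRESHOLD) {
        flush_visits($n);
    }
}

function flush_visits($upTo)
{
    $fh = fopen('/var/log/visits.csv', 'a'); // hypothetical path
    for ($i = 1; $i <= $upTo; $i++) {
        $row = apcu_fetch('visit_' . $i);
        if ($row !== false) {
            fputcsv($fh, $row);
            apcu_delete('visit_' . $i);
        }
    }
    fclose($fh);
    apcu_store('visit_count', 0);
}
```

In production you would guard flush_visits() with a lock so only one worker writes, which is exactly the coordination problem that makes the nginx/Lua shared-dict approach in the answer attractive.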