pool

Memory usage keeps growing with Python's multiprocessing.pool

こ雲淡風輕ζ submitted on 2019-11-28 16:33:48

Here's the program:

    #!/usr/bin/python
    import multiprocessing

    def dummy_func(r):
        pass

    def worker():
        pass

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=16)
        for index in range(0, 100000):
            pool.apply_async(worker, callback=dummy_func)
        # clean up
        pool.close()
        pool.join()

I found that memory usage (both VIRT and RES) kept growing until close()/join(). Is there any way to get rid of this? I tried maxtasksperchild with 2.7, but it didn't help either. I have a more complicated program that calls apply_async() ~6M times, and at the ~1.5M point I already have 6 GB+ RES; to avoid all other …
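One possible remedy (not from the original post) is to throttle submissions so bookkeeping for pending apply_async() results cannot pile up without bound. A minimal sketch, assuming a BoundedSemaphore released from the callback; MAX_PENDING is an illustrative value, and error_callback requires Python 3 (the asker was on 2.7, where the worker would need its own try/except instead):

    import multiprocessing
    import threading

    MAX_PENDING = 1000  # illustrative cap on in-flight tasks

    def worker(index):
        return index

    if __name__ == '__main__':
        sem = threading.BoundedSemaphore(MAX_PENDING)

        def on_done(result):
            sem.release()  # free a slot once a task has completed

        pool = multiprocessing.Pool(processes=16)
        for index in range(100000):
            sem.acquire()  # blocks while MAX_PENDING tasks are outstanding
            pool.apply_async(worker, (index,), callback=on_done,
                             error_callback=lambda exc: sem.release())
        pool.close()
        pool.join()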

Node.js and MongoDB, reusing the DB object

一个人想着一个人 submitted on 2019-11-28 16:22:08

Question: I'm new to both Node.js and MongoDB, but I've managed to put some parts together from SO and the documentation for Mongo. The Mongo documentation gives this example:

    // Retrieve
    var MongoClient = require('mongodb').MongoClient;

    // Connect to the db
    MongoClient.connect("mongodb://localhost:27017/exampleDb", function(err, db) {
      if (!err) {
        console.log("We are connected");
      }
    });

This looks fine if I only need to use the DB in one function in one place. Searching and reading on SO has shown me that I …

C++ object-pool that provides items as smart-pointers that are returned to pool upon deletion

醉酒当歌 submitted on 2019-11-28 15:52:08

Question: I'm having fun with C++ ideas, and got a little stuck on this problem. I would like a LIFO class that manages a pool of resources. When a resource is requested (through acquire()), it returns the object as a unique_ptr whose deletion causes the resource to be returned to the pool. The unit tests would be:

    // Create the pool, which holds (for simplicity) int objects
    SharedPool<int> pool;
    TS_ASSERT(pool.empty());

    // Add an object to the pool, which is now no longer empty
    pool.add(std: …

Can't pickle static method - Multiprocessing - Python

蓝咒 submitted on 2019-11-28 10:10:47

I'm applying some parallelization to my code, in which I use classes. I know that it is not possible to pickle a class method without an approach beyond what Python provides by default. I found a solution here. In my code, I have two parts that should be parallelized, both using a class. Here, I'm posting a very simple code just representing the structure of mine (it is the same, but I deleted the methods' content, which was a lot of mathematical calculation, insignificant for the output that I'm getting). The problem is that I can pickle one method (shepard_interpolation), but with the other one (calculate …
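A common workaround for this class of error (a sketch, not the asker's code) is to route the work through a module-level function, which pickles by reference, instead of handing multiprocessing the method itself; the class and method names below are illustrative placeholders:

    import multiprocessing

    class Model:
        def shepard_interpolation(self, x):
            return x * 2  # stand-in for the real calculation

    def call_shepard(args):
        # Module-level functions are picklable, so this crosses the
        # process boundary where passing the method directly may not.
        model, x = args
        return model.shepard_interpolation(x)

    if __name__ == '__main__':
        model = Model()
        with multiprocessing.Pool(4) as pool:
            print(pool.map(call_shepard, [(model, i) for i in range(5)]))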

Pulling data from a CMSampleBuffer in order to create a deep copy

ⅰ亾dé卋堺 submitted on 2019-11-28 09:08:45

I am trying to create a copy of a CMSampleBuffer as returned by captureOutput in an AVCaptureVideoDataOutputSampleBufferDelegate. Since the CMSampleBuffers come from a preallocated pool of (15) buffers, if I keep a reference to them they cannot be recollected. This causes all remaining frames to be dropped. The documentation says:

"To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If …"

Python Package For Multi-Threaded Spider w/ Proxy Support?

强颜欢笑 submitted on 2019-11-28 07:53:57

Instead of just using urllib, does anyone know of the most efficient package for fast, multithreaded downloading of URLs that can operate through HTTP proxies? I know of a few, such as Twisted, Scrapy, libcurl, etc., but I don't know enough about them to make a decision, or even whether they can use proxies. Anyone know of the best one for my purposes? Thanks!

Answer: It's simple to implement this in Python. The urlopen() function works transparently with proxies which do not require authentication. In a Unix or Windows environment, set the http_proxy, ftp_proxy or gopher_proxy environment variables to a URL …
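As a concrete starting point (a sketch, not from the original answer), the standard library alone covers the threaded-fetch-through-a-proxy case; the proxy address and URLs below are placeholders:

    import concurrent.futures
    import urllib.request

    PROXY = "http://127.0.0.1:8080"   # placeholder proxy
    URLS = ["http://example.com/a", "http://example.com/b"]

    # An opener that routes both http and https through the proxy.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
    )

    def fetch(url):
        with opener.open(url, timeout=10) as resp:
            return url, resp.read()

    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as ex:
        for url, body in ex.map(fetch, URLS):
            print(url, len(body))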

How does the callback function work in Python multiprocessing map_async

被刻印的时光 ゝ submitted on 2019-11-28 05:57:46

It cost me a whole night to debug my code, and I finally found this tricky problem. Please take a look at the code below.

    from multiprocessing import Pool

    def myfunc(x):
        return [i for i in range(x)]

    pool = Pool()
    A = []
    r = pool.map_async(myfunc, (1, 2), callback=A.extend)
    r.wait()

I thought I would get A=[0,0,1], but the output is A=[[0],[0,1]]. This does not make sense to me, because if I have A=[], then A.extend([0]) and A.extend([0,1]) would give me A=[0,0,1]. Probably the callback works in a different way. So my question is: how do I get A=[0,0,1] instead of [[0],[0,1]]? Thank you in advance for …
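The explanation, with a sketch of one fix: map_async invokes the callback once, with the entire list of results, so A.extend receives [[0],[0,1]] rather than each sublist separately. Flattening inside the callback gives the expected output:

    from itertools import chain
    from multiprocessing import Pool

    def myfunc(x):
        return [i for i in range(x)]

    if __name__ == '__main__':
        A = []
        pool = Pool()
        # The callback fires once with the full result list,
        # so flatten it there.
        r = pool.map_async(
            myfunc, (1, 2),
            callback=lambda results: A.extend(chain.from_iterable(results)))
        r.wait()
        pool.close()
        pool.join()
        print(A)  # [0, 0, 1]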

How do you determine the size of the nodes created by a 'std::map' for use with 'boost::pool_allocator' (in a cross-platform way)?

老子叫甜甜 submitted on 2019-11-28 01:17:48

Question: UPDATE: Per the comments, the answer, and additional research, I have come to the conclusion that there is typically no difference between a set and a map in terms of node overhead. My question that follows is really: how do you determine node overhead for convenient use of boost::pool_allocator as a custom allocator? And a further update: the node overhead is probably never going to be more than the size of 4 pointers, so just purging the Boost Pool for sizeof(T), sizeof(T)+sizeof(int), sizeof(T) …

How to use Python multiprocessing Pool.map to fill numpy array in a for loop

佐手、 submitted on 2019-11-28 01:02:24

I want to fill a 2D numpy array within a for loop and speed up the calculation by using multiprocessing.

    import numpy
    from multiprocessing import Pool

    array_2D = numpy.zeros((20, 10))
    pool = Pool(processes=4)

    def fill_array(start_val):
        return range(start_val, start_val + 10)

    list_start_vals = range(40, 60)
    for line in xrange(20):
        array_2D[line, :] = pool.map(fill_array, list_start_vals)
    pool.close()
    print array_2D

The effect of executing it is that Python runs 4 subprocesses and occupies 4 CPU cores, BUT the execution doesn't finish and the array is not printed. If I try to write the array to disk …
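A reworked sketch (Python 3, not the asker's code): one pitfall above is that each pool.map call maps fill_array over all 20 start values and then tries to assign 20 rows into a single 10-element row. Mapping once and assigning the whole result avoids this, and defining the worker before creating the pool, inside a __main__ guard, keeps the module importable by the child processes on all platforms:

    import numpy as np
    from multiprocessing import Pool

    def fill_row(start_val):
        # One row of 10 consecutive values starting at start_val.
        return np.arange(start_val, start_val + 10)

    if __name__ == '__main__':
        start_vals = range(40, 60)            # one start value per row
        with Pool(processes=4) as pool:
            rows = pool.map(fill_row, start_vals)
        array_2D = np.array(rows)             # shape (20, 10)
        print(array_2D)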

Can we access or query the Java String intern (constant) pool?

一世执手 submitted on 2019-11-27 22:50:26

Is there a way to access the contents of the String constant pool within our own program? Say I have some basic code that does this:

    String str1 = "foo";
    String str2 = "bar";

There are now 2 strings floating around in our String constant pool. Is there some way to access the pool and print out the above values, or get the current total number of elements contained in the pool? I.e.:

    StringConstantPool pool = new StringConstantPool();
    System.out.println(pool.getSize());
    // etc

Answer: You cannot directly access the String intern pool. As per the Javadocs, the String intern pool is: "A pool of strings, …"