pool

Unclosed connection - Connection Pool debugging SQL Server

We have a suspect application that leaves connections open. I'm just wondering what debugging tools exist for this, commercial or otherwise; does anyone have a good one for isolating it? I've Googled, but I only seem to find articles that describe the problem, not the steps to a solution. This is the best article I've seen so far; others are welcome. Does anyone have a product that isolates the problematic code, a profiler that does this sort of thing, or any other advice?

You can always check the Activity Monitor in SQL Server to see whether the application is keeping connections open.
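Beyond Activity Monitor, the dynamic management views can show exactly which sessions a given application is holding and how long they have sat idle. A minimal sketch of such a query (filter on program_name for your suspect application):

    -- User sessions with the owning program and the time of their last request;
    -- long-idle sessions from one program are the usual sign of leaked connections.
    SELECT s.session_id,
           s.program_name,
           s.login_name,
           s.last_request_end_time
    FROM sys.dm_exec_sessions AS s
    WHERE s.is_user_process = 1
    ORDER BY s.last_request_end_time;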

Python's multiprocessing map_async generates error on Windows

The code below works perfectly on Unix but raises a multiprocessing.TimeoutError on Windows 7 (both machines run Python 2.7). Any idea why? Thanks.

    from multiprocessing import Pool

    def increment(x):
        return x + 1

    def decrement(x):
        return x - 1

    pool = Pool(processes=2)
    res1 = pool.map_async(increment, range(10))
    res2 = pool.map_async(decrement, range(10))
    print res1.get(timeout=1)
    print res2.get(timeout=1)

You need to put your actual program logic inside an if __name__ == '__main__': block. On Unix-like systems, Python forks, producing multiple worker processes. Windows has no fork; Python instead starts a fresh interpreter and re-imports your module in each child, so any top-level code, including the Pool creation itself, runs again in every worker.
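A minimal corrected version of the same script, keeping the Python 2 print syntax; the guard ensures the pool is only created in the parent process:

    from multiprocessing import Pool

    def increment(x):
        return x + 1

    def decrement(x):
        return x - 1

    if __name__ == '__main__':
        # Only the parent creates the pool; re-imported children skip this block.
        pool = Pool(processes=2)
        res1 = pool.map_async(increment, range(10))
        res2 = pool.map_async(decrement, range(10))
        print res1.get(timeout=1)
        print res2.get(timeout=1)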

How to restrict pool size of MDB on Glassfish v3

My Message-Driven Bean executes highly intensive operations, so I would like to restrict its pool size; otherwise my server would be overloaded. I have tried this (code below), but it doesn't work: the pool size is still 32 (empirically tested; from time to time I restart the server so there are no pooled instances).

    @MessageDriven(
        mappedName = "jms/TestTopic",
        activationConfig = {
            @ActivationConfigProperty(propertyName = "acknowledgeMode",
                                      propertyValue = "Auto-acknowledge"),
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Topic"),
            @ActivationConfigProperty(
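On GlassFish, MDB pool limits are normally set in the server-specific deployment descriptor rather than through activation config properties. A sketch, assuming the bean is named TestMDB, using sun-ejb-jar.xml (the descriptor is called glassfish-ejb-jar.xml on 3.1 and later):

    <sun-ejb-jar>
      <enterprise-beans>
        <ejb>
          <ejb-name>TestMDB</ejb-name>
          <bean-pool>
            <steady-pool-size>0</steady-pool-size>
            <max-pool-size>1</max-pool-size>
          </bean-pool>
        </ejb>
      </enterprise-beans>
    </sun-ejb-jar>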

Persistent Processes Post Python Pool

I have a Python program that takes around 10 minutes to execute, so I use Pool from multiprocessing to speed things up:

    from multiprocessing import Pool
    p = Pool(processes=6)  # I have an 8-thread processor
    results = p.map(function, argument_list)  # distributes work over 6 processes!

It runs much quicker, just from that. God bless Python! And so I thought that would be it. However, I've noticed that each time I do this, the processes and their considerably sized state remain, even after p has gone out of scope; effectively, I've created a memory leak. The processes still show up in my system monitor.
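Pool workers are not reaped just because the Pool object goes out of scope; they exit once the pool is explicitly closed and joined. A minimal sketch of the usual cleanup, assuming function and argument_list as above:

    from multiprocessing import Pool

    p = Pool(processes=6)
    try:
        results = p.map(function, argument_list)
    finally:
        p.close()  # no more tasks will be submitted to the pool
        p.join()   # wait for the worker processes to exit and be reaped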

Why is a single Jedis instance not threadsafe?

https://github.com/xetorthio/jedis/wiki/Getting-started — on using Jedis in a multithreaded environment: you shouldn't use the same instance from different threads, because you'll get strange errors. And sometimes creating lots of Jedis instances is not good enough either, because it means lots of sockets and connections, which leads to strange errors as well. A single Jedis instance is not threadsafe! To avoid these problems, you should use JedisPool, which is a threadsafe pool of network connections.
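A minimal JedisPool usage sketch (the host and key names are placeholders); each thread borrows its own connection and returns it, so no Jedis instance is ever shared:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    public class JedisPoolExample {
        // The pool itself is threadsafe and should be shared application-wide.
        private static final JedisPool pool =
                new JedisPool(new JedisPoolConfig(), "localhost");

        public static void main(String[] args) {
            // In recent Jedis versions, Jedis is Closeable, so
            // try-with-resources returns the connection to the pool.
            try (Jedis jedis = pool.getResource()) {
                jedis.set("greeting", "hello");
                System.out.println(jedis.get("greeting"));
            }
            pool.close();
        }
    }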

Gevent pool with nested web requests

I'm trying to organize a pool with a maximum of 10 concurrent downloads. The function should download the base url, then parse all the urls on that page and download each of them, but the OVERALL number of concurrent downloads should not exceed 10.

    from lxml import etree
    import gevent
    from gevent import monkey, pool
    import requests

    monkey.patch_all()

    urls = [
        'http://www.google.com',
        'http://www.yandex.ru',
        'http://www.python.org',
        'http://stackoverflow.com',
        # ... another 100 urls
    ]
    LINKS_ON_PAGE = []
    POOL = pool
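One pitfall with a single bounded gevent pool here: if all 10 slots are held by page jobs that block waiting on nested download jobs in the same pool, nothing can progress. A sketch of an alternative, assuming the same monkey-patched requests setup, that caps only the downloads themselves with a semaphore (parse_links is a hypothetical lxml-based href extractor):

    import gevent
    from gevent import monkey
    from gevent.lock import BoundedSemaphore
    import requests

    monkey.patch_all()

    DOWNLOADS = BoundedSemaphore(10)  # at most 10 HTTP requests in flight

    def fetch(url):
        with DOWNLOADS:               # blocks while 10 downloads are already active
            return requests.get(url).text

    def crawl(url):
        page = fetch(url)
        links = parse_links(page)     # hypothetical: extract hrefs with lxml
        gevent.joinall([gevent.spawn(fetch, link) for link in links])

    gevent.joinall([gevent.spawn(crawl, u) for u in urls])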

Python: no output when using pool.map_async

I am experiencing very strange issues while working with the data inside my function that gets called by pool.map. For example, the following code works as expected...

    import csv
    import multiprocessing
    import itertools
    from collections import deque

    cur_best = 0
    d_sol = deque(maxlen=9)
    d_names = deque(maxlen=9)

    # **import CSV Data1**

    def calculate(vals):
        # global cur_best
        sol = sum(int(x[2]) for x in vals)
        names = [x[0] for x in vals]
        print(", ".join(names) + " = " + str(sol))

    def process():
        pool = multiprocessing.Pool(processes=4)
        prod = itertools.product(([x[2], x[4], x[10]] for x in Data1))
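A common cause of "no output" with map_async is printing inside the worker processes: child output can be lost or never flushed, and map_async returns immediately, so the parent may exit before the workers run. A sketch, keeping the original product call and assuming Data1 is loaded as above, that returns the strings and prints them in the parent:

    def calculate(vals):
        sol = sum(int(x[2]) for x in vals)
        names = [x[0] for x in vals]
        return ", ".join(names) + " = " + str(sol)

    def process():
        pool = multiprocessing.Pool(processes=4)
        prod = itertools.product(([x[2], x[4], x[10]] for x in Data1))
        res = pool.map_async(calculate, prod)
        pool.close()
        pool.join()
        for line in res.get():  # all printing happens in the parent process
            print(line)

    if __name__ == '__main__':
        process()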

Use different sprite textures in one generic pool (AndEngine)

I want to use at least 9 images, and they will be used via a pool, but I can use only one texture for a pool class and can't use the rest. My code looks like this:

    public class BubblePool extends GenericPool<Bubble> {

        public static BubblePool instance;
        private PixelPerfectTiledTextureRegion aITiledTextureRegion;

        public BubblePool(PixelPerfectTiledTextureRegion aTextureRegion) {
            if (aTextureRegion == null) {
                throw new IllegalArgumentException(
                        "The Texture Region must not be null");
            }
            this.aITiledTextureRegion = aTextureRegion.deepCopy();
            instance = this;
        }

        public static BubblePool sharedBubblePool() {
            // if
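One way around the one-texture-per-pool limit is to keep a separate pool per texture region and look them up by ID. A rough sketch, assuming Bubble and PixelPerfectTiledTextureRegion as in the original and GenericPool's obtainPoolItem()/recyclePoolItem() API:

    import java.util.HashMap;
    import java.util.Map;

    public class BubblePoolManager {

        // One BubblePool per texture, keyed by an arbitrary texture ID (0..8).
        private final Map<Integer, BubblePool> pools =
                new HashMap<Integer, BubblePool>();

        public void addTexture(int id, PixelPerfectTiledTextureRegion region) {
            pools.put(id, new BubblePool(region));
        }

        public Bubble obtain(int textureId) {
            return pools.get(textureId).obtainPoolItem();
        }

        public void recycle(int textureId, Bubble bubble) {
            pools.get(textureId).recyclePoolItem(bubble);
        }
    }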

Python multiprocessing: Manager initiates process spawn loop

I have a simple Python multiprocessing script that sets up a pool of workers that attempt to append work output to a Manager list. The script has three call stacks: main calls f1, which spawns several worker processes that call another function, g1. When one attempts to debug the script (incidentally on Windows 7 64-bit / VS 2010 / PyTools), it runs into a nested process-creation loop, spawning an endless number of processes. Can anyone determine why? I'm sure I'm missing something very simple. Here's the problematic code:

    import multiprocessing
    import logging

    manager = multiprocessing.Manager()  # note: runs at module import time
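The spawn loop follows from that last line: Windows has no fork, so every worker re-imports the main module, and multiprocessing.Manager() itself starts a child process. A Manager created at import time therefore starts a Manager in every child, each of which re-imports and starts another, and so on. A simplified sketch of the usual fix, with hypothetical stand-ins for f1/g1 and all process creation moved under the import guard:

    import multiprocessing

    def g1(x):
        return x * x

    def f1(shared):
        pool = multiprocessing.Pool(processes=4)
        for r in pool.map(g1, range(10)):
            shared.append(r)  # gather worker output into the Manager list
        pool.close()
        pool.join()

    if __name__ == '__main__':
        manager = multiprocessing.Manager()  # safe: only runs in the parent
        results = manager.list()
        f1(results)
        print(list(results))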

What simple mechanism for synchronous Unix pooled processes?

I need to limit the number of processes being executed in parallel. For instance, I'd like to execute this pseudo command line:

    export POOL_PARALLELISM=4
    for i in `seq 100` ; do
        pool foo -bar &
    done
    pool foo -bar  # would not complete until the first 100 finished

Therefore, despite 101 foos being queued up to run, only 4 would be running at any given time. pool would fork()/exit() and queue the remaining processes until complete. Is there a simple mechanism to do this with Unix tools? at and
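A sketch of one standard answer using xargs, whose -P flag caps how many children run simultaneously while the rest stay queued (foo -bar stands in for the real command; here each job receives its index as an argument, which you can drop if foo doesn't want it):

    # 101 jobs, at most 4 running at any given time; xargs queues the rest.
    seq 101 | xargs -P 4 -I{} foo -bar {}

GNU parallel offers the same behavior with, e.g., seq 101 | parallel -j4 foo -bar.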