parallel-processing

Deadlock in Parallel.ForEach while using ExecuteNonQuery?

梦想的初衷 submitted on 2020-05-17 03:01:34
Question: I am facing a deadlock error while using Parallel.ForEach. I have 1,000 records in a DataTable and have created 5 threads to process them, but when I run this console application, it deadlocks after some records are processed and no further records are processed. Here is my code:

```csharp
Parallel.ForEach(dt1.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = 5 }, dr =>
{
    cmd1.CommandText = $"Update AuditMessage set Status=1" +
        $" where SXAEASCoreAuditMessageID ='{Convert.ToString
```
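
A frequent cause of this symptom is sharing one SqlCommand (cmd1) and its connection across all parallel iterations; ADO.NET objects are not thread-safe, and concatenating values into the SQL text also invites injection. A minimal sketch of the usual fix, assuming a connectionString variable and the dt1 table from the question, gives every iteration its own connection and a parameterized command:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

// Hypothetical sketch: connectionString and dt1 stand in for the question's objects.
Parallel.ForEach(dt1.AsEnumerable(),
    new ParallelOptions { MaxDegreeOfParallelism = 5 },
    dr =>
{
    // One connection/command per iteration: SqlCommand is not thread-safe.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "UPDATE AuditMessage SET Status = 1 WHERE SXAEASCoreAuditMessageID = @id",
        conn))
    {
        // Parameterize instead of concatenating the id into the SQL text.
        cmd.Parameters.AddWithValue("@id", dr["SXAEASCoreAuditMessageID"].ToString());
        conn.Open();
        cmd.ExecuteNonQuery();   // each row is its own short statement
    }
});
```

If deadlocks persist after that, lowering MaxDegreeOfParallelism or replacing the per-row updates with a single set-based UPDATE usually helps, since many concurrent single-row updates on one table can still produce conflicting locks.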

Keep threads alive even if console app exits

筅森魡賤 submitted on 2020-05-15 10:36:24
Question: I have a console app in C# that runs endlessly and looks something like this:

```csharp
class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            var listOfIds = GetItemIds();
            Parallel.ForEach(listOfIds, new ParallelOptions { MaxDegreeOfParallelism = 15 }, th => DoWork(th));
            Console.WriteLine("Iteration Complete");
        }
    }

    static void DoWork(int id)
    {
        // do some work and save data
    }

    static List<int> GetItemIds()
    {
        // return a list of ints
    }
}
```

While a thread is in DoWork, it is processing data, modifying it
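
Threads cannot outlive their process, so "keep them alive after exit" in practice becomes "finish in-flight work before exiting". A minimal sketch of that shape (an assumption, not taken from the thread's answers) intercepts Ctrl+C and stops starting new passes while letting the current Parallel.ForEach drain:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        var cts = new CancellationTokenSource();

        // Intercept Ctrl+C: request a graceful stop instead of killing the process.
        Console.CancelKeyPress += (sender, e) =>
        {
            e.Cancel = true;   // keep the process alive for now
            cts.Cancel();      // stop the outer loop after the current pass
        };

        while (!cts.IsCancellationRequested)
        {
            var listOfIds = GetItemIds();
            // Parallel.ForEach blocks until every DoWork call returns,
            // so no item is abandoned mid-flight when the loop exits.
            Parallel.ForEach(listOfIds,
                new ParallelOptions { MaxDegreeOfParallelism = 15 },
                id => DoWork(id));
            Console.WriteLine("Iteration Complete");
        }
        Console.WriteLine("All in-flight work finished; exiting.");
    }

    static void DoWork(int id) { /* do some work and save data */ }
    static List<int> GetItemIds() => new List<int> { 1, 2, 3 };
}
```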

How to poll from a queue 1 message at a time after downstream flow is completed in Spring Integration

风格不统一 submitted on 2020-05-15 09:35:08
Question: I am currently working on improving performance in an integration flow by trying to parallelize message processing. I have implemented everything using the Java DSL. The current integration flow takes messages from a QueueChannel with a fixed poller and processes each message serially through multiple handlers, one after the other, until it reaches a final handler that performs some final calculations taking into account each of the previous handlers' output. They are all wired up within the same IntegrationFlow
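
For reference, a serial flow of the shape described (a sketch with illustrative channel and bean names, not the asker's actual flow) can get the one-message-at-a-time behavior by polling the queue with maxMessagesPerPoll(1) and a fixed-delay poller, since a fixed delay is measured from the completion of the downstream call:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.messaging.MessageChannel;

@Configuration
public class SerialFlowConfig {

    @Bean
    public MessageChannel inputQueueChannel() {
        return new QueueChannel();  // buffered channel the poller drains
    }

    @Bean
    public IntegrationFlow serialFlow() {
        return IntegrationFlows.from("inputQueueChannel")
                // Take exactly one message per poll; the next poll is scheduled
                // only after this one has run through all handlers below.
                .bridge(e -> e.poller(Pollers.fixedDelay(500).maxMessagesPerPoll(1)))
                .handle("handlerOne", "process")
                .handle("handlerTwo", "process")
                .handle("finalHandler", "aggregate")
                .get();
    }
}
```

Because the handlers here run synchronously on the poller's thread, fixedDelay does not start counting until the final handler returns, which gives the "poll one message after the downstream flow completes" behavior the title asks about.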

Executing multiple models in TensorFlow with a single session

前提是你 submitted on 2020-05-15 08:56:33
Question: I'm trying to run several neural-network models in TensorFlow in parallel; each model is independent of the rest. Is it necessary to create a session for each of the executions I launch with TensorFlow, or could I reuse the same session for each of the models? Thank you.

Answer 1: A session is linked to a specific TensorFlow Graph instance. If you want to have one session for all, you need to put all your models in the same graph. This may cause you naming problems for tensors and is IMO
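
A minimal sketch of the per-model-session arrangement the answer points toward (TensorFlow 1.x graph mode; the toy model is illustrative): each model lives in its own Graph, and each Session is bound to exactly one of them, so tensor names never collide:

```python
import tensorflow as tf  # TensorFlow 1.x API assumed

def build_model(graph):
    # Everything defined inside this block belongs only to `graph`.
    with graph.as_default():
        x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
        w = tf.Variable(tf.random_normal([4, 1]), name="w")
        y = tf.matmul(x, w, name="y")
        init = tf.global_variables_initializer()
    return x, y, init

graphs = [tf.Graph() for _ in range(3)]           # one independent graph per model
sessions = [tf.Session(graph=g) for g in graphs]  # each session bound to its graph

for g, sess in zip(graphs, sessions):
    x, y, init = build_model(g)
    sess.run(init)
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))

for sess in sessions:
    sess.close()
```

Reusing one session for all models would require merging them into a single graph (e.g., under distinct variable scopes), which is exactly the naming headache the answer warns about.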

Multiprocessing with dictionary of generator objects, TypeError: cannot pickle 'generator' object

混江龙づ霸主 submitted on 2020-05-09 10:05:27
Question: How can I use multiprocessing to create a dictionary with generator objects as values? Here is my problem in greater detail, using basic examples: I have a large dictionary of lists, and I am applying functions that compute on the dictionary values using ProcessPoolExecutor from concurrent.futures. (Note that I am using ProcessPoolExecutor, not threads, so there is no GIL contention here.) Here is an example dictionary of lists:

```python
example_dict1 = {'key1': [367, 30, 847, 482, 887, 654, 347, 504, 413,
```
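
The "cannot pickle 'generator' object" TypeError comes from the process boundary: arguments and return values are pickled, and a generator carries live frame state that cannot be serialized. A common workaround (a sketch with stand-in data and a stand-in function, not the thread's accepted answer) is to have workers return concrete lists and, if generator semantics are needed, rebuild generators on the parent side:

```python
from concurrent.futures import ProcessPoolExecutor

example_dict1 = {'key1': [367, 30, 847], 'key2': [482, 887, 654]}

def compute_stats(values):
    # Return a concrete list (picklable), never a generator expression.
    return [v * 2 for v in values]

def main():
    with ProcessPoolExecutor() as executor:
        futures = {k: executor.submit(compute_stats, v)
                   for k, v in example_dict1.items()}
        results = {k: f.result() for k, f in futures.items()}
    # Rebuild generators locally if downstream code expects them.
    return {k: (x for x in v) for k, v in results.items()}

if __name__ == '__main__':
    print({k: list(g) for k, g in main().items()})
```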

Parallelizing recursion in a for-loop using readdir

二次信任 submitted on 2020-05-08 19:47:17
Question: I'd like to parallelize a C program that recursively calculates the size of a directory and its subdirectories, using OpenMP. My issue is that when I open a directory using opendir and iterate through the subdirectories using readdir, I can only access them one by one until I've reached the last subdirectory. It all works well sequentially. When parallelizing the program, however, I think it would make sense to split the number of subdirectories in half (or even smaller
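
Since readdir hands out entries from a single DIR* stream serially, a common pattern (a sketch under that assumption, with error handling trimmed) is to keep each directory's walk on one thread and spawn an OpenMP task per subdirectory, so the recursion itself runs in parallel:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <omp.h>

static long long dir_size(const char *path) {
    long long total = 0;
    DIR *d = opendir(path);
    if (!d) return 0;

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {               /* serial iteration */
        if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, "..")) continue;
        char sub[4096];
        snprintf(sub, sizeof sub, "%s/%s", path, e->d_name);

        struct stat st;
        if (lstat(sub, &st) != 0) continue;

        if (S_ISDIR(st.st_mode)) {
            /* Recurse in a task; idle threads pick these up in parallel.
             * `sub` is firstprivate by default, so each task gets its copy. */
            #pragma omp task shared(total)
            {
                long long s = dir_size(sub);
                #pragma omp atomic
                total += s;
            }
        } else {
            total += st.st_size;
        }
    }
    closedir(d);
    #pragma omp taskwait                             /* wait for this dir's tasks */
    return total;
}

int main(int argc, char **argv) {
    long long total = 0;
    #pragma omp parallel
    #pragma omp single                               /* one thread seeds the tasks */
    total = dir_size(argc > 1 ? argv[1] : ".");
    printf("%lld bytes\n", total);
    return 0;
}
```

Tasks sidestep the "split the entries in half" problem: there is no need to know how many subdirectories exist up front, because each one becomes a unit of work as it is discovered.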

Parallelization with Multiple Cores per Worker

人走茶凉 submitted on 2020-04-30 10:36:35
Question: Some R packages have functions that can do their work in parallel if multiple cores are available; for example, the rstan package can run multiple MCMC chains in parallel. When I run a number of Stan processes in parallel to each other using, e.g., doSNOW and foreach, I'd like my code to operate in parallel at both levels*. Instead, the Stan processes get farmed out to my workers and seem to run their chains in sequence there, as if once they've been assigned to a core they can't see the
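
One common arrangement (a sketch, assuming 8 physical cores; stan_model_obj and stan_data_list are placeholders for a pre-compiled model and its data) splits the core budget explicitly across the two levels and requests the inner parallelism from inside each worker:

```r
# 2 outer workers x 4 chains each = 8 cores kept busy at both levels.
library(doSNOW)
library(foreach)

cl <- makeCluster(2)        # outer level: 2 Stan fits at a time
registerDoSNOW(cl)

fits <- foreach(i = 1:2, .packages = "rstan",
                .export = c("stan_model_obj", "stan_data_list")) %dopar% {
  # Inner level: this fit's chains run on the worker's own 4 cores.
  sampling(stan_model_obj, data = stan_data_list,
           chains = 4, cores = 4)
}

stopCluster(cl)
```

The key constraint is that outer workers times inner cores should not exceed the physical core count, and the inner parallelism has to be requested inside the worker body (here via the cores argument to sampling), since options set in the master session do not travel to the workers.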

MPI process synchronization

我的梦境 submitted on 2020-04-30 10:25:48
Question: I'm still confused about the implementation of my program using MPI. This is my example:

```java
import mpi.*;

public class HelloWorld {
    static int me;
    static Object[] o = new Object[1];

    public static void main(String args[]) throws Exception {
        // 10 processes were started: -np 10
        MPI.Init(args);
        me = MPI.COMM_WORLD.Rank();
        if (me == 0) {
            o[0] = generateRandBoolean(0.5);
            for (int i = 1; i < 10; i++)
                MPI.COMM_WORLD.Isend(o, 0, 1, MPI.OBJECT, i, 0);
            if ((Boolean) o[0])
                MPI.COMM_WORLD.Barrier();
        } else {
            (new
```
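
The conditional Barrier in that snippet is the classic hazard: collectives such as Barrier must be entered by every rank of the communicator or by none, yet here only rank 0 decides. A sketch of the usual repair (mpiJava-style API as in the question; generateRandBoolean replaced by a stand-in) broadcasts the flag first so every rank takes the same branch:

```java
import mpi.*;
import java.util.Random;

public class HelloWorldFixed {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int me = MPI.COMM_WORLD.Rank();

        Object[] o = new Object[1];
        if (me == 0) {
            // Stand-in for the question's generateRandBoolean(0.5).
            o[0] = new Random().nextDouble() < 0.5;
        }
        // Bcast is itself collective: every rank receives the same value.
        MPI.COMM_WORLD.Bcast(o, 0, 1, MPI.OBJECT, 0);

        if ((Boolean) o[0]) {
            MPI.COMM_WORLD.Barrier();   // now all ranks enter, or none do
        }
        MPI.Finalize();
    }
}
```

Bcast also replaces the loop of point-to-point Isend calls, which avoids leaking the unfinished requests the original code never waited on.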

Is this proper use of numpy seeding for parallel code?

天涯浪子 submitted on 2020-04-30 10:02:55
Question: I am running n instances of the same code in parallel and want each instance to use independent random numbers. For this purpose, before I start the parallel computations I create a list of random states, like this:

```python
import numpy.random as rand
rand_states = [(rand.seed(rand.randint(2**32-1)), rand.get_state())[1] for j in range(n)]
```

I then pass one element of rand_states to each parallel process, in which I basically do:

```python
rand.set_state(rand_state)
data = rand.rand(10, 10)
```

To make things
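
Reseeding the legacy global state from randint can work, but two processes may draw the same seed by chance. The approach NumPy itself recommends since 1.17 (a sketch, with n and the worker body as stand-ins for the question's setup) is SeedSequence.spawn, which derives statistically independent child streams:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def worker(seed_seq):
    # Each process builds its own Generator from its child SeedSequence.
    rng = np.random.default_rng(seed_seq)
    return rng.random((10, 10))

def main(n=4):
    # spawn(n) derives n independent child streams from one root seed.
    child_seqs = np.random.SeedSequence(12345).spawn(n)
    with ProcessPoolExecutor() as ex:
        return list(ex.map(worker, child_seqs))

if __name__ == '__main__':
    data = main()
```

Using per-process Generator objects also avoids touching the shared global state entirely, which is what set_state manipulates.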