parallel-processing

what's the difference between parallel and multicore programming?

Submitted by 偶尔善良 on 2020-01-13 19:01:32
Question: I think the topic says it all. What's the difference, if any, between parallel and multicore programming? Thanks. Answer 1: Multi-core is a kind of parallel programming. In particular, it is a kind of MIMD setup where the processing units aren't distributed, but rather share a common memory area, and can even share data like a MISD setup if need be. I believe it is even distinct from multi-processing, in that a multi-core setup can share some level of caches, and thus cooperate more efficiently
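The shared-memory point in the answer can be illustrated with a minimal Python sketch: threads within one process all see the same address space, which mirrors the multi-core (shared-memory MIMD) setup described above. The names here are illustrative, not any particular API.

```python
from concurrent.futures import ThreadPoolExecutor

# Threads within one process share the same address space, mirroring the
# shared-memory (multi-core) setup: every worker can read and write the
# same array without any message passing.
shared = [0] * 8

def fill(i):
    # Each worker writes only to its own slot, so no lock is needed here.
    shared[i] = i * i

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(fill, range(8)))

print(shared)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In a distributed (non-multi-core) setup the workers would instead hold private copies and exchange messages, which is the contrast the answer draws.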

When using Powershell Jobs, Runspaces, or Workflows, are the threads being executed on separate cores?

Submitted by 廉价感情. on 2020-01-13 18:09:37
Question: When using PowerShell Jobs, Runspaces, or Workflows, are the threads being executed on separate cores? (And if so, how do we tell PowerShell how many cores to use? Sorry, that's two questions.) .NET has the Task Parallel Library, which allows a 'for loop' to run in parallel, using all available cores (here is one example). Do PowerShell Jobs, Runspaces, or Workflows do something similar? And by similar, I mean are the threads actually running on separate cores, in parallel? I found a
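The parallel-for-loop idea the question refers to can be sketched in Python (a rough analog of a TPL-style parallel loop, not the .NET or PowerShell API): the iteration space is handed to a pool of workers, and whether iterations land on separate physical cores is ultimately up to the OS scheduler; the pool size merely caps the concurrency.

```python
import multiprocessing
from concurrent.futures import ThreadPoolExecutor

# Rough analog of a parallel 'for' loop: one worker slot per core.
# The OS scheduler decides which core each worker actually runs on.
n_cores = multiprocessing.cpu_count()

def work(i):
    return i * 2  # stand-in for the real loop body

with ThreadPoolExecutor(max_workers=n_cores) as pool:
    results = list(pool.map(work, range(10)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

For CPU-bound work in Python one would use processes rather than threads, but the scheduling point is the same: the runtime exposes a degree-of-parallelism knob, not core pinning.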

How to parallelize integrating in Mathematica 8

Submitted by 徘徊边缘 on 2020-01-13 14:59:56
Question: Does anybody have an idea how to use all cores for calculating the integration? I need to use Parallelize or ParallelTable, but how?

f[r_] := Sum[(((-1)^n*(2*r - 2*n - 7)!!)/(2^n*n!*(r - 2*n - 1)!))*x^(r - 2*n - 1), {n, 0, r/2}];
Nw := Transpose[Table[f[j], {i, 1}, {j, 5, 200, 1}]];
X1 = Integrate[Nw . Transpose[Nw], {x, -1, 1}];
Y1 = Integrate[D[Nw, {x, 2}] . Transpose[D[Nw, {x, 2}]], {x, -1, 1}];
X1 // MatrixForm
Y1 // MatrixForm

Answer 1: If one helps Integrate a bit by expanding the matrix elements first,
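The answer's key observation is that each entry of the matrix integral is an independent 1-D integral, so the entries can be computed in parallel. A minimal Python sketch of that idea (a toy trapezoidal rule stands in for Integrate, and plain monomials stand in for f[r]; none of this is Mathematica's API):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for Integrate: composite trapezoidal rule on [a, b].
def trapezoid(g, a, b, n=10000):
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

funcs = [lambda x, p=p: x ** p for p in range(4)]  # stand-ins for f[j]

# Each matrix entry (i, j) is the independent integral of f_i * f_j.
def entry(ij):
    i, j = ij
    return trapezoid(lambda x: funcs[i](x) * funcs[j](x), -1.0, 1.0)

pairs = [(i, j) for i in range(4) for j in range(4)]
with ThreadPoolExecutor() as pool:
    flat = list(pool.map(entry, pairs))
X = [flat[i * 4:(i + 1) * 4] for i in range(4)]
```

In Mathematica itself the analogous move is to expand Nw . Transpose[Nw] into explicit elements and map Integrate over them with ParallelTable or ParallelMap.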

R: making cluster in doParallel / snowfall hangs

Submitted by 久未见 on 2020-01-13 13:07:02
Question: I've got two servers on a LAN with fresh installs of CentOS 6.4 minimal and R 3.0.1. Both computers have the doParallel, snow, and snowfall packages installed. The servers can ssh to each other fine. When I attempt to make clusters in either direction, I get a prompt for a password, but after entering the password, it just hangs there indefinitely. makePSOCKcluster("192.168.1.1",user="username") How can I troubleshoot this? Edit: I also tried calling makePSOCKcluster on the above-mentioned

Vertical and Horizontal Parallelism

Submitted by 放肆的年华 on 2020-01-13 11:47:28
Question: While recently working in the parallel-computing domain, I came across two terms: "vertical parallelism" and "horizontal parallelism". Some people describe OpenMP (shared-memory parallelism) as vertical and MPI (distributed-memory parallelism) as horizontal parallelism. Why are these terms called so? I don't see the reason. Is it just terminology? Answer 1: The terms don't seem to be widely used, perhaps because often a process or system is using both without distinction.

Using Task with Parallel.Foreach in .NET 4.0

Submitted by 人盡茶涼 on 2020-01-13 11:07:20
Question: I started off trying to add a progress bar to the Windows form that updates the progress of code running within a Parallel.Foreach loop. In order to do this, the UI thread has to be available to update the progress bar. I used a Task to run the Parallel.Foreach loop to allow the UI thread to update the progress bar. The work done within the Parallel.Foreach loop is rather intensive. After running the executable of the program (not debugging within Visual Studio) with the Task, the program
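The pattern in the question can be sketched independently of .NET: the heavy parallel loop runs in the background so the foreground (UI) thread stays free to poll and display progress. A minimal Python sketch under those assumptions (the names are illustrative, not the TPL or WinForms API):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

progress = 0
lock = threading.Lock()

def work(i):
    # Stand-in for the intensive per-item work; bump a shared counter.
    global progress
    time.sleep(0.01)
    with lock:
        progress += 1

def run_all():
    # The parallel loop, run entirely on background workers.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(work, range(20)))

background = threading.Thread(target=run_all)
background.start()
while background.is_alive():
    # The "UI" thread stays responsive; here it would redraw the
    # progress bar from the shared `progress` counter.
    time.sleep(0.005)
background.join()
print(progress)  # 20
```

The design point is the same as in the question: the foreground thread never joins the parallel work directly; it only observes shared progress state until the background task completes.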

parallel strlen?

Submitted by a 夏天 on 2020-01-13 10:19:08
Question: I'm wondering if there would be any merit in trying to code a strlen function to find the \0 terminator in parallel. If so, what should such a function take into account? Thanks. Answer 1: You'd have to make sure the NUL found by a thread is the first NUL in the string, which means that the threads would need to synchronize on what their lowest NUL location is. So while it could be done, the overhead for the sync would be far more expensive than any potential gain from parallelization. Also, there's
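The scheme the answer describes can be sketched as follows: split the buffer into chunks, let each worker find the first NUL in its chunk, then take the minimum offset. That final min-reduction is exactly the synchronization the answer warns about. A minimal Python sketch, purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def first_nul(buf, start, end):
    # First NUL in buf[start:end], or len(buf) if this chunk has none.
    for k in range(start, end):
        if buf[k] == 0:
            return k
    return len(buf)

def parallel_strlen(buf, chunks=4):
    step = max(1, len(buf) // chunks)
    ranges = [(i, min(i + step, len(buf))) for i in range(0, len(buf), step)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        hits = pool.map(lambda r: first_nul(buf, *r), ranges)
    # The reduction: the string length is the LOWEST NUL any chunk found,
    # even though a later chunk may also contain a NUL.
    return min(hits)

data = b"hello world\x00garbage\x00tail"
print(parallel_strlen(data))  # 11
```

A real C implementation would also have to worry about reading past the terminator into unmapped pages, which is part of the "Also, there's" caveat the excerpt cuts off.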

How to determine if numba's prange actually works correctly?

Submitted by て烟熏妆下的殇ゞ on 2020-01-13 09:49:14
Question: In another Q+A (Can I perform dynamic cumsum of rows in pandas?) I made a comment regarding the correctness of using prange with this code (from that answer):

from numba import njit, prange

@njit
def dynamic_cumsum(seq, index, max_value):
    cumsum = []
    running = 0
    for i in prange(len(seq)):
        if running > max_value:
            cumsum.append([index[i], running])
            running = 0
        running += seq[i]
    cumsum.append([index[-1], running])
    return cumsum

The comment was: I wouldn't recommend parallelizing a loop that isn't
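The concern can be demonstrated without numba at all: `running` is carried across iterations, so cutting the range into independent chunks, which is what a parallel loop effectively does, changes the result. A pure-Python illustration (the chunked split below simulates a 2-way parallel execution; it is not what numba literally does):

```python
def dynamic_cumsum(seq, index, max_value, start=0, stop=None, running=0):
    # Sequential version of the loop in the question, generalized to run
    # over a sub-range with an incoming `running` value.
    stop = len(seq) if stop is None else stop
    out = []
    for i in range(start, stop):
        if running > max_value:
            out.append([index[i], running])
            running = 0
        running += seq[i]
    return out, running

seq = list(range(1, 11))
idx = list(range(10))

sequential, run = dynamic_cumsum(seq, idx, max_value=15)
sequential.append([idx[-1], run])

# Simulate a 2-way parallel split: each half starts with running = 0,
# losing the state carried across the chunk boundary.
left, _ = dynamic_cumsum(seq, idx, 15, 0, 5)
right, rr = dynamic_cumsum(seq, idx, 15, 5, 10)
chunked = left + right + [[idx[-1], rr]]

print(sequential == chunked)  # False: the split loses the carried state
```

This loop-carried dependency on `running` is why `prange` is only safe for iterations that are independent (or reduce via an operation numba recognizes).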

Multi-dimensional nested OpenMP loop

Submitted by 我是研究僧i on 2020-01-13 08:23:22
Question: What is the proper way to parallelize a multi-dimensional embarrassingly parallel loop in OpenMP? The number of dimensions is known at compile-time, but which dimensions will be large is not. Any of them may be one, two, or a million. Surely I don't want N omp parallel regions for an N-dimensional loop... Thoughts: The problem is conceptually simple. Only the outermost 'large' loop needs to be parallelized, but the loop dimensions are unknown at compile-time and may change. Will dynamically
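One standard workaround (the idea behind OpenMP's `collapse` clause) is to flatten the N-dimensional index space into a single range and parallelize that, so it never matters which dimension happens to be large. A minimal Python sketch of the flattening, with illustrative dimension sizes:

```python
from concurrent.futures import ThreadPoolExecutor

dims = (3, 1, 4)  # any of these could be 1 or a million

def unflatten(flat, dims):
    # Recover (i0, i1, ..., iN-1) from a flat row-major index.
    idx = []
    for d in reversed(dims):
        flat, r = divmod(flat, d)
        idx.append(r)
    return tuple(reversed(idx))

total = 1
for d in dims:
    total *= d

def body(flat):
    i, j, k = unflatten(flat, dims)
    return i + j + k  # stand-in for the real loop body

# Parallelize the single flattened range, regardless of the shape.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(body, range(total)))
```

In OpenMP itself the equivalent for a fixed nesting depth is `#pragma omp parallel for collapse(N)`; the manual flattening above is the fallback when the nest cannot be expressed that way.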