parallel-processing

Control.Parallel compile issue in Haskell

若如初见. Posted on 2020-01-13 07:55:09
Question: The compiler complains on several different example applications of parallel Haskell with this message:

Could not find module `Control.Parallel.Strategies'

The GHC compile command:

ghc -threaded -i/sudo/dir/par-modules/3 -cpp -DEVAL_STRATEGIES -eventlog --make parFib.hs

The same happens with the simpler:

ghc -O2 --make -threaded parFib.hs

What detail am I overlooking? Am I missing some PATH variable? The imports look like this:

module Main where
import System
#if defined(EVAL_STRATEGIES)
import
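The likely cause (an assumption; no answer is preserved above) is that the parallel package, which provides Control.Parallel.Strategies, is simply not installed, in which case no -i search path can help. A minimal sketch for verifying the installation, assuming cabal is available; the file name and the fib example are illustrative, not from the question:

-- Install the package first, e.g.: cabal install parallel
-- Then build with: ghc -threaded --make ParCheck.hs
module Main where

import Control.Parallel.Strategies (parMap, rdeepseq)

-- Naive Fibonacci, deliberately expensive so the parallel map has real work
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (parMap rdeepseq fib [25 .. 30])

If this compiles and runs, the original -cpp -DEVAL_STRATEGIES build should find the module as well.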

Matlab Parallel Computing with Simulink Model

那年仲夏 Posted on 2020-01-13 06:31:27
Question: I'm working on a project in which parallel computing would be a huge advantage. The project simulates multiple Simulink models. I ran the simulation with a normal for loop, but since it takes days to simulate I decided to try a parfor loop, and that is where the problem begins. First I'll give you pictures of my code, the workspace, and the Simulink part that is causing the problem. Here's my code:

apool = gcp('nocreate');
if isempty(apool)
    apool = parpool('local');
end
wpath = pwd;

Has anyone tried to parallelize multiple imputation in 'mice' package?

隐身守侯 Posted on 2020-01-13 05:26:29
Question: I'm aware that the Amelia R package provides some support for parallel multiple imputation (MI). However, a preliminary analysis of my study's data revealed that the data is not multivariate normal, so unfortunately I can't use Amelia. Consequently, I've switched to the mice R package for MI, as it can perform MI on data that is not multivariate normal. Since the MI process via mice is very slow (currently I'm using an AWS m3.large 2-core instance), I've started wondering
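One common approach, sketched here under stated assumptions (it uses the parallel package that ships with R and the nhanes example data bundled with mice; it is not an answer preserved from the thread): run several independent mice() calls on separate workers with different seeds, then combine the resulting mids objects with ibind():

library(mice)
library(parallel)

cl <- makeCluster(4)               # one worker per core
clusterEvalQ(cl, library(mice))    # load mice on every worker
# 4 workers x m = 5 imputations each -> 20 imputations total
imps <- parLapply(cl, 1:4, function(seed) {
  mice(nhanes, m = 5, seed = seed, printFlag = FALSE)
})
stopCluster(cl)

merged <- Reduce(ibind, imps)      # combine into a single mids object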

Unix shell script run SQL scripts in parallel

橙三吉。 Posted on 2020-01-13 04:47:58
Question: How do I set up this KornShell (ksh) script so that tst2.sql and tst3.sql run in parallel after tst1.sql is done, and tst4.sql runs after 2 and 3 are complete? Is this possible?

#!/usr/bin/ksh
#SET ENVIRONMENT ORACLE
sqlplus user1/pw @/home/scripts/tst1.sql
sqlplus user1/pw @/home/scripts/tst2.sql
sqlplus user1/pw @/home/scripts/tst3.sql
sqlplus user1/pw @/home/scripts/tst4.sql
exit 0

Answer 1: The first command should run synchronously. If you start the two last commands as background processes (add & at the end
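Spelled out, that suggestion looks like this (a sketch; the paths and credentials are the question's own): background the two middle scripts with & and use wait to block until both have finished before the fourth starts:

#!/usr/bin/ksh
# tst1 runs synchronously, as before
sqlplus user1/pw @/home/scripts/tst1.sql

# tst2 and tst3 run in parallel as background jobs
sqlplus user1/pw @/home/scripts/tst2.sql &
sqlplus user1/pw @/home/scripts/tst3.sql &
wait   # block until both background jobs have finished

# tst4 runs only after 2 and 3 are complete
sqlplus user1/pw @/home/scripts/tst4.sql
exit 0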

Why does my parallel code generate an error?

爱⌒轻易说出口 Posted on 2020-01-13 03:38:31
Question:
Issue 1: When sys.stdout.write is not wrapped in a separate function, the code below fails.
Issue 2: When sys.stdout.write is wrapped in a separate function, the code prints spaces between each letter.

Code (v1):

#!/usr/bin/env python
import pp
import sys

def main():
    server = pp.Server()
    for c in "Hello World!\n":
        server.submit(sys.stdout.write, (c,), (), ("sys",))()

if __name__ == "__main__":
    main()

Trace:

$ ./parhello.py
Traceback (most recent call last):
  File "./parhello.py", line 15, in
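A sketch of the usual fix (assuming the pp module; the work function name is illustrative, and the diagnosis is an inference since the traceback above is cut off): pp runs jobs in separate worker processes, so a worker's sys.stdout is not the parent's terminal, and builtin methods like sys.stdout.write do not survive pp's serialization. Have the job return its result and do all the writing in the main process:

#!/usr/bin/env python
import pp
import sys

def work(c):
    # stand-in for real per-job work; hands the character back unchanged
    return c

def main():
    server = pp.Server()
    # submit everything first so the jobs can actually run in parallel...
    jobs = [server.submit(work, (c,)) for c in "Hello World!\n"]
    # ...then collect in submission order and print from the parent
    for job in jobs:
        sys.stdout.write(job())

if __name__ == "__main__":
    main()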

Unexpected Scalability results in Java Fork-Join (Java 8)

廉价感情. Posted on 2020-01-13 03:33:11
Question: Recently I was running some scalability experiments with Java Fork-Join. I used the non-default constructor ForkJoinPool(int parallelism), passing the desired parallelism (number of workers) as the constructor argument. Specifically, the following piece of code:

public static void main(String[] args) throws InterruptedException {
    ForkJoinPool pool = new ForkJoinPool(Integer.parseInt(args[0]));
    pool.invoke(new ParallelLoopTask());
}

static class ParallelLoopTask extends
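The question's task class is cut off above; here is a self-contained sketch of the same experimental setup (the loop body, array size, threshold, and class names are illustrative, not the question's): a RecursiveAction that splits an index range until it is small enough to run sequentially:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ScalabilityTest {
    static final int N = 1 << 22;
    static final int THRESHOLD = 1 << 13;
    static final double[] out = new double[N];

    static class ParallelLoopTask extends RecursiveAction {
        final int lo, hi;
        ParallelLoopTask(int lo, int hi) { this.lo = lo; this.hi = hi; }

        @Override
        protected void compute() {
            if (hi - lo <= THRESHOLD) {
                for (int i = lo; i < hi; i++)    // sequential leaf
                    out[i] = Math.sqrt(i);
            } else {
                int mid = (lo + hi) >>> 1;       // split and recurse
                invokeAll(new ParallelLoopTask(lo, mid),
                          new ParallelLoopTask(mid, hi));
            }
        }
    }

    public static void main(String[] args) {
        int parallelism = Integer.parseInt(args[0]);
        ForkJoinPool pool = new ForkJoinPool(parallelism);
        long t0 = System.nanoTime();
        pool.invoke(new ParallelLoopTask(0, N));
        System.out.printf("%d workers: %.1f ms%n",
                parallelism, (System.nanoTime() - t0) / 1e6);
    }
}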

Why do Scala parallel collections sometimes cause an OutOfMemoryError?

て烟熏妆下的殇ゞ Posted on 2020-01-13 02:13:11
Question: This takes around 1 second:

(1 to 1000000).map(_+3)

while this gives java.lang.OutOfMemoryError: Java heap space:

(1 to 1000000).par.map(_+3)

EDIT: I have a standard Scala 2.9.2 configuration and am typing this at the scala prompt. In the bash launcher I can see

[ -n "$JAVA_OPTS" ] || JAVA_OPTS="-Xmx256M -Xms32M"

and I don't have JAVA_OPTS set in my environment. 1 million integers = 8 MB; creating the list twice = 16 MB.

Answer 1: It seems definitely related to the JVM memory options and to the memory required to stock a
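A workaround consistent with that launcher line (a sketch; the 512M figure is an assumption, anything comfortably above the 256M default should behave similarly): the script only applies -Xmx256M when JAVA_OPTS is unset, so exporting a larger heap before starting the REPL gives the parallel collection room to work:

JAVA_OPTS="-Xmx512M -Xms32M" scala

scala> (1 to 1000000).par.map(_+3)   // completes without the OutOfMemoryError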

Julia: Using pmap correctly

荒凉一梦 Posted on 2020-01-13 02:09:09
Question: Why doesn't this do what I think it should?

benjamin@benjamin-VirtualBox:~$ julia -p 3

julia> @everywhere(function foom(bar::Vector{Any}, k::Integer)
           println(repeat(bar[2], bar[1]))
           return bar
       end)

julia> foo = {{1,"a"}, {2,"b"}, {3,"c"}}

julia> pmap(foom, foo, 5)
        From worker 2:  a
1-element Array{Any,1}:
 {1,"a"}

and that is all it outputs. I was expecting pmap to iterate through each tuple in foo and call foom on it.

EDIT: It works correctly when I don't pass other arguments in:

julia>
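A plausible explanation (inferred from pmap's documented behavior, not from an answer preserved above): pmap(f, c1, c2) zips its collection arguments and stops at the shortest one, and the bare 5 acts as a one-element collection, so foom runs exactly once, as foom(foo[1], 5). Closing over the constant restores the expected iteration:

julia> pmap(bar -> foom(bar, 5), foo)   # the closure carries the constant 5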

Always run a constant number of subprocesses in parallel

不问归期 Posted on 2020-01-12 19:27:07
Question: I want to use subprocesses to run 20 instances of a script in parallel. Let's say I have a big list of URLs with around 100,000 entries, and my program should ensure that 20 instances of my script are working on that list at all times. I wanted to code it as follows:

urllist = [url1, url2, url3, .., url100000]
i = 0
while number_of_subprocesses < 20 and i < 100000:
    subprocess.Popen(['python', 'script.py', urllist[i]])
    i = i + 1

My script just writes something into a database or text file. It
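A sketch of one way to enforce the limit (MAX_WORKERS, the polling interval, and the placeholder urllist are illustrative): poll the running Popen objects, drop the finished ones, and top the pool back up to 20 until the queue is drained:

#!/usr/bin/env python
import subprocess
import time

MAX_WORKERS = 20
urllist = ["url%d" % i for i in range(100000)]   # placeholder list

queue = list(urllist)
running = []
while queue or running:
    # drop workers that have finished (poll() is None while still running)
    running = [p for p in running if p.poll() is None]
    # refill the pool up to the limit
    while queue and len(running) < MAX_WORKERS:
        running.append(subprocess.Popen(['python', 'script.py', queue.pop(0)]))
    time.sleep(0.5)   # avoid a busy-wait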
