multicore

What is a “spark” in Haskell

◇◆丶佛笑我妖孽 submitted on 2019-11-30 00:18:52
I'm confused about the notion of a "spark". Is it a thread in Haskell? Or is it the action of spawning a new thread? Thanks, everybody. To summarize: sparks are not threads but units of computation (tasks, to put it in C#/Java terms), so they are the Haskell way of implementing task parallelism. See A Gentle Introduction to Glasgow Parallel Haskell. Parallelism is introduced in GPH by the `par` combinator, which takes two arguments that are to be evaluated in parallel. The expression p `par` e (here we use Haskell's infix operator notation) has the same value as e, and is not strict in its …
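As a minimal sketch (my own illustration, not from the question; it needs the `parallel` package and `ghc -threaded`, run with `+RTS -N`), a naive parallel Fibonacci shows the idiom: `par` creates a spark for one argument, which an idle worker thread may pick up, while `pseq` forces the other argument on the current thread:

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci; x is sparked for possible parallel evaluation
-- while y is evaluated here, then the two results are combined.
nfib :: Int -> Int
nfib n | n < 2 = 1
nfib n = x `par` (y `pseq` x + y)
  where x = nfib (n - 1)
        y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)
```

If no worker is free, the spark is simply evaluated later by whoever needs the value, which is why sparks are so much cheaper than OS threads.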

Memory Fences - Need help to understand

允我心安 submitted on 2019-11-29 23:18:54
I'm reading Memory Barriers by Paul E. McKenney, http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf. Everything is explained in great detail, and just when everything seems clear I encounter one sentence that calls everything into question and makes me think I understood nothing. Let me show the example:

void foo(void)
{
    a = 1;  /* #1 */
    b = 1;  /* #2 */
}

void bar(void)
{
    while (b == 0) continue;  /* #3 */
    assert(a == 1);           /* #4 */
}

Let's say these two functions are running on different processors. Now what could possibly happen is that the store to a (#1) could be seen after the store to b (#2) by the second …

Multicore + Hyperthreading - how are threads distributed?

有些话、适合烂在心里 submitted on 2019-11-29 21:28:08
I was reading a review of the new Intel Atom 330, where they noted that Task Manager shows 4 cores: two physical cores, plus two more simulated by Hyper-Threading. Suppose you have a program with two threads. Suppose also that these are the only threads doing any work on the PC; everything else is idle. What is the probability that the OS will put both threads on the same core? This has huge implications for program throughput. If the answer is anything other than 0%, are there any mitigation strategies other than creating more threads? I expect there will be different answers for Windows, …

Multi-Core and Concurrency - Languages, Libraries and Development Techniques [closed]

拜拜、爱过 submitted on 2019-11-29 19:20:28
The CPU architecture landscape has changed; multiple cores are a trend that will change how we have to develop software. I've done multi-threaded development in C, C++ and Java, and I've done multi-process development using various IPC mechanisms. Traditional approaches of using threads don't seem to make it easy for the developer to utilize hardware that supports a high degree of concurrency. What languages, libraries and development techniques are you aware of that help alleviate the traditional challenges of creating concurrent applications? I'm obviously thinking of issues like deadlocks …

C++ Parallelization Libraries: OpenMP vs. Thread Building Blocks [closed]

非 Y 不嫁゛ submitted on 2019-11-29 19:12:38
I'm going to retrofit my custom graphics engine so that it takes advantage of multicore CPUs. More exactly, I am looking for a library to parallelize loops. It seems to me that both OpenMP and Intel's Threading Building Blocks are very well suited for the job. Also, both are supported by Visual Studio's C++ compiler and most other popular compilers, and both libraries seem quite straightforward to use. So, which one should I choose? Has anyone tried both libraries who can give me some pros and cons of using either one? Also, what did you choose to work with in the end? Thanks, Adrian. I haven…

How to use multicore with loops in R

删除回忆录丶 submitted on 2019-11-29 18:22:38
I need to speed up these nested loops; how can I do that, please?

for(M_P in 0:9) {
for(M_D in 0:(9-M_P)) {
for(M_A in 0:(9-M_P-M_D)) {
for(M_CC in 0:(9-M_P-M_D-M_A)) {
for(M_CD in (9-M_P-M_D-M_A-M_CC)) {
for(G_D in 0:9) {
for(G_A in 0:(9-G_D)) {
for(G_CC in 0:(9-G_D-G_A)) {
for(G_CD in (9-G_D-G_A-G_CC)) {
for(S_D in 0:9) {
for(S_A in 0:(9-S_D)) {
for(S_CC in 0:(9-S_D-S_A)) {
for(S_CD in (9-S_D-S_A-S_CC)) {
for(Q_P in 0:3) {
for(Q_D in 0:(3-Q_P)) {
for(Q_A in 0:(3-Q_P-Q_D)) {
for(Q_CC in 0:(3-Q_P-Q_D-Q_A)) {
for(Q_CD in (3-Q_P-Q_D-Q_A-Q_CC)) {

It's taking forever to compute. How can I do it? I …
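As a hedged sketch (the function names come from R's base `parallel` package; the loop body is hypothetical, since the question truncates before showing it), the outermost loop can be distributed across cores with `mclapply` on Unix-alikes (on Windows, use `parLapply` with a cluster instead):

```r
library(parallel)

# Hypothetical: wrap the remaining 17 inner loops in a function of the
# outermost index, then run the 10 outer iterations on separate cores.
worker <- function(M_P) {
  # for (M_D in 0:(9 - M_P)) { ... all inner loops unchanged ... }
  M_P  # placeholder for whatever the loop body actually computes
}

results <- mclapply(0:9, worker, mc.cores = detectCores())
```

Parallelism helps only by a constant factor here; with 18 nested loops, replacing the enumeration with a vectorized or combinatorial formulation will matter far more than the number of cores.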

Threads vs Cores

耗尽温柔 submitted on 2019-11-29 16:56:26
Question: Say I have a processor like this, which reports # cores = 4 and # threads = 4, without Hyper-Threading support. Does that mean I can run 4 programs/processes simultaneously (since a core is capable of running only one thread)? Or does it mean I can run 4 x 4 = 16 programs/processes simultaneously? From my digging: if there is no Hyper-Threading, there is only one hardware thread per core. Correct me if I am wrong. Answer 1: That's basically correct, with the obvious qualifier that most operating systems let …

Is there a way to improve multicore / multiprocessor performance of the Java compiler?

╄→гoц情女王★ submitted on 2019-11-29 16:30:32
Question: My coworker noticed that when javac is compiling it only utilizes a single core. Is there anything like make's -j flag (familiar from building with gcc) for Java that will allow us to distribute the compiler workload across cores or processors? If not, do you think this will ever be possible, or is there some sort of fundamental restriction as a result of Java's design? The environment is Linux with the Sun J2SE 1.6.0.11 JDK. Answer 1: Although not exactly an answer to your question, some build environments like …
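For completeness (these are standard flags of the major Java build tools, not something javac itself offers), the usual way to get multicore builds is per-module parallelism at the build-system level:

```shell
# Maven: spawn one build thread per available core
mvn -T 1C install

# Gradle: build decoupled subprojects concurrently
gradle build --parallel
```

Each individual javac invocation still runs largely on one core; the speedup comes from compiling independent modules at the same time.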

How to run code on every CPU

本秂侑毒 submitted on 2019-11-29 16:07:56
I am trying to set the Performance Monitor User Mode Enable register on all CPUs on a Nexus 4 running a mako kernel. Right now I am setting the registers in a loadable module:

void enable_registers(void *info)
{
    unsigned int set = 1;
    /* enable user-mode access to the performance counter */
    asm volatile ("mcr p15, 0, %0, c9, c14, 0\n\t" : : "r" (set));
}

int init_module(void)
{
    online = num_online_cpus();
    possible = num_possible_cpus();
    present = num_present_cpus();
    printk(KERN_INFO "Online Cpus=%d\nPossible Cpus=%d\nPresent Cpus=%d\n",
           online, possible, present);
    on_each_cpu(enable_registers, …

Possible sources for random number seeds

半腔热情 submitted on 2019-11-29 10:29:01
Two points: first, the example is in Fortran, but I think it should hold for any language; second, the built-in random number generators are not truly random, and other generators exist, but we're not interested in using them for what we're doing. Most discussions of random seeds acknowledge that if the program doesn't set a seed at run time, then the seed is fixed at compile time, so the same sequence of numbers is generated every time the program is run, which is not good for random numbers. One way to overcome this is to seed the random number generator with the system clock. However, …