multicore

OpenJDK JVM does not schedule threads on multiple cores

Submitted by 醉酒当歌 on 2020-01-02 04:48:07
Question: When I run my multi-threaded Java program on the OpenJDK 6 JVM distributed with Ubuntu 12.04, all threads are scheduled on a single core. But when I run the exact same program on the JVM from Oracle's latest 1.7 JDK, it nicely rotates my 20 threads across all 24 available cores. The OpenJDK documentation explains that Java threads are mapped to native threads, but that doesn't seem to be happening. Could there be something configured wrong in my OpenJDK installation, or does it not

How to compile C# for multiple processor machines? (With VS 2010 or csc.exe)

Submitted by 独自空忆成欢 on 2020-01-01 14:44:28
Question: Greetings! I've searched for compiler (csc.exe) options on MSDN, and I found an answer here on Stack Overflow about compiling with multiple processors. But my problem is about compiling for multiple processors, as follows. The university where I'm graduating has an 11-machine cluster (6 quad-core machines and 5 four-core bi-processor machines). It runs under Linux, but I can install Mono there. And instead of compiling with multiple processors or cores, I want to compile for multiple

Is mclapply guaranteed to return its results in order?

Submitted by 大城市里の小女人 on 2020-01-01 08:06:35
Question: I'm working with mclapply from the multicore package (on Ubuntu), and I'm writing a function that requires that the results of mclapply(x, f) are returned in order (that is, f(x[1]), f(x[2]), …, f(x[n])). # multicore doesn't work on Windows require(multicore) unlist(mclapply(1:10, function(x){ Sys.sleep(sample(1:5, size = 1)); identity(x) }, mc.cores = 2)) [1] 1 2 3 4 5 6 7 8 9 10 The above code seems to imply that mclapply returns results in the same order as lapply. However, if this
What is LLVM and How is replacing Python VM with LLVM increasing speeds 5x?

Submitted by ☆樱花仙子☆ on 2019-12-31 09:11:30
Question: Google is sponsoring an open-source project to increase the speed of Python by 5x. Unladen Swallow seems to have a good project plan. Why is concurrency such a hard problem? Is LLVM going to solve the concurrency problem? Are there solutions other than multi-core for hardware advancement? Answer 1: LLVM is several things at once: a kind of virtual machine and optimizing compiler, combined with different frontends that take input in a particular language and output the result in an intermediate

How to use GNU make --max-load on a multicore Linux machine?

Submitted by 偶尔善良 on 2019-12-30 04:01:05
Question: From the documentation for GNU make (http://www.gnu.org/software/make/manual/make.html#Parallel): When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the ‘-l’ option to tell make to limit the number of jobs to run at once, based on the load average. The ‘-l’ or ‘--max-load’ option is followed by a floating-point number. For example, -l 2.5 will not let make start more than one job if the load average is above 2.5. The ‘-l’
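In practice `-l` is usually combined with `-j`; a command-line sketch (the numbers are illustrative, not from the question):

```
# Up to 24 parallel jobs, but don't start additional jobs while the
# load average is above 20 (make always allows at least one job).
make -j24 --max-load=20

# A portable form that scales with the machine's core count:
make -j"$(nproc)" --max-load="$(nproc)"
```

Without `-l`, `-j24` will keep 24 jobs running even when the machine is already saturated by other work; the load limit lets make back off on a shared machine.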

How to make numba @jit use all cpu cores (parallelize numba @jit)

Submitted by 微笑、不失礼 on 2019-12-30 00:56:07
Question: I am using numba's @jit decorator for adding two numpy arrays in Python. Performance is much higher with @jit than with plain Python. However, it does not utilize all CPU cores, even if I pass @numba.jit(nopython = True, parallel = True, nogil = True). Is there any way to make use of all CPU cores with numba @jit? Here is my code: import time import numpy as np import numba SIZE = 2147483648 * 6 a = np.full(SIZE, 1, dtype = np.int32) b = np.full(SIZE, 1, dtype = np.int32) c = np
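The usual fix is that `parallel=True` only parallelizes loops numba can prove independent, which in practice means writing the loop with `numba.prange` instead of `range`. A sketch under that assumption (with a fallback so it still runs where numba is not installed, and a much smaller SIZE than the question's multi-gigabyte arrays):

```python
import numpy as np

try:
    from numba import njit, prange
except ImportError:
    # Fallback so the sketch runs without numba: prange degrades to
    # range and njit to a no-op decorator (single-core, but correct).
    prange = range
    def njit(*args, **kwargs):
        def wrap(f):
            return f
        return wrap

@njit(parallel=True, nogil=True)
def add_arrays(a, b, c):
    # prange tells numba it may split this loop across all cores.
    for i in prange(a.size):
        c[i] = a[i] + b[i]

SIZE = 1_000_000  # illustrative; the question uses 2147483648 * 6
a = np.full(SIZE, 1, dtype=np.int32)
b = np.full(SIZE, 1, dtype=np.int32)
c = np.empty(SIZE, dtype=np.int32)
add_arrays(a, b, c)
print(c[:5])  # [2 2 2 2 2]
```

Note that for a plain elementwise `a + b`, numpy itself is already efficient; the `prange` pattern pays off when the loop body does enough work per element to keep the cores busy.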

Python multicore programming [duplicate]

Submitted by 主宰稳场 on 2019-12-29 04:44:26
Question: This question already has answers here: Threading in Python [closed] (7 answers). Closed 5 years ago. Please consider a class as follows: class Foo: def __init__(self, data): self.data = data def do_task(self): # do something with data In my application I have a list containing several instances of the Foo class. The aim is to execute do_task for all Foo objects. A first implementation is simply: # execute tasks of all Foo objects instantiated for f_obj in my_foo_obj_list: f_obj.do_task() I'd like to

How are you taking advantage of Multicore?

Submitted by 旧街凉风 on 2019-12-29 02:18:11
Question: As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it will be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are: How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is

How does sched_setaffinity() work?

Submitted by 点点圈 on 2019-12-28 12:09:05
Question: I am trying to understand how the Linux syscall sched_setaffinity() works. This is a follow-on from my question here. I have this guide, which explains how to use the syscall and has a pretty neat (working!) example. So I downloaded the Linux 2.6.27.19 kernel sources. I did a grep for lines containing that syscall, and I got 91 results. Not promising. Ultimately, I'm trying to understand how the kernel is able to set the instruction pointer for a specific core (or processor). I am familiar
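For a quick feel of the behavior without writing C, Python's `os.sched_setaffinity` / `os.sched_getaffinity` wrap the same Linux syscalls. The syscall does not directly "set the instruction pointer for a core"; it narrows the set of CPUs the scheduler is allowed to migrate the task onto. A Linux-only sketch (guarded so it is a no-op elsewhere):

```python
import os

if hasattr(os, "sched_getaffinity"):  # Linux-only API
    pid = 0  # 0 means "the calling process"
    allowed = os.sched_getaffinity(pid)
    print("allowed CPUs:", sorted(allowed))

    # Pin the process to one CPU; the scheduler migrates the task onto
    # that CPU at its next scheduling decision.
    first = min(allowed)
    os.sched_setaffinity(pid, {first})
    assert os.sched_getaffinity(pid) == {first}

    # Restore the original mask.
    os.sched_setaffinity(pid, allowed)
else:
    print("sched_setaffinity is not available on this platform")
```

The kernel side of this lives mostly in kernel/sched.c in 2.6-era sources (sched_setaffinity copies the mask into the task struct and, if the task is running on a now-disallowed CPU, triggers a migration), which is why a plain grep for the name gives so many scattered hits.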