hpc

Running NetLogo on an HPC machine: how do you specify the number of cores to be used?

Submitted by 我的未来我决定 on 2019-12-13 02:23:46
Question:

    $ wget https://ccl.northwestern.edu/netlogo/5.1.0/netlogo-5.1.0.tar.gz
    $ tar -xzf netlogo-5.1.0.tar.gz
    $ ~/netlogo-5.1.0/netlogo-headless.sh \
        --model ~/myproject/MyModel.nlogo \
        --experiment MyExperiment \
        --table ~/myproject/MyNewOutputData.csv

I use the commands above to run NetLogo headless on an HPC machine. The problem is: how do I specify the number of cores to be used, or does it take the maximum available by default?

Answer 1: A look at http://ccl.northwestern.edu/netlogo/5.1.0/docs

Referencing job index in LSF job array

Submitted by 心已入冬 on 2019-12-12 18:19:12
Question: I'm trying to pass the index of a job in a job array as a parameter to another bash script.

    numSims=3
    numTreatments=6   # uses numTreatments top rows of parameters.csv
    maxFail=10
    j=1
    while [ $j -le $numSims ]; do
        bsub -q someQueue -J "mySim[1-$numTreatments]%2" ./another_script.sh $LSB_JOBINDEX $j $maxFail
        let j=j+1
    done

The ultimate idea here is to submit, for each of 1, ..., numTreatments, numSims jobs (simulations). I'd like two jobs running at a time (%2). Outputs have the form XX

Probe for MPI_Bcast or MPI_Send

Submitted by 时光毁灭记忆、已成空白 on 2019-12-12 18:02:02
Question: I have a program with a master/slave setup, and I have implemented some functions for the master that send different kinds of data to the slaves. Some functions send to individual slaves, but some broadcast information to all the slaves via MPI_Bcast. I want to have only one receive function in the slaves, so I want to know whether I can probe for a message and tell if it was broadcast or sent as a normal blocking message, since there are different methods to receive what was

Log files in massively distributed systems

Submitted by 旧巷老猫 on 2019-12-12 10:33:39
Question: I do a lot of work in the grid and HPC space, and one of the biggest challenges we have with a system distributed across hundreds (or in some cases thousands) of servers is analysing the log files. Currently log files are written locally to disk on each blade, but we could also consider publishing logging information using, for example, a UDP appender and collecting it centrally. Given that the objective is to be able to identify problems in as close to real time as possible, what should we do?

Error building a C/C++ application with COMPSs: Hardcoded path

Submitted by 倖福魔咒の on 2019-12-12 09:46:56
Question: I am trying to build a COMPSs application developed with the C/C++ binding. When I build the application, I get the following error. Do you have an idea of how I can solve this issue?

    xxxx:~/xxx/c/increment> buildapp increment
    *---------------------------------------------------------------------*
    *                                                                     *
    *           BSC - Barcelona Supercomputing Center                     *
    *                    COMP Superscalar                                 *
    *                                                                     *
    *           C/C++ Applications - BUILD SCRIPT                         *
    *                                                                     *
    *                                                                     *
    *  More information at COMP Superscalar Website: http://compss.bsc.es *
    *

What is better practice in high-performance computing: passing a struct of data into a function or a set of variables?

Submitted by 三世轮回 on 2019-12-12 05:28:16
Question: Imagine that I have a struct containing a set of variables that describe an object, which in my case is a grid. If I have a function that only uses a subset of the grid, I was wondering whether there are any performance differences between the two variants of the computational_kernel functions below. The kernels are the same, except that in the variant where the struct is passed, the function has to extract itot, jtot and ktot from the struct before the heavy computation is done.

    struct Grid
    {
        int itot;
        int

Force load R packages while running the job in cluster

Submitted by 拈花ヽ惹草 on 2019-12-12 05:27:34
Question: When I run a job on the HPC cluster in interactive mode, I can load the packages; if a package fails to load on the first attempt (I am not sure why some packages do), I can load it by running library(failed package) multiple times. But when I do qsub my_rscript_job.pbs, the packages fail to load. My my_rscript_job.pbs script is:

    #!/bin/bash
    #PBS -l walltime=100:00:00
    #PBS -l ncpus=1,mem=100g
    source ~/.bashrc
    Rscript /dmf/mypath/map.r -t 100

The packages I need to load in the map.r script are

Does the number of processes in MPI have a limit?

Submitted by 淺唱寂寞╮ on 2019-12-12 02:57:34
Question: I am reading "Using MPI" and trying to execute the code myself. There is a grid decomposition code in Chapter 6.3. It compiles with no warnings or errors, and runs with a small number of processes, but fails with larger numbers, say 30, on my laptop. My laptop has 4 cores, hyperthreaded, and 8 GB of RAM. Neither version of la_grid_2d_new works, but the first one tolerates a slightly larger number, say 35, and fails at 40 processes. I am not sure why. Could you help me please? Thanks a lot.

    #include <stdio

Is it possible for a vCPU to use CPUs from two different hardware computers?

Submitted by 大兔子大兔子 on 2019-12-12 02:13:21
Question: I've searched about this but I don't seem to get a fair answer. Let's say I want to create a VM that has a vCPU, and that vCPU must have 10 cores, but I only have 2 computers with 5 physical CPU cores each. Is it possible to create one vCPU backed by these two physical CPUs that performs like a regular single physical CPU? Update 1: Let's say I'm using VirtualBox, and the term vCPU refers to a virtual CPU; it's a well-known term. Update 2: I'm asking this because I'm doing a little

Limits with MPI_Send or MPI_Recv?

Submitted by 谁说我不能喝 on 2019-12-12 01:13:29
Question: Are there any limits on message size in MPI_Send or MPI_Recv, or limits imposed by the computer? When I try to send large data, it cannot complete. This is my code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>
    #include <math.h>
    #include <string.h>

    void AllGather_ring(void* data, int count, MPI_Datatype datatype, MPI_Comm communicator)
    {
        int me;
        MPI_Comm_rank(communicator, &me);
        int world_size;
        MPI_Comm_size(communicator, &world_size);
        int next = me + 1;
        if (next >= world_size)
            next = 0;
        int prev