hpc

R: Running a foreach %dopar% loop on an HPC MPI cluster

Posted by 一世执手 on 2020-01-14 14:28:07

Question: I got access to an HPC cluster with an MPI partition. My problem is that, no matter what I try, my code (which works fine on my PC) doesn't run on the HPC cluster. The code looks like this:

library(tm)
library(qdap)
library(snow)
library(doSNOW)
library(foreach)

cl <- makeCluster(30, type="MPI")
registerDoSNOW(cl)
np <- getDoParWorkers()
np
Base = "./Files1a/"
files = list.files(path=Base, pattern="\\.txt")

for(i in 1:length(files)){ ...some definitions and variable generation...
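A frequent cause of this kind of failure is starting R directly rather than under the MPI runtime: snow's "MPI" cluster type spawns its workers through MPI, so the R session itself usually needs to be launched by mpirun with a single process. A minimal submission sketch, assuming Open MPI and a scheduler that accepts plain shell scripts (the script and module names below are hypothetical, not from the question):

```shell
#!/bin/bash
# run_analysis.sh - hypothetical batch script for this job
# Load the site's MPI and R modules (names vary per cluster):
# module load openmpi R

# Start ONE R process; snow/Rmpi spawn the 30 workers through MPI itself.
# Launching R with -np 31 here typically makes makeCluster() hang or fail.
# mpirun -np 1 Rscript my_analysis.R
```

Whether `mpirun -np 1` or the site's `mpiexec` wrapper is correct depends on the local MPI installation, so check the cluster's documentation first.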

How to run binary executables on a multi-threaded HPC cluster?

Posted by 匆匆过客 on 2020-01-08 02:32:23

Question: I have this tool called cgatools from Complete Genomics (http://cgatools.sourceforge.net/docs/1.8.0/). I need to run some genome analyses on a High-Performance Computing cluster. I tried to run the job allocating more than 50 cores and 250 GB of memory, but it only uses one core and limits the memory to less than 2 GB. What would be my best option in this case? Is there a way to run binary executables on an HPC cluster making them use all the allocated memory?

Answer 1: The scheduler just runs the binary
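A single-threaded binary cannot use extra cores by itself, no matter how many the scheduler allocates; the usual workaround is to split the input and run several independent instances at once. A hedged sketch of the pattern, with `echo` standing in for the real cgatools invocation (whose actual subcommands and flags are not shown in the question):

```shell
#!/bin/bash
# Run one independent instance per chunk, up to 4 at a time (-P4).
# Replace the echo with the real single-threaded command on each chunk.
printf '%s\n' chunk1 chunk2 chunk3 chunk4 \
  | xargs -P4 -I{} sh -c 'echo "processed {}"'
```

Each instance is still limited to one core, but the allocation as a whole is used; memory caps below the requested amount usually come from the scheduler's per-process limits rather than from the binary.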

How to build GotoBlas2 on openSUSE 12.2

Posted by 北战南征 on 2020-01-06 08:51:15

Question: While building GotoBlas2 on my x86_64 machine using the default makefile, I encounter the following build error:

gcc -O2 -DEXPRECISION -m128bit-long-double -Wall -m64 -DF_INTERFACE_GFORT -fPIC -DSMP_SERVER -DMAX_CPU_NUMBER=8 -DASMNAME= -DASMFNAME=_ -DNAME=_ -DCNAME= -DCHAR_NAME=\"_\" -DCHAR_CNAME=\"\" -I.. -w -o linktest linktest.c ../libgoto2_nehalemp-r1.13.so -L/usr/lib64/gcc/x86_64-suse-linux/4.7 -L/usr/lib64/gcc/x86_64-suse-linux/4.7/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L

C# event subscription limits for single-threaded programs

Posted by 两盒软妹~` on 2020-01-05 18:47:49

Question: I'm attempting to monitor the status of many HPC jobs running in parallel from a single-threaded program. I'm subscribing to events raised by OnJobState, but when monitoring as few as three jobs, event state changes go missing and the job gets stuck running. I'm assuming I need a thread per job to catch all the events, but I can't find any information about the limits of event subscription in a single-threaded program. I would have thought the .NET platform would queue this all up, but that doesn

$SGE_TASK_ID not getting set with qsub array grid job

Posted by 限于喜欢 on 2020-01-04 15:16:31

Question: With a very simple zsh script:

#!/bin/zsh
nums=(1 2 3)
num=$nums[$SGE_TASK_ID]

$SGE_TASK_ID is the Sun Grid Engine task id. I'm using qsub to submit an array of jobs. I am following what is advised in the qsub manpage (http://www.clusterresources.com/torquedocs/commands/qsub.shtml#t) and submitting my array job as:

#script name: job_script.sh
qsub job_script.sh -t 1-3

$SGE_TASK_ID is not being set for this array job... does anyone have any ideas why? Thanks!

Answer 1: Try submitting the job like
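The likely culprit is argument order: options placed after the script name are passed to the job script itself, not to qsub, so `qsub job_script.sh -t 1-3` never requests an array job. The submission should read `qsub -t 1-3 job_script.sh`. A hedged bash equivalent of the indexing logic, with a default task id so it can also run outside the scheduler:

```shell
#!/bin/bash
# Sketch of job_script.sh; submit with: qsub -t 1-3 job_script.sh
: "${SGE_TASK_ID:=1}"   # default for test runs outside SGE
nums=(1 2 3)
# bash arrays are 0-based (zsh's are 1-based), hence the -1
num=${nums[$((SGE_TASK_ID - 1))]}
echo "task $SGE_TASK_ID selected $num"
```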

Run Rmpi on a cluster, specify library path

Posted by ⅰ亾dé卋堺 on 2020-01-03 02:49:13

Question: I'm trying to run an analysis in parallel on our computing cluster. Unfortunately, I had to set up Rmpi myself and may not have done so properly. Because I had to install all necessary packages into my home folder, I always have to call .libPaths('/home/myfolder/Rlib') before I can load packages. However, it appears that doMPI attempts to load itself before I can set the library path.

.libPaths('/home/myfolder/Rlib');
cat("Step 1")
library(doMPI)
cl <- startMPIcluster()
registerDoMPI(cl)
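Calling `.libPaths()` inside the script is too late for packages that workers load at startup. A hedged alternative is to export the library location through R's standard `R_LIBS` environment variable before launching, so every MPI-spawned R process inherits it (the `analysis.R` name below is a placeholder):

```shell
#!/bin/bash
# Exported variables are inherited by the spawned R workers,
# so doMPI can be found without any .libPaths() call in the script.
export R_LIBS=/home/myfolder/Rlib
echo "$R_LIBS"
# then launch, e.g.:
# mpirun -np 1 Rscript analysis.R
```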

How to get perf_event results on the 2nd-gen Nexus 7 with a Krait CPU

Posted by 心不动则不痛 on 2020-01-02 18:57:29

Question: Hi all. I'm trying to get PMU information such as instructions, cycles, cache misses, etc., on the 2nd-gen Nexus 7 with a Krait CPU. The perf tool is not working correctly, so I am using the following sample source code from the perf_event tutorials:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>
#include <asm/unistd.h>

static long perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu, int group_fd, unsigned long

How to append to a sparse domain in Chapel

Posted by 浪尽此生 on 2020-01-02 08:47:51

Question: I'm populating a sparse array in Chapel with a loop that reads over a CSV. I'm wondering what the best pattern is.

var dnsDom = {1..n_dims, 1..n_dims};
var spsDom: sparse subdomain(dnsDom);
for line in file_reader.lines() {
  var i = line[1]:int;
  var j = line[2]:int;
  spsDom += (i,j);
}

Is this an efficient way of doing it? Should I create a temporary array of tuples and append to spsDom every (say) 10,000 rows? Thanks!

Answer 1: The way you show in the snippet will expand the internal arrays of

Ensure a hybrid MPI/OpenMP job runs each OpenMP thread on a different core

Posted by 我们两清 on 2020-01-01 17:11:16

Question: I am trying to get a hybrid OpenMP/MPI job to run so that the OpenMP threads are separated by core (only one thread per core). I have seen other answers that use numactl and bash scripts to set environment variables, and I don't want to do that. I would like to do this only by setting OMP_NUM_THREADS and/or OMP_PROC_BIND and mpiexec options on the command line. I have tried the following. Let's say I want 2 MPI processes that each have 2 OpenMP threads, and each of the threads are
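The standard OpenMP 4.0 affinity variables can express "one thread per core" without numactl or wrapper scripts. A hedged sketch for the 2 ranks x 2 threads case (the `./hybrid_app` name is a placeholder, and the mpiexec syntax shown is Open MPI's):

```shell
#!/bin/bash
export OMP_NUM_THREADS=2     # 2 OpenMP threads per MPI rank
export OMP_PLACES=cores      # one "place" = one physical core
export OMP_PROC_BIND=spread  # spread the threads across distinct places
echo "$OMP_NUM_THREADS $OMP_PLACES $OMP_PROC_BIND"
# With Open MPI, reserving 2 cores ("processing elements") per rank
# would then look something like:
# mpiexec -np 2 --map-by socket:PE=2 ./hybrid_app
```

The MPI side matters too: without a `--map-by ...:PE=n` style option (or its equivalent in other MPI implementations), each rank may be confined to a single core and the OpenMP threads will stack on it regardless of OMP_PLACES.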