hpc

How to use multiple nodes/cores on a cluster with parallelized Python code

大城市里の小女人 submitted on 2019-12-04 11:13:09
I have a piece of Python code where I use joblib and multiprocessing to make parts of the code run in parallel. I have no trouble running this on my desktop, where Task Manager shows it using all four cores and running the code in parallel. I recently learnt that I have access to an HPC cluster with 100+ 20-core nodes. The cluster uses SLURM as the workload manager. The first question is: is it possible to run parallelized Python code on a cluster? If it is possible, does the Python code I have need to be changed at all to run on the cluster, and what #SBATCH instructions need to be …
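A minimal sketch of the usual pattern, assuming a single-node allocation (joblib/multiprocessing only scale within one node); SLURM_CPUS_PER_TASK is a standard SLURM environment variable, while the square function and the values below are hypothetical placeholders:

import os
from joblib import Parallel, delayed

def square(x):
    return x * x

# Size the worker pool from the SLURM allocation instead of hard-coding it;
# fall back to 1 when running outside a job (e.g., interactively).
n_workers = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

results = Parallel(n_jobs=n_workers)(delayed(square)(i) for i in range(100))
print(results[:5])

A matching batch script would then request the resources, e.g. #SBATCH --nodes=1 and #SBATCH --cpus-per-task=20 followed by python script.py. Spanning multiple nodes would need a different tool (mpi4py, Dask, etc.) rather than joblib alone.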

Unable to use all cores with mpirun

北战南征 submitted on 2019-12-04 09:07:19
I'm testing a simple MPI program on my desktop (Ubuntu 16.04 LTS / Intel® Core™ i3-6100U CPU @ 2.30GHz × 4 / gcc 4.8.5 / OpenMPI 3.0.0), and mpirun won't let me use all of the cores on my machine (4). When I run $ mpirun -n 4 ./test2 I get the following error:

There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application: ./test2
Either request fewer slots for your application, or make more slots
available for use.
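For reference, Open MPI treats physical cores as slots by default, and the i3-6100U is a 2-core/4-thread part, so only 2 slots are detected; the usual workarounds are mpirun --oversubscribe -n 4 ./test2, mpirun --use-hwthread-cpus -n 4 ./test2, or a hostfile with slots=4 (flag availability depends on the Open MPI version). A hypothetical mpi4py stand-in for the test program, handy for checking how many ranks actually start:

# test2.py -- hypothetical mpi4py equivalent of the C test program
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()}")

Running mpirun --oversubscribe -n 4 python test2.py should then print four lines, one per rank.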

How to ask GCC to completely unroll this loop (i.e., peel this loop)?

十年热恋 submitted on 2019-12-04 01:00:02
Question: Is there a way to instruct GCC (I'm using 4.8.4) to unroll the while loop in the bottom function completely, i.e., peel this loop? The number of iterations of the loop is known at compilation time: 58. Let me first explain what I have tried. By checking the GAS output of gcc -fpic -O2 -S GEPDOT.c I see that 12 registers, XMM0-XMM11, are used. If I pass the flag -funroll-loops to gcc: gcc -fpic -O2 -funroll-loops -S GEPDOT.c the loop is only unrolled twice. I checked the GCC optimization options. GCC says …
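One lever from that era of GCC is the pair of --param limits that bound complete peeling (max-completely-peel-times and max-completely-peeled-insns are real GCC parameters, though their defaults vary by version; GEPDOT.c and counting vmulpd as a once-per-iteration marker are assumptions carried over from the question). A hedged sketch of a small driver that recompiles with raised limits and checks how far the loop actually unrolled:

import subprocess

# Raise GCC's complete-peeling limits and regenerate the assembly.
# max-completely-peel-times caps the trip count GCC will fully peel;
# max-completely-peeled-insns caps the resulting instruction count.
cmd = [
    "gcc", "-fpic", "-O2", "-funroll-loops",
    "--param", "max-completely-peel-times=64",
    "--param", "max-completely-peeled-insns=4000",
    "-S", "GEPDOT.c", "-o", "GEPDOT.s",
]
subprocess.run(cmd, check=True)

# Rough proxy for the unroll factor: count an instruction that appears
# once per iteration (vmulpd here is an assumption about GEPDOT's body).
with open("GEPDOT.s") as f:
    print("vmulpd count:", f.read().count("vmulpd"))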

Use slurm job id

我是研究僧i submitted on 2019-12-03 18:39:53
Question: When I launch a computation on the cluster, I usually have a separate program doing the post-processing at the end:

sbatch simulation
sbatch --dependency=afterok:JOBIDHERE postprocessing

I want to avoid mistyping and have the right job id inserted automatically. Any ideas? Thanks

Answer 1: You can do something like this: RES=$(sbatch simulation) && sbatch --dependency=afterok:${RES##* } postprocessing The RES variable will hold the result of the sbatch command, something like Submitted batch job …
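For anyone driving this from Python instead of the shell, a hedged sketch of the same idea (it relies on sbatch printing "Submitted batch job <id>", as the answer describes; the script names come from the question):

import subprocess

def submit(*args):
    # sbatch prints "Submitted batch job <id>"; the job id is the last token.
    out = subprocess.run(["sbatch", *args], check=True,
                         capture_output=True, text=True).stdout
    return out.strip().split()[-1]

job_id = submit("simulation")
submit(f"--dependency=afterok:{job_id}", "postprocessing")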

How to avoid TLB miss (and high Global Memory Replay Overhead) in CUDA GPUs?

筅森魡賤 submitted on 2019-12-03 17:33:17
The title might be more specific than my actual problem is, although I believe answering this question would solve a more general problem, which is: how to decrease the effect of the high latency (~700 cycles) that comes from random (but coalesced) global memory access in GPUs. In general, if one accesses global memory with coalesced loads (e.g., reading 128 consecutive bytes) but with a very large distance (256KB-64MB) between coalesced accesses, one gets a high TLB (Translation Lookaside Buffer) miss rate. This high TLB miss rate is due to the limited number (~512) and size (~4KB) of the memory …
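A back-of-the-envelope sketch of why the miss rate climbs, using the ~512-entry/~4KB figures quoted above (both are assumptions about one particular GPU; real GPU TLBs are multi-level and page sizes vary by architecture): every 128-byte load that lands 256KB or more away from the previous one touches a fresh page, so the page working set quickly exceeds what the TLB covers.

# Distinct pages touched by N coalesced 128-byte loads at a fixed stride,
# versus the TLB reach, using the figures quoted in the question.
PAGE_SIZE = 4 * 1024      # assumed page size (~4KB)
TLB_ENTRIES = 512         # assumed TLB capacity (~512 entries)
STRIDE = 256 * 1024       # distance between consecutive coalesced accesses
N_LOADS = 4096

pages = {(i * STRIDE) // PAGE_SIZE for i in range(N_LOADS)}
print("distinct pages touched:", len(pages))
print("TLB reach:", TLB_ENTRIES * PAGE_SIZE // 1024, "KB")
print("access footprint:", N_LOADS * STRIDE // (1024 * 1024), "MB")

Under these assumptions every load lands on its own page while the TLB can only map about 2MB at a time, which is why grouping accesses that share a page (sorting or tiling the index stream) is the usual mitigation.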

hadoop/yarn and task parallelization on non-hdfs filesystems

我们两清 submitted on 2019-12-03 13:50:12
I've set up a Hadoop 2.4.1 cluster and found that running MapReduce applications will parallelize differently depending on what kind of filesystem the input data is on. Using HDFS, a MapReduce job will spawn enough containers to maximize use of all available memory. For example, on a 3-node cluster with 172GB of memory, with each map task allocating 2GB, about 86 application containers will be created. On a filesystem that isn't HDFS (like NFS or, in my use case, a parallel filesystem), a MapReduce job will only allocate a subset of the available tasks (e.g., with the same 3-node cluster, …
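The map task count is driven by input splits, and in Hadoop 2.x FileInputFormat computes the split size as max(minSize, min(maxSize, blockSize)); HDFS reports dfs.blocksize, whereas a local or NFS mount falls back to fs.local.block.size, so the split math changes with the filesystem. A sketch of that formula (the concrete sizes below are illustrative, not measurements; pinning mapreduce.input.fileinputformat.split.minsize / split.maxsize is the usual way to take back control of the task count):

def split_size(min_size, max_size, block_size):
    # FileInputFormat.computeSplitSize: max(minSize, min(maxSize, blockSize))
    return max(min_size, min(max_size, block_size))

file_size = 64 * 1024**3                              # hypothetical 64GB input
for label, block in [("HDFS 128MB block", 128 * 1024**2),
                     ("local-fs 32MB block", 32 * 1024**2)]:
    n_splits = -(-file_size // split_size(1, 2**63 - 1, block))  # ceiling
    print(label, "->", n_splits, "map tasks")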

Tips and tricks on improving Fortran code performance [closed]

不羁岁月 submitted on 2019-12-03 13:44:47
As part of my Ph.D. research, I am working on the development of numerical models of atmosphere and ocean circulation. These involve numerically solving systems of PDEs on the order of ~10^6 grid points over ~10^4 time steps. Thus, a typical model simulation …