Specifying SLURM Resources When Executing Multiple Jobs in Parallel

Submitted by 。_饼干妹妹 on 2019-12-08 06:26:01

Question


According to the answers at What does the --ntasks or -n tasks do in SLURM?, one can run multiple job steps in parallel by using the ntasks parameter for sbatch together with srun. As a follow-up question: how would one specify the amount of memory needed when running job steps in parallel like this?
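For reference, the pattern from the linked answer looks roughly like the following batch script (job names are placeholders; this is a sketch that requires a Slurm cluster to run):

```
#!/bin/bash
#SBATCH --ntasks=3            # allocate room for three parallel job steps

# Launch one job step per task in the background; --exclusive keeps
# the steps from being scheduled onto the same allocated CPUs.
srun --ntasks=1 --exclusive ./job1 &
srun --ntasks=1 --exclusive ./job2 &
srun --ntasks=1 --exclusive ./job3 &
wait                          # block until all steps have finished
```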

If, say, 3 jobs are running in parallel, each needing 8G of memory, would one specify 24G of memory in sbatch (i.e. the sum of the memory across all jobs), or omit the memory parameter from sbatch and instead specify 8G of memory for each srun?


Answer 1:


You need to specify the memory requirement in the script submitted with sbatch; otherwise you will end up with the default memory allocation, which might not correspond to your needs. If you then request 8GB of memory in each srun call, no job steps will be able to start if the default allocation is lower than 8GB, and only one or two steps will run in parallel if the default allocation is between 16 and 24GB.

You can request --mem=24GB, but that offers less flexibility than specifying --mem-per-cpu=8G.
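Putting the answer together, the submission script might look like the sketch below, with the memory requirement declared at the sbatch level via --mem-per-cpu (job names remain placeholders; again, this needs a Slurm cluster to actually run):

```
#!/bin/bash
#SBATCH --ntasks=3
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=8G      # 8GB per allocated CPU -> 24GB total here

srun --ntasks=1 --exclusive ./job1 &
srun --ntasks=1 --exclusive ./job2 &
srun --ntasks=1 --exclusive ./job3 &
wait
```

With --mem-per-cpu, each job step inherits a memory limit proportional to the CPUs it uses, so changing --ntasks scales the total request automatically, which is the flexibility the answer alludes to compared with a fixed --mem=24GB.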



Source: https://stackoverflow.com/questions/53952488/specifying-slurm-resources-when-executing-multiple-jobs-in-parallel
