sbatch

SLURM sbatch job array for the same script but with different input string arguments run in parallel

和自甴很熟 submitted on 2020-07-05 12:34:51
Question: My question is similar to this one; the difference is that my arguments are not numbers but strings. If I have a script (myscript.R) that takes two strings as arguments, "text-a" and "text-A", my shell script for sbatch would be:

#!/bin/bash
#SBATCH -n 1
#SBATCH -c 12
#SBATCH -t 120:00:00
#SBATCH --partition=main
#SBATCH --export=ALL
srun ./myscript.R "text-a" "text-A"

Now I have a few different input strings that I'd like to run with: first <- c("text-a","text-b","text-c","text
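One common way to handle string arguments in a job array is to index two bash arrays with SLURM_ARRAY_TASK_ID. A minimal sketch, with illustrative list contents, the srun call shown as a comment, and a dry-run echo in its place:

```shell
#!/bin/bash
#SBATCH -n 1
#SBATCH -c 12
#SBATCH --array=0-3          # one array task per string pair (4 pairs assumed)

# The two argument lists, indexed by the array task ID (contents illustrative):
first=("text-a" "text-b" "text-c" "text-d")
second=("text-A" "text-B" "text-C" "text-D")

i=${SLURM_ARRAY_TASK_ID:-0}  # Slurm sets this per task; 0 is a dry-run fallback
arg1=${first[$i]}
arg2=${second[$i]}
echo "would run: ./myscript.R $arg1 $arg2"
# Inside a real job: srun ./myscript.R "$arg1" "$arg2"
```

Bash arrays are zero-based, so `--array=0-3` lines up directly with the list indices.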

How to let SBATCH send stdout via email?

天涯浪子 submitted on 2020-02-24 05:29:58
Question: I would like the Slurm system to send myprogram's output via email when the computation is done, so I wrote the sbatch script as follows:

#!/bin/bash -l
#SBATCH -J MyModel
#SBATCH -n 1            # Number of cores
#SBATCH -t 1-00:00      # Runtime in D-HH:MM
#SBATCH -o JOB%j.out    # File to which STDOUT will be written
#SBATCH -e JOB%j.err    # File to which STDERR will be written
#SBATCH --mail-type=END
#SBATCH --mail-user=my@email.com
echo $SLURM_JOB_ID
echo $SLURM_JOB_NAME
/usr/bin/mpirun -np 1 ./myprogram
/usr/bin
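Slurm's --mail-type=END only sends a notification; it does not attach the job's stdout. A common workaround is to mail the output file yourself at the end of the script. A sketch, assuming a `mail`/`mailx` command exists on the compute node, with the real program run replaced by a stand-in echo:

```shell
#!/bin/bash -l
#SBATCH -J MyModel
#SBATCH -o JOB%j.out
#SBATCH --mail-type=END
#SBATCH --mail-user=my@email.com

# --mail-type=END only notifies; to get the output itself, mail the file:
outfile="JOB${SLURM_JOB_ID:-demo}.out"
echo "program output" > "$outfile"   # stand-in for the real mpirun step
body=$(cat "$outfile")
echo "would mail: $body"
# Real call: mail -s "Job ${SLURM_JOB_ID:-demo} done" my@email.com < "$outfile"
rm -f "$outfile"
```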

slurm: How to submit a job under another user and prevent to read other users' files?

試著忘記壹切 submitted on 2020-01-23 09:53:05
Question: Based on the following thread, I am trying to submit a job as another user. I am logged in as main_user, and Slurm jobs submitted as main_user can run rm -rf /home/main_user, which is pretty dangerous. To prevent this, I want to run a job with another user's permissions inside main_user's directory. I think that if I manage to submit the job through a newly created user, that user will have no permission to alter any of my files, except the folder that the user is
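The isolation described above is really a filesystem-permissions question: close main_user's tree to others and open exactly one shared folder to the job account. A permission-bit sketch; the sudo/sbatch line is a comment because it needs root and a second account, and all directory names are illustrative:

```shell
#!/bin/bash
# Close the private area, open only one drop-box folder for the job user.
workdir=$(mktemp -d)
mkdir "$workdir/shared"
chmod 700 "$workdir"          # main_user's private area: no access for others
chmod 777 "$workdir/shared"   # the only folder the restricted user may touch
perm=$(stat -c %a "$workdir/shared" 2>/dev/null || stat -f %Lp "$workdir/shared")
echo "shared folder mode: $perm"
# Submission as the restricted account could then look like:
#   sudo -u jobuser sbatch --chdir="$workdir/shared" job.sh
rm -rf "$workdir"
```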

What happens if I am running more subjobs than the number of core allocated

*爱你&永不变心* submitted on 2019-12-25 14:26:58
Question: I have an sbatch (Slurm job scheduler) script in which I process a lot of data through three scripts: foo1.sh, foo2.sh, and foo3.sh. foo1.sh and foo2.sh are independent, and I want to run them simultaneously. foo3.sh needs the outputs of foo1.sh and foo2.sh, so I am building a dependency. Then I have to repeat this 30 times. Let's say:

## Resources config
#SBATCH --ntasks=30
#SBATCH --task-per-core=1
for i in {1..30}; do
    srun -n 1 --jobid=foo1_$i ./foo1.sh &
    srun -n 1 --jobid=foo2_$i ./foo2
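The foo1/foo2 → foo3 dependency inside one allocation is usually expressed with backgrounded steps and `wait`. A sketch of the control flow using plain background processes, with srun left out so it runs anywhere; file names and contents are illustrative stand-ins:

```shell
#!/bin/bash
# Run two independent steps in parallel, then a third that needs both.
run_pair () {
    local i=$1
    ( echo "foo1 $i" > "out1.$i" ) &      # stand-in for: srun -n 1 ./foo1.sh &
    ( echo "foo2 $i" > "out2.$i" ) &      # stand-in for: srun -n 1 ./foo2.sh &
    wait                                  # foo3 must not start before both finish
    cat "out1.$i" "out2.$i" > "out3.$i"   # stand-in for: srun -n 1 ./foo3.sh
}
run_pair 1
result=$(cat out3.1)
rm -f out1.1 out2.1 out3.1
echo "$result"
```

With more backgrounded srun steps than allocated tasks, Slurm queues the extra steps inside the allocation rather than oversubscribing cores.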

starting slurm array job with a specified number of nodes

北战南征 submitted on 2019-12-25 04:12:12
问题 I’m trying to align 168 sequence files on our HPC using slurm version 14.03.0. I’m only allowed to use a maximum of 9 compute nodes at once to keep some nodes open for other people. I changed the file names so I could use the array function in sbatch. The sequence files look like this: Sequence1.fastq.gz, Sequence2.fastq.gz, … Sequence168.fastq.gz I can’t seem to figure out how to tell it to run all 168 files, 9 at a time. I can get it to run all 168 files, but it uses all the available nodes
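Limiting array concurrency is what the `%` throttle in --array is for: `--array=1-168%9` runs 168 tasks with at most 9 active at once. A sketch with the aligner call left as a comment (note: this throttle syntax may not exist in a release as old as 14.03, so it's worth checking `man sbatch` on the cluster first):

```shell
#!/bin/bash
#SBATCH --array=1-168%9   # 168 tasks, at most 9 running at any one time
#SBATCH -N 1

i=${SLURM_ARRAY_TASK_ID:-1}        # Slurm sets this per task; 1 is a dry-run fallback
seqfile="Sequence${i}.fastq.gz"
echo "would align: $seqfile"
# Real step would be the aligner call, e.g. srun <aligner> ref "$seqfile"
```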

SLURM Submit multiple tasks per node?

房东的猫 submitted on 2019-12-22 06:16:15
Question: I found some very similar questions that helped me arrive at a script which seems to work, but I'm still unsure whether I fully understand why, hence this question. My problem (example): on 3 nodes, I want to run 12 tasks on each node (so 36 tasks in total). Each task uses OpenMP and should use 2 CPUs. In my case a node has 24 CPUs and 64 GB of memory. My script would be:

#SBATCH --nodes=3
#SBATCH --ntasks=36
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=2000
export OMP_NUM_THREADS=2
for i
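The request above works because the arithmetic closes exactly: 36 tasks over 3 nodes is 12 tasks per node, and 12 tasks × 2 CPUs fills a 24-CPU node. A sketch that spells out that arithmetic, with the launch loop shown as a comment since it needs Slurm:

```shell
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --ntasks=36
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=2000
export OMP_NUM_THREADS=2

# Sanity check of the arithmetic behind the request (24-CPU nodes assumed):
nodes=3; ntasks=36; cpus_per_task=2; cpus_per_node=24
tasks_per_node=$(( ntasks / nodes ))                      # 12 tasks on each node
cpus_used_per_node=$(( tasks_per_node * cpus_per_task ))  # 24: a full node
echo "$tasks_per_node tasks/node, $cpus_used_per_node/$cpus_per_node CPUs used"

# Launch pattern (commented out; needs Slurm): one srun per task, then wait:
#   for i in $(seq 1 36); do srun -n 1 -c 2 ./mytask $i & done; wait
```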

How does one make sure that the python submission script in slurm is in the location from where the sbatch command was given?

可紊 submitted on 2019-12-12 18:45:09
Question: I have a Python submission script that I run with sbatch using Slurm: sbatch batch.py. When I do this, things do not work properly because, I assume, the batch.py process does not inherit the right environment variables. Instead of running batch.py from where the sbatch command was issued, it is run from somewhere else (/, I believe). I have managed to fix this by wrapping the python script in a bash script:

#!/usr/bin/env bash
cd path/to/scripts
python script.py

This temporary hack
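Instead of hard-coding the path, the wrapper can use SLURM_SUBMIT_DIR, which Slurm exports to every batch job as the directory sbatch was invoked from. A sketch; the fallback to $PWD is only so the snippet also runs outside Slurm, and the python line is a comment:

```shell
#!/usr/bin/env bash
# Change into the directory sbatch was invoked from before running anything.
cd "${SLURM_SUBMIT_DIR:-$PWD}" || exit 1
here=$(pwd)
echo "running from: $here"
# python script.py   # relative paths now resolve as they did at submit time
```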