How to determine at which point in a Python script the step memory limit was exceeded in SLURM
Question: I have a Python script that I am running on a SLURM cluster for multiple input files:

```bash
#!/bin/bash
#SBATCH -p standard
#SBATCH -A overall
#SBATCH --time=12:00:00
#SBATCH --output=normalize_%A.out
#SBATCH --error=normalize_%A.err
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=240000

HDF5_DIR=...
OUTPUT_DIR=...
NORM_SCRIPT=...

norm_func () {
    local file=$1
    echo "$file"
    python $NORM_SCRIPT -data $file -path $OUTPUT_DIR
}

# Doing normalization in parallel
for file in
```
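Since the question is about locating the point in the Python script where memory blows up, one minimal sketch is to instrument the script itself with memory checkpoints that print into the SLURM output file. The `log_mem` helper and the checkpoint labels below are hypothetical additions for illustration, not part of the original normalization script; it assumes a Linux node, where `ru_maxrss` is reported in kilobytes.

```python
import resource

def log_mem(label):
    """Print peak resident memory (ru_maxrss) so far, tagged with a label.

    Hypothetical helper: on Linux, ru_maxrss is in kilobytes. flush=True
    matters because buffered output may be lost if SLURM kills the job.
    """
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"[mem] {label}: peak RSS ~ {peak_kb / 1024:.1f} MiB", flush=True)

log_mem("start")
# ... load the HDF5 file ...
log_mem("after load")
# ... normalization step ...
log_mem("after normalization")
```

With checkpoints like these, the last `[mem]` line that appears in `normalize_%A.out` before the job is killed indicates roughly which step pushed the process past the limit.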