qsub

PBS jobs stay queued ('Q' state) but run with qrun

Submitted by 可紊 on 2021-02-10 20:01:35
Question: On my full local Torque installation (torque-6.1.1), all my submitted jobs are stuck in the 'Q' state, and I have to force their execution using qrun.

    >qstat -f 141
    Job Id: 141.localhost
        Job_Name = script.pbs
        Job_Owner = michael@localhost
        job_state = Q
        queue = batch
        server = localhost
        Checkpoint = u
        ctime = Wed Aug 23 16:45:25 2017
        Error_Path = localhost:/var/spool/torque/script.pbs.e141
        Hold_Types = n
        Join_Path = n
        Keep_Files = n
        Mail_Points = bae
        mtime = Wed Aug 23 16:45:25 2017
        Output_Path =
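A job that never leaves 'Q' on its own but starts under qrun usually points at the scheduler rather than the server or the MOMs, since qrun bypasses the scheduler entirely. A minimal diagnostic sketch, assuming a stock Torque setup that uses the bundled pbs_sched (daemon and service names may differ if you run Maui/Moab or a packaged build):

    # Is the scheduler daemon running at all? qrun works because it skips
    # pbs_sched, so a dead scheduler leaves every job sitting in 'Q'.
    pgrep -l pbs_sched || sudo pbs_sched

    # Is scheduling enabled on the server?
    qmgr -c 'print server' | grep scheduling
    sudo qmgr -c 'set server scheduling = True'

    # Are the compute nodes up, free, and reporting resources?
    pbsnodes -a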

Ensuring one Job Per Node on StarCluster / SunGridEngine (SGE)

Submitted by 可紊 on 2021-02-09 02:57:51
Question: When qsub-ing jobs on a StarCluster / SGE cluster, is there an easy way to ensure that each node receives at most one job at a time? I am having issues where multiple jobs end up on the same node, leading to out-of-memory (OOM) problems. I tried using -l cpu=8, but I think that does not check the number of USED cores, just the number of cores on the box itself. I also tried -l slots=8, but then I get:

    Unable to run job: "job" denied: use parallel environments instead of requesting slots explicitly.
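The error message itself hints at the usual SGE approach (not included in this excerpt): request all of a node's slots through a parallel environment whose allocation rule keeps those slots on one host, so nothing else can be scheduled beside the job. A hedged sketch, assuming 8-slot nodes, a queue named all.q, and a parallel environment called onenode that you create yourself (all three names are placeholders):

    # one-time admin setup: create a PE whose slots are packed onto a
    # single host, then attach it to the queue
    qconf -ap onenode          # in the editor, set: slots 999, allocation_rule $pe_slots
    qconf -aattr queue pe_list onenode all.q

    # per job: claim all 8 slots of one node so the scheduler cannot
    # co-locate another job on the same host
    qsub -pe onenode 8 script.sh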

Running qsub with anaconda environment

Submitted by 廉价感情. on 2021-02-08 15:34:40
Question: I have a program that I usually run inside a conda environment on Linux, because I use conda to manage my libraries, with these instructions:

    source activate my_environment
    python hello_world.py

How can I run hello_world.py on an HPC system that works with PBS? The instructions explain how to run by adapting the script script.sh, shown below, and submitting it with qsub.

    # script.sh
    #!/bin/sh
    #PBS -S /bin/sh
    #PBS -N job_example
    #PBS -l select=24
    #PBS -j oe
    cd $PBS_O_WORKDIR
    mpiexec ./programa_mpi
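A minimal sketch of that adaptation, assuming conda is installed under $HOME/miniconda3 and the environment is named my_environment (the install path and the resource requests are placeholders, not taken from the original post):

    #!/bin/sh
    #PBS -S /bin/sh
    #PBS -N hello_world
    #PBS -j oe
    cd $PBS_O_WORKDIR
    # make 'conda activate' available in the non-interactive batch shell,
    # then switch environments and run the script
    . $HOME/miniconda3/etc/profile.d/conda.sh
    conda activate my_environment
    python hello_world.py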

Does qsub pass command line arguments to my script?

Submitted by 有些话、适合烂在心里 on 2021-01-20 19:11:19
Question: When I submit a job using qsub script.sh, is $@ set to some value inside script.sh? That is, are there any command line arguments passed to script.sh?

Answer 1: You can pass arguments to the job script using the -F option of qsub:

    qsub -F "args to script" script.sh

or inside script.sh:

    #PBS -F arguments

This is documented here.

Answer 2: On my platform -F is not available. As a substitute, -v helped:

    qsub -v "var=value" script.csh

and then use the variable var in your script. See also the
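Combining the two answers, a short illustration of both mechanisms (the script contents, variable name, and argument values are made up for the example); -F passes real positional arguments where it is supported, while -v only exports environment variables:

    # script.sh
    #!/bin/sh
    #PBS -N argdemo
    echo "positional args: $@"      # filled in only when submitted with -F
    echo "environment var: $MYVAR"  # filled in only when submitted with -v

    # where -F is supported (Torque): pass positional arguments
    qsub -F "first second" script.sh

    # platforms without -F: pass values as environment variables instead
    qsub -v MYVAR=hello script.sh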

os.system vs subprocess in python on linux

Submitted by 倾然丶 夕夏残阳落幕 on 2020-03-18 05:57:06
Question: I have two Python scripts. The first script launches a set of copies of a second script, in which I need to execute a third-party Python script. It looks something like this:

    # the call from the first script
    cmd = "qsub -sync y -b y -cwd -V -q long -t 1-10 -tc 5 -N 'script_two' ./script2.py"
    script2thread = pexpect.spawn(cmd)
    # end of script 1

So here I am sending 10 jobs out to the queue. In script 2 I have a case statement based on the task_id. In each one I make a similar call to the third party script
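For context on the submission (not spelled out in the excerpt): with -t 1-10, SGE starts ten array tasks, and each task sees its own index in the SGE_TASK_ID environment variable; that index is what the "case statement based on the task_id" in script 2 keys on. A hedged shell illustration, with third_party.py standing in for the third-party call:

    #!/bin/sh
    # dispatch on the array-task index that SGE exports to each task
    case "$SGE_TASK_ID" in
        1) python third_party.py --mode one ;;   # placeholder invocation
        2) python third_party.py --mode two ;;
        *) echo "task $SGE_TASK_ID: nothing to do" ;;
    esac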

What is 'Gbytes seconds'?

Submitted by 六眼飞鱼酱① on 2020-02-24 04:20:30
Question: From the qstat (Sun Grid Engine) man page:

    mem: The current accumulated memory usage of the job in Gbytes seconds.

What does that mean?

Answer 1: I couldn't find better documentation than the man page where that description appears. I think 1 Gbyte-second is 1 Gbyte of memory used for one second. So if your code uses 1 GB for 1 minute and then 2 GB for two minutes, the accumulated memory usage is 1*60 + 2*120 = 300 Gbyte-seconds.

Answer 2: The Gigabyte-second unit specifies the amount of memory
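The same arithmetic can be approximated by sampling a running process once per second and summing its memory in GB; this is only an illustration of the unit, not how SGE itself accounts the value:

    #!/bin/sh
    # approximate a process's accumulated memory usage in Gbyte-seconds
    # by adding its resident set size (in GB) once per second
    pid=$1
    total=0
    while kill -0 "$pid" 2>/dev/null; do
        rss_kb=$(ps -o rss= -p "$pid")
        [ -n "$rss_kb" ] || break
        total=$(echo "$total + $rss_kb / 1048576" | bc -l)   # KB -> GB for one 1 s sample
        sleep 1
    done
    echo "approx accumulated memory: $total Gbyte-seconds"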