pbs

How to return to bash prompt after printing output from backgrounded function?

Submitted by ≡放荡痞女 on 2019-12-18 10:56:15
Question: How can I return to my bash prompt automatically after printing output from a function that was put in the background? For example, when I run the following script in a bash shell:

    fn(){
        sleep 10
        echo "Done"
        exit
    }
    fn &

Running the script immediately returns my prompt. After 10 seconds it prints "Done" and then leaves a blinking cursor on a new line:

    $ Done
    ▏

The script isn't running anymore, but I don't get my prompt back until I press Return. Is there any way to force a return
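Answer sketch (not from the thread): there is no portable way for a background job to force bash to repaint its prompt, but the job can re-print the prompt string itself after its output, so the terminal does not look stuck. A minimal sketch, with a short sleep standing in for the real work:

```shell
fn() {
    sleep 1                 # stand-in for the real work (the question uses 10s)
    echo "Done"
    printf '%s' "$PS1"      # re-print the prompt string after our output
}
fn &                        # background the function; the prompt returns at once
wait                        # needed in a script only; omit interactively
```

Interactively, `$PS1` holds the prompt, so the cursor lands after a fresh prompt line instead of dangling below "Done".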

Does a PBS batch system move multiple serial jobs across nodes?

Submitted by 给你一囗甜甜゛ on 2019-12-12 10:56:47
Question: If I need to run many serial programs "in parallel" (the problem is simple but time-consuming: I need to read many different data sets into the same program), the solution is easy if I only use one node. All I do is submit the serial jobs with an ampersand after each command in the job script, e.g.:

    ./program1 &
    ./program2 &
    ./program3 &
    ./program4

which naturally runs each serial program on a different processor. This works well on a login server or standalone
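To get the same pattern across several nodes, the job script needs a launcher that reaches the other nodes. A hedged sketch using TORQUE's pbsdsh (program names and node counts are illustrative, not from the question):

```
#!/bin/bash
#PBS -l nodes=2:ppn=4
#PBS -l walltime=01:00:00
# pbsdsh runs one copy of the command on every allocated core; each copy
# can pick its own program or data set from its task number, PBS_VNODENUM.
pbsdsh bash -c 'cd "$PBS_O_WORKDIR" && "./program$((PBS_VNODENUM + 1))"'
```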

Naive parallelization in a .pbs file

Submitted by 混江龙づ霸主 on 2019-12-11 11:22:50
Question: Is it possible to parallelize across a for loop in a PBS file? Below is my attempt.pbs file. I would like to allocate 4 nodes and 16 processes per node. I have successfully done this, but now I have 4 jobs and I would like to send one job to each node. (I need to do this because the queuing algorithm would make me wait a few days to submit 4 separate jobs on the cluster I'm using.)

    #!/bin/bash
    #PBS -q normal
    #PBS -l nodes=4:ppn=16:native
    #PBS -l walltime=10:00:00
    #PBS -N
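One hedged way to place exactly one task on each physical node is pbsdsh -n, which targets a specific vnode number; with ppn=16, the first core of each of the 4 nodes is typically vnode 0, 16, 32, and 48 (the numbering and the script name are assumptions):

```
#!/bin/bash
#PBS -q normal
#PBS -l nodes=4:ppn=16:native
#PBS -l walltime=10:00:00
#PBS -N attempt
# launch one task on the first core of each node, then wait for all four
for n in 0 16 32 48; do
    pbsdsh -n "$n" bash -c 'cd "$PBS_O_WORKDIR" && ./myjob.sh "$PBS_VNODENUM"' &
done
wait    # keep the job alive until every task finishes
```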

Redirect output of my java program under qsub

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-11 06:05:49
Question: I am currently running multiple Java executables using qsub. I wrote two scripts: 1) qsub.sh, 2) run.sh

qsub.sh:

    #! /bin/bash
    echo cd `pwd` \; "$@" | qsub

run.sh:

    #! /bin/bash
    for param in 1 2 3
    do
        ./qsub.sh java -jar myProgram.jar -param ${param}
    done

Given the two scripts above, I submit jobs with sh run.sh. I want to redirect the messages generated by myProgram.jar -param ${param}, so in run.sh I replaced the 4th line with the following: ./qsub.sh java -jar myProgram.jar -param ${param}
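A hedged fix, given the qsub.sh shown: pass the whole command, redirections included, as one quoted string, so the > is echoed into the generated job script instead of being interpreted by run.sh's own shell (log-file names are illustrative):

```
#!/bin/bash
# run.sh variant: the quotes keep the redirections inside the submitted job
for param in 1 2 3
do
    ./qsub.sh "java -jar myProgram.jar -param ${param} > out.${param}.log 2> err.${param}.log"
done
```

Alternatively, qsub's own -o and -e flags name the job's stdout/stderr files without touching the command line at all.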

Ubuntu + PBS + Apache? How can I show a list of running jobs as a website?

Submitted by 爷,独闯天下 on 2019-12-11 05:07:40
Question: Is there a plugin/package to display status information for a PBS queue? I am currently running an Apache web server on the login node of my PBS cluster. I would like to display status info and be able to perform minimal queries without writing it from scratch (or modifying an age-old Python script, a la jobmonarch). Note: the accepted/bountied solution must work with Ubuntu. Update: In addition to ganglia, as noted below, I also looked at the Rocks Cluster Toolkit, but I firmly want
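If no packaged dashboard fits, the Apache instance already on the login node can serve a bare-bones status page via CGI (a sketch under assumptions: mod_cgi is enabled and the path below is hypothetical):

```
#!/bin/bash
# /usr/lib/cgi-bin/pbs-status.cgi (hypothetical location)
echo "Content-Type: text/plain"
echo
qstat -a        # whole queue; 'qstat -r' would list only running jobs
```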

PBS/TORQUE: how do I submit a parallel job on multiple nodes?

Submitted by 泄露秘密 on 2019-12-11 03:13:54
Question: Right now I'm submitting jobs on a cluster with qsub, but they always seem to run on a single node. I currently run them with:

    #PBS -l walltime=10
    #PBS -l nodes=4:gpus=2
    #PBS -r n
    #PBS -N test

    range_0_total=$(seq 0 $(expr $total - 1))
    for i in $range_0_total
    do
        $PATH_TO_JOB_EXEC/job_executable &
    done
    wait

I would be incredibly grateful if you could tell me whether I'm doing something wrong, or if my test tasks are just too small.

Answer 1: With your approach, you need to have your
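The background loop above runs every task on the first allocated node, because a plain `&` never leaves that node. A hedged sketch that instead spreads one task per node by reading $PBS_NODEFILE (passwordless ssh between compute nodes is assumed):

```
#!/bin/bash
#PBS -l walltime=10
#PBS -l nodes=4:gpus=2
# launch one task on each distinct allocated node, then wait for all of them
for node in $(sort -u "$PBS_NODEFILE"); do
    ssh "$node" "cd '$PBS_O_WORKDIR' && $PATH_TO_JOB_EXEC/job_executable" &
done
wait
```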

how to limit number of concurrently running PBS jobs

Submitted by 放肆的年华 on 2019-12-10 12:48:20
Question: I have a 64-node cluster running PBS Pro. If I submit many hundreds of jobs, I can get 64 running at once. This is great, except when all 64 jobs happen to be nearly I/O-bound and are reading/writing to the same disk. In such cases I'd still like to be able to submit all the jobs, but have at most (say) 10 running at a given time. Is there an incantation to qsub that will let me do this without administrative access to the cluster's PBS server?

Answer 1: In TORQUE you can do
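For reference, TORQUE (though not necessarily PBS Pro) lets an ordinary user cap concurrency with a job-array slot limit, the % suffix on qsub -t. A sketch with illustrative numbers:

```
# 200 array sub-jobs, but at most 10 running at any moment
qsub -t 0-199%10 job_script.sh
```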

PBSPro qsub output error file directed to path with jobid in name

Submitted by 狂风中的少年 on 2019-12-10 11:35:33
Question: I'm using PBSPro and am trying to submit a job with the qsub command line, but I can't seem to get the output and error files named the way I want. Currently using:

    qsub -N ${subjobname_short} \
         -o ${path}.o${PBS_JOBID} -e ${path}.e${PBS_JOBID} ... submission_script.sc

where ${path}=fulljobname (i.e. more than 15 characters). I'm aware that $PBS_JOBID won't be set until after the job is submitted... Any ideas? Thanks

Answer 1: The solution I came up with was following the qsub command with a qalter
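A hedged sketch of that qsub-then-qalter approach: submit the job held, capture the job id that qsub prints, rewrite the output paths to include it, then release the hold:

```
#!/bin/bash
# submit on hold so the job cannot start before the paths are fixed
jobid=$(qsub -h -N "${subjobname_short}" submission_script.sc)
qalter -o "${path}.o${jobid}" -e "${path}.e${jobid}" "${jobid}"
qrls "${jobid}"   # release the user hold
```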

Setup torque/moab cluster to use multiple cores per node with a single loop

Submitted by 允我心安 on 2019-12-10 10:46:47
Question: This is a follow-up to "How to set up doSNOW and SOCK cluster with Torque/MOAB scheduler?" I have a memory-limited script that uses only one foreach loop, but I'd like to get 2 iterations running on node1 and 2 iterations running on node2. The linked question shows how to start a SOCK cluster with one worker per node for the outer loop and then an MC cluster for the inner loop, but I think that doesn't make use of the multiple cores on each node. I get the warning message:

    Warning message:
    closing unused

can torque pbs output error messages to file in real time

Submitted by 假装没事ソ on 2019-12-08 02:17:14
Question: The errors and results are written into *.err (#PBS -e) and *.out (#PBS -o) files after a Torque PBS job finishes. Can Torque PBS write ERROR messages to *.err in real time while jobs are running? Can it write OUTPUT messages to *.out in real time? How do I configure pbs_server, or something else? Thanks.

Answer 1: The way to do this is to set $spool_as_final_name true in the config file for the MOMs. This is located in /mom_priv/config. This is documented here.
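The setting from the answer, as it would appear in each MOM's config file (the spool-directory prefix before mom_priv varies by installation):

```
# mom_priv/config on every compute node; restart pbs_mom afterwards
$spool_as_final_name true
```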