pbs

PBSPro qsub output error file directed to path with jobid in name

Submitted by 房东的猫 on 2019-12-07 19:15:29
Question: I'm using PBSPro and am trying to submit a job from the qsub command line, but I can't seem to get the output and error files named the way I want. Currently using:

    qsub -N ${subjobname_short} \
         -o ${path}.o{$PBS_JOBID} -e ${path}.e${PBS_JOBID} ... submission_script.sc

where ${path} is the full job name (i.e. more than 15 characters). I'm aware that $PBS_JOBID won't be set until after the job is submitted... Any ideas? Thanks.

Answer: The solution I came up with was to follow the qsub command with a qalter command, like so:

    jobid=$(qsub -N ${subjobname_short} submission_script.sc)
    qalter -o ${path}.o{$jobid} -e $…
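As a hedged sketch of that approach, with the {$jobid} braces above read as a typo for ${jobid} and the asker's variable names reused, the complete sequence might look like this (qalter takes the job identifier as its final argument):

    # submit first, capture the job id that qsub prints, then rename the output files
    jobid=$(qsub -N ${subjobname_short} submission_script.sc)
    qalter -o ${path}.o${jobid} -e ${path}.e${jobid} ${jobid}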

can torque pbs output error messages to file in real time

Submitted by 别来无恙 on 2019-12-06 12:00:18
Question: The errors and results are written into the *.err (#PBS -e) and *.out (#PBS -o) files only after the Torque PBS jobs have finished. Can Torque PBS write ERROR messages to *.err in real time while jobs are running? Can it write OUTPUT messages to *.out in real time while jobs are running? How do I configure pbs_server (or something else) to do this? Thanks.

Answer: The way to do this is to set $spool_as_final_name true in the config file for the MOMs. This is located in /mom_priv/config. This is documented here.

Source: https://stackoverflow.com/questions/21251810/can-torque-pbs-output-error-messages-to-file-in-real-time
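A minimal sketch of that change, assuming a Torque install with its spool under /var/spool/torque and a pbs_mom service managed by systemd (adjust the path and service name for your site):

    # on each compute node: write job output to its final destination
    # as the job runs, instead of buffering it in the spool directory
    echo '$spool_as_final_name true' >> /var/spool/torque/mom_priv/config
    # restart the MOM so it rereads its config
    systemctl restart pbs_mom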

Syntax for submitting a qsub job without an actual job file?

Submitted by 吃可爱长大的小学妹 on 2019-12-06 11:47:20
I would like to submit qsub jobs on the fly without creating discrete job files. So, let's say I have a python script called "get_time.py" that simply reports the time. Instead of making a submission script like this:

    cat > job.sub <<eof
    #PBS -l walltime=1:00:00
    cd $PBS_O_WORKDIR
    get_time.py
    eof

...and then submitting the job:

    qsub job.sub

I would like to be able to bypass the file-creation step, and I'd imagine the construct would be something like this:

    qsub -d . -e get_time.py

where -e is my imaginary parameter that tells qsub that the following is code to be sent to the scheduler, instead of…
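One way to get this behaviour without an imaginary flag, as a sketch: it assumes a Torque-style qsub that reads the job script from standard input when no script file is given, and reuses the asker's get_time.py and -d . example.

    # pipe the job body straight into qsub; resource requests go on the command line
    qsub -d . -l walltime=1:00:00 <<'EOF'
    cd $PBS_O_WORKDIR
    ./get_time.py
    EOF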

File can't be found in a small fraction of submitted jobs

Submitted by 。_饼干妹妹 on 2019-12-06 07:20:26
I'm trying to run a very large set of batch jobs on a RHEL5 cluster which uses a Lustre file system. I was getting a strange error with roughly 1% of the jobs: they couldn't find a text file they are all using for steering. A script that reproduces the error looks like this:

    #!/usr/bin/env bash
    #PBS -t 1-18792
    #PBS -l mem=4gb,walltime=30:00
    #PBS -l nodes=1:ppn=1
    #PBS -q hep
    #PBS -o output/fit/out.txt
    #PBS -e output/fit/error.txt
    cd $PBS_O_WORKDIR
    mkdir -p output/fit
    echo 'submitted from: ' $PBS_O_WORKDIR
    files=($(ls ./*.txt | sort))   # <-- NOTE THIS LINE
    cat batch/fits/fit-paths.txt

For some…
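The excerpt cuts off before the resolution. A generic workaround, offered only as a sketch and not as the accepted answer, is to retry the listing a few times so a transient glitch on the shared filesystem doesn't kill the whole array task:

    # retry the directory listing before giving up on the marked line
    files=()
    for attempt in 1 2 3 4 5; do
        files=($(ls ./*.txt 2>/dev/null | sort))
        [ "${#files[@]}" -gt 0 ] && break
        sleep 10
    done
    if [ "${#files[@]}" -eq 0 ]; then
        echo "no .txt files visible after retries" >&2
        exit 1
    fi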

PBS script -o file to multiple locations

Submitted by 扶醉桌前 on 2019-12-05 16:46:49
Sometimes when I run jobs on a PBS cluster, I'd really like the job log (-o file) in two places: one in $PBS_O_WORKDIR for keeping everything together, and one in ${HOME}/jobOuts/ for grepping/awking/etc... Doing a test from the command line works with tee:

    echo "hello" | qsub -o `tee $HOME/out1.o $HOME/out2.o $HOME/out3.o`

But once I put this in a PBS script and qsub it, it does not work:

    ####Parameterized PBS Script ####
    #PBS -S /bin/bash
    #PBS -l nodes=1
    #PBS -l walltime=0:01:00
    #PBS -j oe
    #PBS -o `tee TEE_TEST.o TEE_TEST.${PBS_JOBID}.o`
    #PBS -M me@email.com
    …
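The excerpt stops here, but note that #PBS directive lines are parsed by qsub, not run through a shell, so the backtick substitution in the -o directive will never execute. A hedged sketch of one alternative, not taken from the excerpt: keep a single -o target and duplicate the stream from inside the script itself (TEE_TEST.o and the jobOuts path are placeholders):

    #!/bin/bash
    #PBS -j oe
    #PBS -o TEE_TEST.o
    # mirror everything the script writes into a second log while the job runs
    mkdir -p "${HOME}/jobOuts"
    exec > >(tee -a "${HOME}/jobOuts/${PBS_JOBID}.o") 2>&1
    echo "hello"   # both copies receive this line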

Loading shared library in open-mpi/ mpi-run

Submitted by 心已入冬 on 2019-12-04 07:32:54
Question: I'm trying to run my program with mpirun under the Torque scheduler. Although in my PBS file I set the library path with

    export LD_LIBRARY_PATH=/path/to/library

it still gives the error: "error while loading shared libraries: libarmadillo.so.3: cannot open shared object file: No such file or directory". I guess the problem is that LD_LIBRARY_PATH is not set on all the nodes. How can I make it work?

Answer: LD_LIBRARY_PATH is not exported automatically to MPI processes spawned by mpirun. You should use mpirun -x LD_LIBRARY_PATH ... to push the value of LD_LIBRARY_PATH. Also make sure that the specified path…
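A short sketch of that fix inside the PBS job script; the library path, rank count, and binary name are placeholders:

    # make the Armadillo library findable in the launching shell...
    export LD_LIBRARY_PATH=/path/to/library:$LD_LIBRARY_PATH
    # ...and have Open MPI forward the variable to every spawned rank
    mpirun -x LD_LIBRARY_PATH -np 8 ./my_program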

Wait for all jobs of a user to finish before submitting subsequent jobs to a PBS cluster

Submitted by 夙愿已清 on 2019-12-04 07:23:10
I am trying to adjust some bash scripts to make them run on a (PBS) cluster. The individual tasks are performed by several scripts that are started by a main script. So far this main script starts multiple scripts in the background (by appending &), making them run in parallel on one multi-core machine. I want to substitute these calls with qsubs to distribute the load across the cluster nodes. However, some jobs depend on others being finished before they can start. So far, this was achieved by wait statements in the main script. But what is the best way to do this using the grid engine? I already…
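The excerpt ends before an answer, but a common approach, sketched here under the assumption of Torque/PBSPro-style dependency syntax and with made-up script names, is to let the scheduler enforce the ordering via -W depend instead of wait:

    # submit the independent jobs and capture their ids
    jid1=$(qsub step1.pbs)
    jid2=$(qsub step2.pbs)
    # step3 is held until both step1 and step2 have exited successfully
    qsub -W depend=afterok:${jid1}:${jid2} step3.pbs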