qsub

Using Conda environment in SnakeMake on SGE cluster problem

独自空忆成欢 submitted on 2020-01-15 23:02:07
Question: Related: SnakeMake rule with Python script, conda and cluster

I have been trying to set up my SnakeMake pipelines to run on SGE clusters (qsub). Using simple commands or tools that are installed directly on the computational nodes, there is no problem. However, there is a problem when I try to set up SnakeMake to download tools through Conda on SGE nodes. My testing Snakefile is:

    rule bwa_sge_c_test:
        conda:
            "bwa.yaml"
        shell:
            "bwa > snaketest.txt"

The "bwa.yaml" file is:

    channels:
      - bioconda
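For context, a minimal sketch of how such a pipeline is typically launched so that Conda environments are built on the nodes; the queue name, log directory, and job count below are assumptions for illustration, not taken from the question:

    # --use-conda makes Snakemake create and activate the environment
    # described in bwa.yaml on the execution host before running the rule.
    snakemake --use-conda --jobs 10 \
        --cluster "qsub -V -cwd -q all.q -o logs/ -e logs/"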

Initialize MPI cluster using Rmpi

坚强是说给别人听的谎言 submitted on 2020-01-14 06:22:49
Question: Recently I have been trying to make use of the department cluster to do parallel computing in R. The cluster system is managed by SGE. OpenMPI has been installed and has passed the installation test. I submit my jobs to the cluster via the qsub command. In the script, I specify the number of nodes I want to use via the following directive (two nodes with 24 threads each):

    #PBS -l nodes=2:ppn=24

Then:

    mpirun -np 1 R --slave -f test.R

I have checked $PBS_NODEFILE afterwards. Two nodes are allocated as I wish. I
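To double-check the allocation from inside the job, a small hedged sketch (the nodefile is provided by PBS on the execution host):

    # Each allocated slot appears as one line; this counts slots per node.
    sort $PBS_NODEFILE | uniq -c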

excluding nodes from qsub command under sge

半城伤御伤魂 submitted on 2020-01-12 13:52:47
Question: I have more than 200 jobs I need to submit to an SGE cluster. I'll be submitting them into two queues. One of the queues has a machine that I don't want to submit jobs to. How can I exclude that machine? The only thing I found that might be helpful (assuming three valid nodes available to q1, and all the nodes available to q2 are valid) is:

    qsub -q q1.q@n1 q1.q@n2 q1.q@n3 q2.q

Answer 1: Assuming the node you don't want to run it on is called n4, then adding the following to your script should work.

    #$ -l h=
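The answer above is cut off. For illustration only (these lines are assumptions, not quoted from the answer): SGE hard resource requests accept hostname expressions with negation, and multiple queue instances passed to -q must be comma-separated.

    #$ -l h=!n4                                        # request any host except n4
    qsub -q q1.q@n1,q1.q@n2,q1.q@n3,q2.q submit.sh     # submit.sh is a placeholder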

Putting a specified job on hold in SGE

时光怂恿深爱的人放手 submitted on 2020-01-08 21:42:48
During a computation, you may need to pause certain jobs; qalter can be used to put them on "hold". From man qalter:

    -h | -h {u|s|o|n|U|O|S}...
        Available for qsub (only -h), qrsh, qalter and qresub (hold
        state is removed when not set explicitly). List of holds to
        place on a job, a task or some tasks of a job. `u' denotes a
        user hold. `s' denotes a system hold. `o' denotes an operator
        hold. `n' denotes no hold (requires manager privileges). As
        long as any hold other than `n' is assigned to the job the job
        is not eligible for execution. Holds can be released via
        qalter and qrls(1). In case of qalter this is
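A short usage sketch (the job id 12345 is a placeholder):

    qalter -h u 12345     # place a user hold on job 12345
    qrls -h u 12345       # release the user hold so the job can run again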

pbs job no output when busy

妖精的绣舞 submitted on 2020-01-07 05:30:54
Question: I am experiencing a problem with PBS where, of all the jobs I submit, there tends to be a fraction that do not produce any output as they should. I have to resubmit them several times until they have all produced the output. I have also noticed that this is especially bad when other users submit large numbers of jobs. In that case, ALL of my jobs fail to produce the expected output files. I am only a user of PBS, so I don't understand what is going on. If anyone can give some suggestions that'd be
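When debugging cases like this, a couple of commands are often useful (a hedged sketch assuming a Torque/PBS installation; the job id is a placeholder):

    qstat -f 12345      # full job record, including exit status and comments
    tracejob 12345      # pull the job's entries from the server/MOM logs (Torque)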

how to qsub jobs to the cluster from the parent directory for the subdirectories

余生长醉 submitted on 2020-01-06 20:14:11
Question: I have difficulties in submitting my jobs from the parent directory in Linux. Assume that in my parent directory I have 1000 subdirectories named 1, 2, 3, ..., 1000, in each of which there is a submission script submit.sh. Rather than going into each subdirectory and running qsub individually, which of course takes a huge amount of my time, I need to qsub all scripts from the parent directory such that all calculations and outputs will be dumped out in the corresponding subdirectory. Is there any way to do
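One hedged sketch of such a loop, assuming the job scripts resolve their output paths relative to the directory they are submitted from:

    #!/bin/bash
    # Submit submit.sh from inside each numbered subdirectory, so the
    # job's working directory (and hence its output) stays local to it.
    for d in $(seq 1 1000); do
        ( cd "$d" && qsub submit.sh )
    done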

PBS programming

廉价感情. submitted on 2020-01-05 14:06:44
Question: Some short and probably stupid questions about PBS:

1- I submit jobs using qsub job_file. Is it possible to submit a (sub)job inside a job file?

2- I have the following script:

    qsub job_a
    qsub job_b

Before launching job_b, it would be great to have the results of job_a finished. Is it possible to put some kind of barrier or some other workaround so job_b is not launched until job_a has finished? Thanks

Answer 1: Answer to the first question: Typically you're only allowed to submit jobs from the
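For the second question, a hedged sketch of the usual PBS/Torque approach (the -W depend syntax is standard there, though the exact form can vary between schedulers):

    # Capture job_a's id, then make job_b wait until job_a exits successfully.
    JOB_A=$(qsub job_a)
    qsub -W depend=afterok:$JOB_A job_b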

$SGE_TASK_ID not getting set with qsub array grid job

限于喜欢 submitted on 2020-01-04 15:16:31
Question: With a very simple zsh script:

    #!/bin/zsh
    nums=(1 2 3)
    num=$nums[$SGE_TASK_ID]

$SGE_TASK_ID is the Sun Grid Engine task id. I'm using qsub to submit an array of jobs. I am following what is advised in the qsub manpage (http://www.clusterresources.com/torquedocs/commands/qsub.shtml#t) and submitting my array job as:

    # script name: job_script.sh
    qsub job_script.sh -t 1-3

$SGE_TASK_ID is not being set for this array job... does anyone have any ideas why? Thanks!

Answer 1: Try submitting the job like
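The answer is cut off above, but a common cause (stated here as an assumption, not quoted from it) is option ordering: arguments placed after the script name are passed to the script itself, so qsub never sees -t. Putting the option before the script fixes that:

    # qsub parses -t before the script name; $SGE_TASK_ID is then set
    # to 1, 2 and 3 in the three array tasks.
    qsub -t 1-3 job_script.sh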

Check real time output after qsub a job on cluster

这一生的挚爱 submitted on 2020-01-03 03:06:15
Question: Here is my pbs file:

    #!/bin/bash
    #PBS -N myJob
    #PBS -j oe
    #PBS -k o
    #PBS -V
    #PBS -l nodes=hpg6-15:ppn=12
    cd ${PBS_O_WORKDIR}
    ./mycommand

On the qsub documentation page, it seems like if I put the line #PBS -k o, I should be able to check the real-time output in a file named myJob.oJOBID in my home dir. However, when I check the output by tail -f or cat or more at runtime, it shows nothing in the file. Only when I terminated the job would the file show the output. Is there anything I should
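While that question stands, a hedged workaround is to bypass PBS output spooling entirely by redirecting inside the job script; stdbuf (GNU coreutils) forces line buffering so lines appear promptly. The log filename here is a placeholder:

    # Write output straight to the shared work directory instead of the spool.
    stdbuf -oL -eL ./mycommand > ${PBS_O_WORKDIR}/myJob.live.log 2>&1

You can then tail -f myJob.live.log from the login node while the job runs.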