sungridengine

Using Conda environment in SnakeMake on SGE cluster problem

独自空忆成欢 submitted on 2020-01-15 23:02:07
Question: Related: SnakeMake rule with Python script, conda and cluster. I have been trying to set up my SnakeMake pipelines to run on SGE clusters (qsub). With simple commands, or with tools that are installed directly on the computational nodes, there is no problem. However, there is a problem when I try to set up SnakeMake to download tools through Conda on the SGE nodes. My test Snakefile is:

    rule bwa_sge_c_test:
        conda:
            "bwa.yaml"
        shell:
            "bwa > snaketest.txt"

The "bwa.yaml" file is:

    channels:
      - bioconda
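For context, a minimal sketch of how such a workflow is typically launched so that each rule is submitted through qsub; this assumes snakemake and conda are on the PATH of the submit host, and is not the thread's accepted answer:

    # Run the single rule, letting snakemake build and activate the
    # conda env from bwa.yaml on first use (--use-conda), and submit
    # each job via qsub. "-V" exports the submit shell's environment.
    snakemake --use-conda --jobs 1 \
        --cluster "qsub -cwd -V" \
        bwa_sge_c_test

Note that the conda environment is created on the node that runs the rule, so the compute nodes need either network access or a pre-built environment (see snakemake's --conda-prefix option).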

Excluding nodes from qsub command under SGE

半城伤御伤魂 submitted on 2020-01-12 13:52:47
Question: I have more than 200 jobs I need to submit to an SGE cluster. I'll be submitting them to two queues. One of the queues has a machine that I don't want to submit jobs to. How can I exclude that machine? The only thing I found that might be helpful is (assuming three valid nodes available to q1, and that all the nodes available to q2 are valid):

    qsub -q q1.q@n1 q1.q@n2 q1.q@n3 q2.q

Answer 1: Assuming the machine you don't want to run on is called n4, then adding the following to your script should work:

    #$ -l h=
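The answer is cut off above; for reference, a hedged sketch of the usual host-negation request, assuming the machine to avoid is named n4 (hostnames are placeholders). SGE's hostname resource accepts negated expressions:

    #!/bin/bash
    # Request any host except n4 via the hostname complex.
    #$ -l h=!n4
    echo "running on $(hostname)"

The same request can be made at submit time; the quotes keep the shell from interpreting the exclamation mark:

    qsub -l h='!n4' myjob.sh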

$SGE_TASK_ID not getting set with qsub array grid job

限于喜欢 submitted on 2020-01-04 15:16:31
Question: With a very simple zsh script:

    #!/bin/zsh
    nums=(1 2 3)
    num=$nums[$SGE_TASK_ID]

$SGE_TASK_ID is the Sun Grid Engine task ID. I'm using qsub to submit an array of jobs. I am following what is advised in the qsub manpage (http://www.clusterresources.com/torquedocs/commands/qsub.shtml#t) and submitting my array job as:

    # script name: job_script.sh
    qsub job_script.sh -t 1-3

$SGE_TASK_ID is not being set for this array job... does anyone have any ideas why? Thanks!

Answer 1: Try submitting the job like
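The answer is truncated above; the likely fix follows from how qsub parses its command line: anything after the script name is passed to the script as positional arguments rather than interpreted by qsub, so "-t 1-3" never reaches the scheduler. Options must precede the script:

    # "-t" before the script name, so qsub actually sees it and
    # sets SGE_TASK_ID to 1, 2, 3 in the three array tasks.
    qsub -t 1-3 job_script.sh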

SnakeMake rule with Python script, conda and cluster

你。 submitted on 2020-01-01 17:56:05
Question: I would like to get snakemake to run a Python script with a specific conda environment via an SGE cluster. On the cluster I have miniconda installed in my home directory. My home directory is mounted via NFS, so it is accessible to all cluster nodes. Because miniconda is in my home directory, the conda command is not on the operating system path by default; i.e., to use conda I need to first explicitly add it to the path. I have a conda environment specification as a yaml file, which could be used
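The question is cut off above. A sketch of one workaround (an assumption, not the thread's accepted answer): put the NFS-mounted miniconda on the PATH in the submitting shell, and let qsub's -V flag carry that environment to every compute node. The miniconda path is a placeholder:

    #!/bin/bash
    # submit.sh -- hypothetical launcher run on the head node.
    # With "qsub -V" the modified PATH travels with each cluster job,
    # so conda is resolvable on the nodes as well.
    export PATH="$HOME/miniconda3/bin:$PATH"

    snakemake --use-conda --jobs 10 \
        --cluster "qsub -cwd -V"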

SGE unknown resource “nodes”

我是研究僧i submitted on 2019-12-13 08:25:17
Question: I submit a job on SGE with the -l parameter, like:

    qsub -pe orte 4 -l nodes=4 run.sh

However, the system reports:

    Unable to run job: unknown resource "nodes".

Could you tell me why, and how to solve it? Thank you very much!

Answer 1: With Sun Grid Engine, the correct resource parameter is h, not nodes:

    echo 'echo `hostname`' | qsub -l h=<some_hostname>

Using this example, you should see the hostname you specified in the standard output file.

Answer 2: There isn't a nodes resource. Instead you request
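Answer 2 is truncated; picking up its point as a sketch: under SGE you request slots through a parallel environment rather than a node count, so the "-l nodes=4" part is simply dropped (this assumes a PE named orte is configured on the cluster):

    # Request 4 slots from the "orte" parallel environment; there is
    # no "nodes" resource to request.
    qsub -pe orte 4 run.sh

How those slots map onto machines is decided by the PE's allocation_rule (for example $pe_slots to keep all slots on one host), not by a per-job "nodes" request.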

SGE Cluster - script fails after submission - works in terminal

谁说我不能喝 submitted on 2019-12-12 03:16:35
Question: I have a script that I am trying to submit to an SGE cluster (on Red Hat Linux). The very first part of the script derives the current folder name from the full CWD path, as a variable to use downstream:

    #!/usr/bin/bash
    #
    #$ -cwd
    #$ -A username
    #$ -M user@server
    #$ -j y
    #$ -m aes
    #$ -N test
    #$ -o test.log.txt

    echo 'This is a test.'
    result="${PWD##*/}"
    echo $result

In bash, this works as expected:

    -bash-4.1$ pwd
    /home/user/test
    -bash-4.1$ bash test.sh
    This is a test.
    test

When I
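The question is cut off above. A likely cause, offered as an assumption consistent with the symptoms: by default SGE runs the job under the queue's configured shell rather than bash (the shebang line is often ignored), and the ${PWD##*/} expansion is bash syntax. Forcing bash with the -S directive is a common fix:

    #!/bin/bash
    #$ -cwd
    #$ -S /bin/bash
    #$ -N test
    #$ -o test.log.txt

    # With -S /bin/bash the bash-specific expansion below behaves the
    # same under SGE as it does in an interactive shell.
    echo 'This is a test.'
    result="${PWD##*/}"
    echo "$result"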

“qsub -now” equivalent using bsub

坚强是说给别人听的谎言 submitted on 2019-12-11 16:49:29
Question: In SGE, we have:

    qsub -now yes/no <command>

With "-now yes" the job is scheduled immediately (if possible) or not at all; we are not put in the pending queue. With "-now no" the job is put in the pending queue if it cannot be executed immediately. In LSF, qsub's equivalent is bsub, and with bsub we are always put in the pending queue if the job cannot be executed immediately; there is no option like qsub's "-now yes". Does bsub have something like "qsub -now"? P.S.: One solution is that we can check for
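The postscript is cut off; finishing the thought as a sketch: submit the job, check shortly afterwards whether it left the PEND state, and remove it if not, which approximates "-now yes". bsub, bjobs, and bkill are standard LSF commands, but the command name and sleep interval here are placeholders:

    #!/bin/bash
    # Capture the job id from bsub's "Job <1234> is submitted ..." line.
    jobid=$(bsub mycommand 2>&1 | sed -n 's/Job <\([0-9]*\)>.*/\1/p')

    sleep 5   # give the scheduler a moment to dispatch the job

    # Default bjobs output: JOBID USER STAT ... -- STAT is column 3.
    state=$(bjobs "$jobid" 2>/dev/null | awk 'NR==2 {print $3}')
    if [ "$state" = "PEND" ]; then
        bkill "$jobid"
        echo "job $jobid could not start immediately; removed"
    fi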

SGE submitted job doesn't run

人盡茶涼 submitted on 2019-12-11 12:59:23
Question: I'm using Sun Grid Engine on my Ubuntu 14.04 machine to queue jobs to be run on my multicore CPU. I've installed and set up SGE on my system, but I have a problem when testing it. I've created a "hello_world" directory containing two shell scripts, "hello_world.sh" and "hello_world_qsub.sh": the first contains a simple command, and the second contains the qsub command to submit the first script as a job. Here's what "hello_world.sh" contains:

    #!/bin/bash
    echo "Hello world" > /home/theodore/tmp
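The excerpt ends mid-path. For reference, a minimal working pair along the lines the question describes (names and paths are placeholders, not the asker's files):

    #!/bin/bash
    # hello_world.sh -- the payload; writes into the submit directory
    # (enabled by -cwd at submission) rather than a hard-coded path.
    echo "Hello world" > hello_world.out

And a submit script:

    #!/bin/bash
    # hello_world_qsub.sh -- submits the payload to SGE, forcing bash
    # as the job shell and running in the current working directory.
    qsub -cwd -S /bin/bash ./hello_world.sh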

Force shell to use python from conda environment in SunGrid engine

早过忘川 submitted on 2019-12-11 12:52:11
Question: I'm trying to execute a Python file on a SunGrid engine, running it from my anaconda3 environment. My code is simple:

    from __future__ import print_function
    import urllib3
    import numpy as np

    if __name__ == '__main__':
        print('Hellooo')

I'm calling it like:

    qsub -V -b n -cwd -pe mp 3 playground.py

but I am getting this error:

    from: can't read /var/mail/__future__
    import: unable to open X server `' @ error/import.c/ImportImageCommand/358.
    /var/spool/gridengine/execd/cluster-rp-02
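The error output means a shell, not Python, executed the file: "from" and "import" were run as shell commands ("import" resolving to ImageMagick's import). A sketch of the usual fixes; the anaconda path below is a placeholder:

    # Option 1: tell SGE which interpreter should run the script.
    qsub -V -cwd -pe mp 3 -S "$HOME/anaconda3/bin/python" playground.py

    # Option 2: submit a small shell wrapper instead of the .py file.
    # run_playground.sh:
    #   #!/bin/bash
    #   "$HOME/anaconda3/bin/python" playground.py
    # then:
    qsub -V -cwd -pe mp 3 run_playground.sh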

Setting SGE for running an executable with different input files on different nodes

北慕城南 submitted on 2019-12-11 09:33:08
Question: I used to work with a cluster using the SLURM scheduler, but now I am more or less forced to switch to an SGE-based cluster, and I'm trying to get the hang of it. Under SLURM, the task involved running an executable over N input files, driven by a SLURM configuration file of this form:

    # slurmConf.conf -- SLURM configuration file
    0 /path/to/exec /path/to/input1
    1 /path/to/exec /path/to/input2
    2 /path/to/exec /path/to/input3
    3 /path/to/exec /path/to/input4
    4 /path/to/exec /path/to
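The excerpt is truncated. The SGE counterpart, sketched here with the same placeholder paths, is an array job: one task per input file, with $SGE_TASK_ID selecting the input in place of the SLURM conf file:

    #!/bin/bash
    #$ -cwd
    #$ -S /bin/bash
    #$ -t 1-4
    # run_array.sh -- SGE_TASK_ID runs 1..4, one value per array task.

    inputs=(/path/to/input1 /path/to/input2 /path/to/input3 /path/to/input4)
    # bash arrays are 0-indexed while SGE task ids start at 1.
    /path/to/exec "${inputs[$((SGE_TASK_ID - 1))]}"

Submit it with "qsub run_array.sh"; each array task is scheduled independently, much like the indexed lines of the SLURM configuration file above.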