
How to use LSB_JOBINDEX in bsub array job arguments in Platform LSF?

只谈情不闲聊 submitted on 2019-12-11 05:29:16
Question: I would like to pass LSB_JOBINDEX as an argument to my script instead of reading it from an environment variable. This makes my script more LSF-agnostic and avoids creating a helper script that reads the environment variable. However, I was not able to use LSB_JOBINDEX in arguments: it only works as part of the initial command string. For example, from a bash shell, I use the test command: bsub -J 'myjobname[1-4]' -o bsub%I.log \ 'echo $LSB_JOBINDEX' \ '$LSB_JOBINDEX' \ \$LSB_JOBINDEX \ '$LSB
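A common workaround, sketched here under the assumption that the job runs under a POSIX shell on the execution host: single-quote the entire command at submission so that `$LSB_JOBINDEX` reaches the remote shell unexpanded, e.g. `bsub -J 'myjobname[1-4]' -o 'bsub%I.log' 'myscript.sh $LSB_JOBINDEX'` (where `myscript.sh` is a hypothetical script name). The quoting behavior itself can be demonstrated locally without LSF:

```shell
# The single quotes keep '$LSB_JOBINDEX' as a literal string, so expansion
# happens only when a shell later runs the command with the variable set in
# its environment -- which is exactly what LSF does for each array element.
cmd='echo index=$LSB_JOBINDEX'
LSB_JOBINDEX=3 sh -c "$cmd"   # prints: index=3
```

The same mechanism explains the question's observation: unquoted or double-quoted `$LSB_JOBINDEX` is expanded by the submitting shell (where the variable is unset), while single quotes defer expansion to the execution host.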

Redirect stderr through grep -v in LSF batch job

▼魔方 西西 submitted on 2019-12-11 02:12:08
Question: I'm using a library that generates a whole ton of output to stderr (and there is really no way to suppress the output directly in the code; it is ROOT's Minuit2 minimizer, which is known for not having a way to suppress its output). I'm running batch jobs through the LSF system, and the error output files are so big that they exceed my disk quota. Erk. When I run locally in a shell, I do: python main.py 2> >( grep -v Minuit2 2>&1 ) to suppress the output, as is done here. This works great, but
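Process substitution (`>(...)`) is a bash feature; batch jobs are often executed under plain `/bin/sh`, where it fails. A POSIX-portable sketch using only pipes and redirections — here `noisy` is a stand-in for `python main.py`, and the filename and filter pattern are taken from the question:

```shell
# Stand-in for `python main.py`: one real stdout line, noisy stderr.
noisy() {
    echo "real result"
    echo "Minuit2 chatter" >&2
    echo "real error" >&2
}
# 2>&1 duplicates stderr onto the current stdout (the pipe) *before*
# >out.log moves stdout to a file, so only stderr flows through grep -v.
# This works under plain /bin/sh, with no bash process substitution.
noisy 2>&1 >out.log | grep -v Minuit2
```

In a bsub job script, the equivalent line would be `python main.py 2>&1 >out.log | grep -v Minuit2`, leaving the program's stdout in `out.log` and only the filtered stderr in the job's error file.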

How to optimize multithreaded program for use in LSF?

ⅰ亾dé卋堺 submitted on 2019-12-06 04:17:44
I am working on a multithreaded number-crunching app, let's call it myprogram . I plan to run myprogram on IBM's LSF grid. LSF allows a job to be scheduled on CPUs from different machines. For example, bsub -n 3 ... myprogram ... can allocate two CPUs from node1 and one CPU from node2. I know that I can ask LSF to allocate all 3 cores on the same node, but I am interested in the case where my job is scheduled onto different nodes. How does LSF manage this? Will myprogram be run as two different processes on node1 and node2? Does LSF automatically manage data transfer between node1 and node2?
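The short answer, sketched here from standard LSF behavior: `bsub -n 3` only reserves three slots (possibly on different hosts) — it launches `myprogram` once, on one host. A pthreads/OpenMP program cannot use slots on another node (threads need a shared address space), and LSF does not move data between nodes for you; spanning nodes requires a distributed framework such as MPI. To make the reservation actually usable by a threaded program, the usual resource requirement keeps all slots on one host (needs an LSF cluster to run; `myprogram` is the question's placeholder name):

```shell
# span[hosts=1] asks LSF to place all 3 reserved slots on a single host,
# so the threads of the one myprogram process can actually use them.
bsub -n 3 -R "span[hosts=1]" ./myprogram
```

Without such a constraint, slots granted on the second node simply sit reserved but unused by a purely multithreaded program.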

SLURM display the stdout and stderr of an unfinished job

喜夏-厌秋 submitted on 2019-12-05 21:01:19
Question: I used to use a server with LSF but now I just transitioned to one with SLURM. What is the equivalent command of bpeek (from LSF) in SLURM? (bpeek displays the stdout and stderr output of an unfinished job.) I couldn't find the documentation anywhere. If you have some good references for SLURM, please let me know as well. Thanks!

Answer 1: You might also want to have a look at the sattach command.

Answer 2: I just learned that in SLURM there is no need to do bpeek to check the current standard output
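Answer 2 is cut off, but the usual point is that SLURM writes a job's stdout/stderr to its output file while the job is still running (by default `slurm-<jobid>.out`, unless `--output` was given), so "peeking" is just tailing that file. A small sketch with a simulated output file; `12345` is a hypothetical job id:

```shell
# Simulate the output file of a running job (hypothetical job id 12345):
printf 'step 1 done\nstep 2 done\n' > slurm-12345.out
# Peek at the most recent output; for a live job you would normally run
# `tail -f slurm-12345.out`, or `sattach 12345.0` to attach to its I/O.
tail -n 1 slurm-12345.out   # prints: step 2 done
```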

LSF (bsub): how to specify a single “wrap-up” job to be run after all others finish?

给你一囗甜甜゛ submitted on 2019-11-29 07:19:24
BASIC PROBLEM: I want to submit N + 1 jobs to an LSF-managed Linux cluster in such a way that the (N+1)-st "wrap-up" job is not run until all the preceding N jobs have finished. EXTRA: If possible, it would be ideal if I could arrange matters so that the (N+1)-st ("wrap-up") job receives, as its first argument, a value of 0 (say) if all the previous N jobs terminated successfully, and a nonzero value otherwise. This problem (or at least the part labeled "BASIC PROBLEM") is vastly simpler than what LSF's bsub appears to be designed to handle, so I have a hard time wading through
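The standard tool for the BASIC PROBLEM is bsub's `-w` dependency expression: submit the N jobs as one array, then make the wrap-up job depend on that array. A sketch (job names and the two scripts are hypothetical; this needs an LSF cluster to run):

```shell
# Submit the N = 10 worker jobs as a single job array named "work".
bsub -J "work[1-10]" ./worker.sh
# done("work") is satisfied only when every array element exits with 0;
# ended("work") is satisfied once they have all finished, success or not.
bsub -J wrapup -w 'ended("work")' ./wrapup.sh
```

For the EXTRA part, `-w` alone does not pass a status argument to the dependent job; one approach is to submit the wrap-up with `ended("work")` and have `wrapup.sh` itself query the array elements' exit statuses (e.g. via `bjobs`) to decide whether everything succeeded.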