jobs

Difference between Active Job/Mailer's `deliver_now` and `deliver_later`

Submitted by 本小妞迷上赌 on 2019-12-06 17:06:10
Question: The common pattern for interfacing with Active Job in Rails is to set up a job with a perform() method that gets called via perform_now or perform_later. In the special case of mailers, you can directly call deliver_now or deliver_later, since Active Job is well integrated with Action Mailer. The Rails documentation has the following comments: # If you want to send the email now use #deliver_now UserMailer.welcome(@user).deliver_now # If you want to send the email through Active
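The distinction can be illustrated with a small stand-in written in Python (no Rails required; deliver_now, deliver_later, run_worker, and the lists here are hypothetical stand-ins for the Rails machinery, not its API):

```python
# Stand-in for the two delivery modes (not the real Rails API):
# deliver_now performs the work inline, while deliver_later only enqueues
# a job that a background worker performs afterwards.
queue = []   # pending Active Job-style jobs
sent = []    # emails actually delivered

def deliver_now(message):
    sent.append(message)           # synchronous: delivered when the call returns

def deliver_later(message):
    queue.append(message)          # asynchronous: enqueue only, return immediately

def run_worker():
    while queue:                   # a worker process drains the queue later
        sent.append(queue.pop(0))

deliver_now("welcome-1")
deliver_later("welcome-2")
print(sent)        # ['welcome-1']  (the 'later' mail is still queued)
run_worker()
print(sent)        # ['welcome-1', 'welcome-2']
```

The practical consequence in Rails is the same: deliver_later returns as soon as the job is enqueued, and the mail is only sent once a queue backend worker runs it.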

Eclipse does not show progress bar of user threads

Submitted by 家住魔仙堡 on 2019-12-06 16:57:04
I created some user jobs on start-up of Eclipse, but after launching the workbench I am not able to see the progress bar. Do I have to register these threads anywhere other than making them user jobs? protected IStatus run(IProgressMonitor monitor) { monitor.beginTask("Download", IProgressMonitor.UNKNOWN); for (ProxyBean network : ProxyBean.get()) { // do something } monitor.done(); return Status.OK_STATUS; } I initialize it this way: job = new MyJob(); job.setUser(true); job.schedule(); Answer: Check whether you are applying it to the correct shell, or whether the job's execution time is so short that you cannot see

Hadoop job tracker cannot start up

Submitted by 限于喜欢 on 2019-12-06 12:07:41
Question: Under the Single Node Setup I try to run a single-node example. The jobtracker, however, fails to start with the exception: 2013-04-30 17:12:54,984 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2013-04-30 17:12:54,994 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 2013-04-30 17:12:54,995 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s

How to run two or more instances of an Oracle job at the same time?

Submitted by 左心房为你撑大大i on 2019-12-06 11:53:26
I have defined a function that, when called, defines an Oracle job with dbms_scheduler.create_job that runs a stored procedure with arguments: my Function Begin job created and executed from here end; My problem is that while one instance of my job is executing, I cannot execute another instance of that job. As I said, my job executes a stored procedure that takes arguments, so I want to run that job with different argument values at the same time, but I cannot. Is there any property or way that can help me do this? Answer: Give the jobs you create random names. Source: https:/
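The answer's idea, a fresh name per submission so concurrent instances never collide, can be sketched in Python (unique_job_name is a hypothetical helper; the actual job creation still goes through dbms_scheduler.create_job with this name):

```python
import uuid

def unique_job_name(prefix="MY_JOB"):
    # dbms_scheduler refuses to create a job whose name already exists,
    # so derive a fresh name for every submission. Keep the suffix short:
    # older Oracle versions cap identifiers at 30 characters.
    return f"{prefix}_{uuid.uuid4().hex[:8].upper()}"

a = unique_job_name()
b = unique_job_name()
print(a != b)  # True: two submissions never share a job name
```

With a distinct name per call, each create_job submission defines an independent job, so several instances of the same stored procedure can run concurrently with different arguments.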

Accessing Grails application config from a Quartz job

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-06 07:16:56
Using Grails 2.4.2 and quartz:1.0.2, I'm trying to gain access to configuration properties: class MyJob { def grailsApplication int propA def MyJob() { propA = grailsApplication.config.foo.bar.propAVal } ... } grailsApplication, however, doesn't get injected and is null. "Can't access any bean from Quartz Job in Grails" supposedly relates to this, but I don't really see how the marked answer resolves the OP's question. Help? Thanks. Answer: The problem is probably that you are accessing grailsApplication in the constructor, where it is not injected yet. I recommend dropping the unnecessary class property int propA and do
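The underlying problem, reading an injected dependency in the constructor before the container has set it, is easy to reproduce outside Grails. A minimal sketch in Python (MyJob and the dict-based config are illustrative stand-ins, not the Grails API):

```python
class MyJob:
    def __init__(self):
        # Injection happens *after* construction, so the dependency must
        # not be dereferenced here: it is still None at this point.
        self.grails_application = None

    def execute(self):
        # Safe: by the time the scheduler calls execute(), the container
        # has already injected the dependency.
        return self.grails_application["config"]["foo"]["bar"]["propAVal"]

job = MyJob()                 # at this point injection has not happened yet
job.grails_application = {"config": {"foo": {"bar": {"propAVal": 42}}}}
print(job.execute())  # 42
```

The same rule applies in the Grails job: read grailsApplication.config inside execute(), not in the constructor.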

Jenkins SYSTEM user removes custom workspace configuration

Submitted by 大兔子大兔子 on 2019-12-06 06:23:52
I have a job NightlyTest-Winx64 configured to use the custom workspace D:\builds\build-dir\Quick-Winx64-Trunk. Quick-Winx64-Trunk is a Jenkins job that checks out the source repository, compiles, archives some artifacts, and then triggers the NightlyTest-Winx64 job. It triggers NightlyTest-Winx64 to run on the same node using the same workspace, so that we're not checking out and compiling twice and only need to run the tests. On the first run of NightlyTest-Winx64 the custom workspace exists and is used as expected. However, during this first run the SYSTEM user removes the customWorkspace

Why do I get the message “Suspended (tty input)” when I run my script in the background?

Submitted by 梦想的初衷 on 2019-12-06 04:28:03
Question: I've written a tcsh script to clear garbage data on a cluster; the code is: set hosts = $1 set clear_path = $2 foreach i ($hosts) rsh $i rm -rvf $clear_path end When I run this script in the background like this: disk_clean.sh hosts_all /u0/data/tmp > log & the job gets stuck and shows: [1] + Suspended (tty input) If I run it in the foreground, it finishes normally. Why does this happen? How can I run this script in the background and redirect the output into a
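The likely cause is that rsh reads stdin even when it has nothing to send, and a background job that tries to read from the terminal is stopped with SIGTTIN. Redirecting stdin from /dev/null (or using rsh -n) avoids the read. The effect can be sketched in Python, with cat standing in for rsh:

```python
import subprocess

# 'cat' stands in for rsh: run in the background it would block reading the
# terminal, and the shell would suspend it with SIGTTIN. With stdin
# redirected from /dev/null it sees EOF and exits at once; the same fix
# as writing `rsh -n ...` or `rsh ... < /dev/null` in the tcsh loop.
proc = subprocess.run(["cat"], stdin=subprocess.DEVNULL,
                      capture_output=True, text=True)
print(proc.returncode)  # 0: the command finished instead of hanging
```

Applied to the script, each loop iteration becomes rsh -n $i rm -rvf $clear_path, and the background run no longer suspends on tty input.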

pg: importing data from CSV files into the database

Submitted by 戏子无情 on 2019-12-05 16:38:47
Prerequisites: pg installed in a Linux environment; the CSV tar.gz packages already uploaded to the target path (on Linux), with table names ideally matching file names; all tables and schemas already created. Getting started: 1. In the directory containing the CSV tarballs, unpack them all: ls *.tar.gz | xargs -n1 tar xzvf 2. Edit the import script and run it: nohup psql -d database_name -U user_name -c "copy schema.table_name from 'file_path/file_name.csv' " > file_name.log 2>&1 & If there is only one database, you can skip specifying it; after logging in as gpadmin, just run: nohup psql -c "copy schema.table_name from 'file_path/file_name.csv' " > file_name.log 2>&1 & 3. Check the result: typing jobs in the console shows the import status: Done means the import finished; Running means it is still in progress; Exit means it failed, in which case check the log to locate the cause, fix it, and re-run the import. Other psql tips: typing psql in the console enters the pg command line, where you can run SQL queries; if you get a "schema does not exist" error, specify the user name and database when entering psql: psql -d database_name -U user_name Source: https://my.oschina.net/u

Monthly jobs on every 30th day using Quartz

Submitted by 。_饼干妹妹 on 2019-12-05 15:14:45
Guys, I have monthly jobs scheduled (using Quartz) by users. Users provide a starting date for the first job to run; it could be any day of the month, 1-31. My question is how to schedule this using a cron trigger, keeping in mind that not all months have a 29th, 30th, or 31st. In such a case the job should run on the closest previous day of the month. So, say April has only 30 days; then the job has to run on the 30th of April. Can it be done using a single cron trigger? Or should it be a combination of triggers? I tried to play with CronExpression to see how it handles such cases: CronExpression ce = new CronExpression("0 0 0 30 JAN
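A single standard cron expression cannot express "the 30th, or the last day of shorter months" directly (Quartz's L day-of-month token gives the last day, but not a clamped target day). The date logic itself is simple; a sketch in Python for illustration (run_day is a hypothetical helper):

```python
import calendar
from datetime import date

def run_day(year, month, target=30):
    # Clamp the user's chosen day to the month's actual length, so
    # February falls back to the 28th (or 29th in leap years).
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(target, last_day))

print(run_day(2019, 4))       # 2019-04-30 (April has exactly 30 days)
print(run_day(2019, 2))       # 2019-02-28 (clamped to February's last day)
print(run_day(2019, 1, 31))   # 2019-01-31
```

One way to use this with Quartz is to schedule each month's firing as a one-shot trigger computed by logic like the above, rather than trying to encode the clamping in a cron expression.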

Submit job with python code (mpi4py) on HPC cluster

Submitted by a 夏天 on 2019-12-05 12:11:29
I am working on Python code with MPI (mpi4py) and I want to run it across many nodes (each node has 16 processors) in a queue on an HPC cluster. My code is structured as below: from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() count = 0 for i in range(1, size): if rank == i: for j in range(5): res = some_function(some_argument) comm.send(res, dest=0, tag=count) I am able to run this code perfectly fine on the head node of the cluster using the command $ mpirun -np 48 python codename.py Here "code" is the name of the Python script, and in the
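What changes between the head node and the queue is only how mpirun is launched: the scheduler allocates the nodes, and a batch script runs the same command inside the allocation. A hypothetical Slurm batch script for 3 nodes x 16 cores = 48 ranks (this is a sketch of a config fragment; directive names and module names are site-specific, and PBS/Torque clusters use #PBS lines instead):

```shell
#!/bin/bash
#SBATCH --job-name=mpi4py-run
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=16     # 3 x 16 = 48 MPI ranks in total

# Load whatever provides MPI and mpi4py on your cluster (site-specific).
module load python mpi4py

# Same command as on the head node; the scheduler supplies the node list.
mpirun -np 48 python codename.py
```

The script would be submitted with sbatch, and the 48 ranks are then spread across the 3 allocated nodes instead of all running on the head node.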