jobs

Spring Batch: retrieve the list of objects from a file and write a single line to the output file

牧云@^-^@ submitted on 2019-12-08 08:56:05
Question: I read a CSV file as input using Spring Batch, and I have two CSV files as output. The first file contains about 100 lines. The input file contains five columns: id, typeProduct and price. I have just two types of product; I iterate through all these lines and write two output files. For both files, a single line containing the type of product and the sum of the prices of all the products that have the same type. So my need is, before writing to the output files, I want to get all lines in a list to
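
One common approach (a sketch of the aggregation step only, not the asker's actual Spring Batch wiring) is to accumulate items in memory, e.g. in a custom ItemWriter or a StepExecutionListener, and compute one sum per product type before writing. Stripped of the Spring Batch plumbing, and assuming the columns are id, typeProduct and price as in the question, the grouping logic looks like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TypeSums {
    // Sum the price column per product type; each line is "id,typeProduct,price".
    public static Map<String, Double> sumByType(String[] csvLines) {
        Map<String, Double> sums = new LinkedHashMap<>();
        for (String line : csvLines) {
            String[] cols = line.split(",");
            String type = cols[1];
            double price = Double.parseDouble(cols[2]);
            sums.merge(type, price, Double::sum); // add to the running total for this type
        }
        return sums;
    }

    public static void main(String[] args) {
        String[] lines = {"1,A,10.0", "2,B,5.5", "3,A,2.5"};
        // One output line per type, e.g. "A;12.5" and "B;5.5".
        sumByType(lines).forEach((t, s) -> System.out.println(t + ";" + s));
    }
}
```

With two product types, the resulting map has exactly two entries, one per output file.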

How to run two or more instances of an Oracle job at the same time?

断了今生、忘了曾经 submitted on 2019-12-08 02:49:45
Question: I have defined a function that, when called, defines an Oracle job with dbms_scheduler.create_job that runs a stored procedure with arguments: my Function Begin job created and executed from here end; My problem is that while an instance of my job is executing, I cannot execute another instance of that job. As I said, my job executes a stored procedure that has arguments, so I want to execute that job with different argument values at the same time, but I cannot. Is there any
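
DBMS_SCHEDULER job names are unique database objects, so a second CREATE_JOB with the same name fails while the first instance exists. One common workaround is to create a uniquely named, auto-dropping one-off job per invocation (server-side, DBMS_SCHEDULER.GENERATE_JOB_NAME can supply the unique name). A sketch of generating such a PL/SQL block from Java — the job and procedure names here are hypothetical, not from the question:

```java
import java.util.concurrent.atomic.AtomicLong;

public class OneOffJob {
    private static final AtomicLong SEQ = new AtomicLong();

    // Build an anonymous PL/SQL block that creates a uniquely named,
    // auto-dropping job running a hypothetical my_proc with one argument.
    public static String createJobSql(String arg) {
        String jobName = "MY_JOB_" + SEQ.incrementAndGet();
        return "BEGIN\n"
             + "  DBMS_SCHEDULER.CREATE_JOB(\n"
             + "    job_name   => '" + jobName + "',\n"
             + "    job_type   => 'PLSQL_BLOCK',\n"
             + "    job_action => 'BEGIN my_proc(''" + arg + "''); END;',\n"
             + "    enabled    => TRUE,\n"
             + "    auto_drop  => TRUE);\n"
             + "END;";
    }

    public static void main(String[] args) {
        // Two calls yield two distinct job names, so both jobs can run concurrently.
        System.out.println(createJobSql("first"));
        System.out.println(createJobSql("second"));
    }
}
```

Each generated block would be executed through a CallableStatement; because every job gets a fresh name, several instances can run with different argument values at the same time.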

Accessing Grails application config from a Quartz job

依然范特西╮ submitted on 2019-12-07 23:40:00
Question: Using Grails 2.4.2 and quartz:1.0.2, I'm trying to gain access to configuration properties: class MyJob { def grailsApplication int propA def MyJob() { propA = grailsApplication.config.foo.bar.propAVal } ... } grailsApplication, however, doesn't get injected, and is null. "Can't access any bean from Quartz Job in Grails" supposedly relates to this, but I don't really see how the marked answer resolves the OP's question :-/ help? 10x Answer 1: The problem is probably that you are accessing
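
The usual cause (a sketch of the timing, not Grails-specific code): dependency injection happens after the constructor returns, so reading grailsApplication inside the constructor sees null, while reading it inside execute() works. A plain-Java illustration of that ordering:

```java
public class InjectionTiming {
    static class Job {
        Object injected;                 // set by the container after construction
        final Object seenInCtor;
        Job() { seenInCtor = injected; } // too early: the field is still null here
        Object seenAtExecute() { return injected; } // safe: read lazily at run time
    }

    // Simulates what a DI container does with a bean.
    public static Job wire() {
        Job job = new Job();     // 1. the container constructs the bean
        job.injected = "config"; // 2. only then does it inject dependencies
        return job;
    }

    public static void main(String[] args) {
        Job job = wire();
        System.out.println("ctor saw: " + job.seenInCtor);
        System.out.println("execute sees: " + job.seenAtExecute());
    }
}
```

Moving the `grailsApplication.config` read from the job's constructor into its execute method follows the same pattern.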

JNA in Windows: auto terminate child processes using Windows Jobs

北战南征 submitted on 2019-12-07 23:20:25
Question: I need to launch a child process from my Java application on Windows, and my Java app may eventually be killed/terminated via Task Manager. So I need to "link" this child process to the parent process, so that both are terminated if the parent process terminates. In the Windows API we have CreateJobObject and also: SetInformationJobObject, the JOBOBJECT_EXTENDED_LIMIT_INFORMATION structure, the JOBOBJECT_BASIC_LIMIT_INFORMATION structure, and the limit flag JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE. Based on the

Selecting tasks to execute when Spring Boot starts

◇◆丶佛笑我妖孽 submitted on 2019-12-07 17:03:39
Once, someone asked me: "If I want to execute some tasks when Boot starts, how do I do it?" I foolishly answered: put them in the Spring auto-configuration items and have them injected when the Spring container starts, or use a dynamic proxy and write an aspect. Although the approaches above may work, they are somewhat complicated. In fact, Boot provides a mechanism for tasks that run right after startup: CommandLineRunner. The source comments describe it as: "Spring Batch jobs. Runs all jobs in the surrounding context by default. Can also be used to launch a specific job by providing a jobName." That is, it starts batch-processing tasks as soon as the Spring container starts; they are loaded and run along with Spring. Usage: define a model class that implements this interface and overrides the run method. package org.springboot.sample.runner; import org.springframework.boot.CommandLineRunner; import org.springframework.stereotype.Component; @Component public class MyStartupRunner implements CommandLineRunner { @Override
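
What the container does is essentially: after startup completes, look up every CommandLineRunner bean and call run(args) on each. A plain-Java emulation of that contract (the real interface is org.springframework.boot.CommandLineRunner; the names below are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class StartupRunners {
    // Same single-method contract as Spring Boot's CommandLineRunner.
    interface Runner { void run(String... args); }

    static final List<String> LOG = new ArrayList<>();

    // Boot invokes every registered runner once the context is ready.
    public static void runAll(List<Runner> runners, String... args) {
        for (Runner r : runners) r.run(args);
    }

    public static void main(String[] args) {
        runAll(List.of(
            a -> LOG.add("init caches"),
            a -> LOG.add("launch batch jobs")
        ));
        System.out.println(LOG);
    }
}
```

In real Spring Boot, each `@Component` implementing CommandLineRunner is picked up automatically, and `@Order` controls the invocation order when there are several.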

Submit job with python code (mpi4py) on HPC cluster

↘锁芯ラ submitted on 2019-12-07 10:51:53
Question: I am working on Python code with MPI (mpi4py) and I want to run my code across many nodes (each node has 16 processors) in a queue on an HPC cluster. My code is structured as below: from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() count = 0 for i in range(1, size): if rank == i: for j in range(5): res = some_function(some_argument) comm.send(res, dest=0, tag=count) I am able to run this code perfectly fine on the head node of the cluster using the

Submit Spark job on Yarn cluster

Deadly submitted on 2019-12-07 09:08:47
Question: I have been struggling for more than two days now with the following problem. I wrote a basic "HelloWorld" script in Scala: object Hello extends App{ println("WELCOME TO A FIRST TEST WITH SCALA COMPILED WITH SBT counting fr. 1:15 with sleep 1") val data = 1 to 15 for( a <- data ){ println( "Value of a: " + a ) Thread sleep 1000 } I then compiled it with SBT in order to get a compiled JAR. I then transferred everything to a cluster (a Hortonworks sandbox running on a virtual Linux

Monthly jobs on every 30th day using Quartz

假装没事ソ submitted on 2019-12-07 08:42:30
Question: Guys, I have monthly jobs scheduled (using Quartz) by users. Users provide a starting date for the first job to run; it could be any day of the month, 1-31. My question is how to schedule this using a cron trigger, bearing in mind that not all months have a 31st, 30th or 29th day. In such a case the job should run on the closest previous day of the month. So, let's say April has only 30 days; then the job has to run on the 30th of April. Can it be done using a single cron trigger? Or should it be a combination of triggers? I tried to play with
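
A single standard cron expression cannot express "the 30th, or the last day if the month is shorter" (Quartz's L character only covers the true last day, or a fixed offset from it). One alternative is to compute each month's fire day programmatically and schedule a simple trigger for it, clamping the requested day to the month's length. A java.time sketch of the date logic:

```java
import java.time.LocalDate;
import java.time.YearMonth;

public class MonthlyFireDay {
    // Requested day is 1-31, clamped to the actual length of the given month:
    // e.g. requesting day 30 in February 2019 yields 2019-02-28.
    public static LocalDate fireDate(YearMonth month, int requestedDay) {
        int day = Math.min(requestedDay, month.lengthOfMonth());
        return month.atDay(day);
    }

    public static void main(String[] args) {
        System.out.println(fireDate(YearMonth.of(2019, 4), 31)); // clamps to the 30th
        System.out.println(fireDate(YearMonth.of(2019, 2), 30)); // clamps to the 28th
    }
}
```

The job itself can then reschedule its own next run from this computation, instead of relying on one cron expression to cover every month length.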

How to retrain a machine learning model on SAP Leonardo

独自空忆成欢 submitted on 2019-12-06 22:02:15
Jerry's two previous articles introduced how to consume the pre-trained machine learning models on SAP Leonardo via RESTful APIs: how to consume SAP Leonardo's machine learning APIs in a web application, and how applications deployed in the SAP Cloud Platform CloudFoundry environment consume them. Jerry mentioned then that the Product Image Classification API only supports 29 product categories: if the application we are developing needs additional product categories, we have to provide images of those categories ourselves and retrain the model. Below are the steps for retraining a machine learning model on SAP Leonardo. Suppose we want the Product Image Classification model, after retraining, to recognize different kinds of flowers; then we first need to obtain a large number of flower images. The TensorFlow website thoughtfully provides learners who want to practice model training with an archive containing a large number of images of various flowers: http://download.tensorflow.org/example_images/flower_photos.tgz The dataset SAP Leonardo accepts for retraining a model must follow this hierarchy: the three folders training, validation and test each contain subfolders named after the product categories, and the data is split in an 8:1:1 ratio. With the training data in hand, the next step is to upload it to SAP
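
The 8:1:1 split into the training/validation/test hierarchy can be scripted before uploading. A minimal sketch (the folder names follow the structure described above; the round-robin assignment is my own assumption — any 8:1:1 partition works):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

public class DatasetSplit {
    // Assign file i of a category to training/validation/test in an 8:1:1 pattern.
    public static String bucketFor(int index) {
        int slot = index % 10;
        if (slot < 8) return "training";
        return slot == 8 ? "validation" : "test";
    }

    // Copy each category's images into <root>/<bucket>/<category>/.
    public static void split(Path root, String category, List<Path> images) throws IOException {
        for (int i = 0; i < images.size(); i++) {
            Path dir = root.resolve(bucketFor(i)).resolve(category);
            Files.createDirectories(dir);
            Files.copy(images.get(i), dir.resolve(images.get(i).getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) {
        // Of every 10 files, 8 go to training, 1 to validation, 1 to test.
        System.out.println(bucketFor(0) + " " + bucketFor(8) + " " + bucketFor(9));
    }
}
```

Running `split` once per flower category (e.g. each subfolder of the extracted flower_photos archive) produces the category-named subfolders under training, validation and test that SAP Leonardo expects.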

Android JobScheduler onStartJob called multiple times

a 夏天 submitted on 2019-12-06 17:45:23
Question: The JobScheduler calls onStartJob() multiple times, although the job has finished. Everything works fine if I schedule one single job and wait until it has finished. However, if I schedule two or more jobs with different IDs at the same time, then onStartJob() is called again after invoking jobFinished(). For example, I schedule job 1 and job 2 with exactly the same parameters except the ID; then the order is: onStartJob() for job 1 and job 2. Both jobs finish, so jobFinished() is invoked for
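
Until the root cause is found, one defensive pattern is to make onStartJob idempotent: remember which job IDs already completed and return false (no work pending) for repeat deliveries. The guard logic, separated from the Android API (JobService, JobParameters and jobFinished are omitted; this plain-Java sketch only shows the deduplication):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class JobGuard {
    private final Set<Integer> finished = ConcurrentHashMap.newKeySet();

    // Mirrors onStartJob's contract: return true only if work is still pending
    // for this job ID; a duplicate delivery for a finished ID is ignored.
    public boolean shouldStart(int jobId) {
        return !finished.contains(jobId);
    }

    // Call this where the real service would call jobFinished(params, false).
    public void markFinished(int jobId) {
        finished.add(jobId);
    }

    public static void main(String[] args) {
        JobGuard guard = new JobGuard();
        System.out.println(guard.shouldStart(1)); // first delivery: run the work
        guard.markFinished(1);
        System.out.println(guard.shouldStart(1)); // repeat delivery: skip it
    }
}
```

In a real JobService the set would need to survive process restarts (e.g. via SharedPreferences) if duplicates can arrive across process boundaries; the in-memory set above only covers repeats within one process lifetime.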