jobs

Running async jobs in dropwizard, and polling their status

跟風遠走 submitted on 2019-11-30 20:02:32
Question: In dropwizard, I need to implement asynchronous jobs and poll their status. I have two endpoints for this in my resource:

@Path("/jobs")
@Component
public class MyController {
    @POST
    @Produces(MediaType.APPLICATION_JSON)
    public String startJob(@Valid MyRequest request) {
        return "1111";
    }

    @GET
    @Path("/{jobId}")
    @Produces(MediaType.APPLICATION_JSON)
    public JobStatus getJobStatus(@PathParam("jobId") String jobId) {
        return JobStatus.READY;
    }
}

I am considering using Quartz to start the job, but only single
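The submit-then-poll pattern behind such an endpoint pair can be sketched in framework-neutral terms: the POST handler hands the work to a thread pool and immediately returns a generated id, and the GET handler looks the id up and reports the state. A minimal sketch in Python (the class, method names, and status strings are illustrative, not tied to Dropwizard or Quartz):

```python
import uuid
from concurrent.futures import ThreadPoolExecutor


class JobManager:
    """In-memory registry of background jobs, keyed by a generated job id."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._jobs = {}  # job_id -> Future

    def start_job(self, task, *args):
        """POST /jobs: run `task` in the background, return an id at once."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = self._pool.submit(task, *args)
        return job_id

    def get_status(self, job_id):
        """GET /jobs/{jobId}: report the job's current state."""
        future = self._jobs.get(job_id)
        if future is None:
            return "NOT_FOUND"
        if future.done():
            return "FAILED" if future.exception() else "READY"
        return "RUNNING"
```

Usage: `jid = jm.start_job(run_report)` from the POST handler, then the GET handler returns `jm.get_status(jid)` until it reads READY. In a real service the map would need eviction and the statuses would carry results or errors.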

Powershell - how to pre-evaluate variables in a scriptblock for Start-Job

喜欢而已 submitted on 2019-11-30 17:24:12
I want to use background jobs in PowerShell. How can I make variables be evaluated at the moment of ScriptBlock definition?

$v1 = "123"
$v2 = "asdf"
$sb = { Write-Host "Values are: $v1, $v2" }
$job = Start-Job -ScriptBlock $sb
$job | Wait-Job | Receive-Job
$job | Remove-Job

I get empty values printed for $v1 and $v2. How can I have them evaluated in (passed to) the scriptblock, and so to the background job? One way is to use the [scriptblock]::Create method to create the script block from an expandable string using local variables:

$v1 = "123"
$v2 = "asdf"
$sb = [scriptblock]::Create("Write-Host

Getting Started with K8s from Scratch | Application Orchestration and Management: Job & DaemonSet

∥☆過路亽.° submitted on 2019-11-30 15:02:34
1. Job

Where the need comes from. First, let's look at where the need for Job comes from. In K8s, the smallest schedulable unit is the Pod, and we can run task processes directly in a Pod. Doing so raises several problems:

- How do we guarantee that the process in the Pod finishes correctly?
- How do we retry the process after it fails?
- How do we manage multiple tasks that have dependencies between them?
- How do we run tasks in parallel and manage the size of the task queue?

Job: a controller for managing tasks. Let's look at what the Kubernetes Job offers us:

- First, a Kubernetes Job is a controller for managing tasks. It can create one or more Pods with a specified Pod count, and it can monitor whether they run and terminate successfully;
- Based on Pod status, we can set the Job's restart policy and its number of retries;
- We can also use dependencies to guarantee that the next task only runs after the previous one has finished;
- And we can control the parallelism of tasks, using it to bound how many Pods run concurrently and the total number of completions.

Use-case walkthrough. Let's use an example to see how a Job accomplishes the application below.

Job syntax. The figure above shows the simplest YAML for a Job. The main new thing introduced here is a kind called Job; this Job is one of the types handled by the job-controller. The name under metadata specifies the Job's name, and below that, spec.template
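The "simplest YAML" referred to here is in a figure that did not survive extraction; a minimal Job manifest along the lines the text describes (the pi-computing example from the Kubernetes documentation) looks like this:

```yaml
apiVersion: batch/v1
kind: Job                 # the newly introduced kind, handled by job-controller
metadata:
  name: pi                # the Job's name, set under metadata
spec:
  template:               # spec.template holds the Pod template the Job runs
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
  backoffLimit: 4         # number of retries before the Job is marked failed
```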

How to restart scheduled task on runtime with EnableScheduling annotation in spring?

笑着哭i submitted on 2019-11-30 11:43:15
I have been investigating how to change the frequency of a job at runtime with Java 8 and Spring. This question was very useful, but it did not totally solve my issue. I can now configure the date when the job should be executed next. But if I set the delay to 1 year, then I need to wait 1 year before the new configuration is taken into account. My idea would be to stop the scheduled task if the configuration value is changed (so from another class), then recalculate the next time the task should be executed. Perhaps there is an easier way of doing this. Here is the code I have so far.
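The "stop and recalculate" idea described here amounts to cancelling the pending execution and scheduling a new one with the freshly read delay; in Spring that is typically done by keeping the ScheduledFuture returned by a TaskScheduler and cancelling it. The pattern itself, stripped of Spring, can be sketched like this (names are illustrative, not the Spring API):

```python
import threading


class ReschedulableTask:
    """Run `task` repeatedly with a delay that can be changed at runtime.

    Mirrors cancelling a pending ScheduledFuture and rescheduling it;
    a framework-neutral sketch, not a Spring implementation."""

    def __init__(self, task, interval):
        self.task = task            # callable to run
        self.interval = interval    # delay in seconds between runs
        self._timer = None
        self._stopped = False

    def start(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def _run(self):
        if self._stopped:
            return
        self.task()
        self.start()                # reschedule with the current interval

    def reschedule(self, interval):
        """Cancel the pending run and restart with the new delay now,
        so an old one-year delay does not have to elapse first."""
        if self._timer:
            self._timer.cancel()
        self.interval = interval
        if not self._stopped:
            self.start()

    def stop(self):
        self._stopped = True
        if self._timer:
            self._timer.cancel()
```

Calling `reschedule` from the class that changes the configuration takes effect immediately, which is exactly what a fixed `@Scheduled` delay cannot do.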

Powershell: passing parameters to a job

前提是你 submitted on 2019-11-30 05:41:05
Question: I have a script that requires a number of parameters:

param ([string]$FOO="foo", [string]$CFG='\ps\bcpCopyCfg.ps1', [string]$CFROM="none", `
    [string]$CTO="none", [switch]$HELP=$FALSE, [switch]$FULL=$FALSE, [string]$CCOL="none", `
    [string]$CDSQUERY="none", [string]$CMSSRV="none", `
    [string]$CSYBDB="none", [string]$CMSDB="none")

When called from the command prompt, e.g.

powershell .\bcpCopy.ps1 -CFROM earn_n_deduct_actg -CTO fin_earn_n_deduct_actg -CCOL f_edeh_doc_id

everything works fine. I need

Why does hadoop launch only a local job by default?

天大地大妈咪最大 submitted on 2019-11-30 05:27:15
Question: I have written my own hadoop program, and I can run it in pseudo-distributed mode on my own laptop. However, when I put the program on a cluster that can run the hadoop example jar, it launches a local job by default even though I give the HDFS file paths. Below is the output; any suggestions?

./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user

Cron Job in Laravel [duplicate]

北城以北 submitted on 2019-11-30 03:51:59
This question already has an answer here: Cron Job with Laravel 4 (3 answers)

I am trying to set up a cron job for a command I have already created. I am completely new to cron jobs, so I don't really know how they work. Running the command myself in the console works perfectly. All I need is to be able to execute it every 24 hours. I am using Laravel 4; can anyone help? Thanks!

To create a cron job as root, edit your cron file:

[sudo] crontab -e

Add a new line at the end; every line is a cron job:

25 10 * * * php /var/www/<siteName>/artisan <command:name> <parameters>

This will execute the same

Scheduling A Job on AWS EC2

笑着哭i submitted on 2019-11-30 00:30:34
I have a website running on AWS EC2. I need to create a nightly job that generates a sitemap file and uploads the files to the various browsers. I'm looking for a utility on AWS that allows this functionality. I've considered the following:

1) Generate a request to the web server that triggers it to do this task. I don't like this approach because it ties up a server thread and uses CPU cycles on the host.

2) Create a cron job on the machine the web server is running on to execute this task. Again, I don't like this approach because it takes CPU cycles away from the web server.

3) Create another

[0916] Linux Shell Basics 1

你离开我真会死。 submitted on 2019-11-29 20:54:32
8.1 Introduction to the shell
8.2 Command history
8.3 Command completion and aliases
8.4 Wildcards
8.5 Input/output redirection
8.6 Pipes and job control

1. Shell basics

Check whether zsh and ksh are installed on the system.

2. Changing the command-history format

1. Use history to view previously run commands; at most 1000 entries are stored, controlled by a built-in environment variable.
2. Use history -c to clear the command history held in memory (commands typed earlier are wiped).
3. Commands are only saved to .bash_history when you exit the terminal.
4. After changing the HISTSIZE environment variable, restart the terminal or run source /etc/profile.
5. To record when each command was run, reassign the variable: HISTTIMEFORMAT="%Y/%m/%d %H:%M:%S ". To make the environment variable take effect permanently, open the file with vim /etc/profile and type /HISTSI in vim to search for and jump to the setting.
6. To preserve the command history permanently: chattr +a ~/.bash_history. After this attribute change, the file that records the command history can only be appended to, not deleted from.
7. !!, !n, and !word:
!! runs the last command in history;
!n runs command number n in history;
!word runs the most recent command in history that begins with word.

3. Command completion and aliases

1. The Tab key completes commands or paths. Install the package; after installation, restart
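The history-timestamp setting described above can be sketched as a small shell snippet; the /etc/profile location comes from the original text, and ~/.bashrc is a per-user alternative:

```shell
# Show timestamps in `history` output for the current session.
export HISTTIMEFORMAT="%Y/%m/%d %H:%M:%S "

# To make it permanent, append the same line to /etc/profile
# (system-wide) or ~/.bashrc (single user), then reload, e.g.:
#   echo 'export HISTTIMEFORMAT="%Y/%m/%d %H:%M:%S "' >> ~/.bashrc
#   source ~/.bashrc
echo "$HISTTIMEFORMAT"
```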

how to kill hadoop jobs

末鹿安然 submitted on 2019-11-29 19:58:44
I want to kill all my hadoop jobs automatically when my code encounters an unhandled exception. I am wondering what is the best practice for doing this? Thanks

Depending on the version, do the following.

Version < 2.3.0 — kill a hadoop job:

hadoop job -kill $jobId

You can get a list of all jobIds with:

hadoop job -list

Version >= 2.3.0 — kill a hadoop job:

yarn application -kill $ApplicationId

You can get a list of all ApplicationIds with:

yarn application -list

Use of the following commands is deprecated:

hadoop job -list
hadoop job -kill $jobId

Consider using instead:

mapred job -list
mapred job -kill $jobId

Run list to show