jobs

Retrieve result from 'task_id' in Celery from unknown task

Submitted by 我与影子孤独终老i on 2020-01-01 09:08:39
Question: How do I pull the result of a task if I do not know beforehand which task was performed? Here's the setup: given the following source ('tasks.py'): from celery import Celery app = Celery('tasks', backend="db+mysql://u:p@localhost/db", broker = 'amqp://guest:guest@localhost:5672//') @app.task def add(x,y): return x + y @app.task def mul(x,y): return x * y with RabbitMQ 3.3.2 running locally: marcs-mbp:sbin marcstreeter$ ./rabbitmq-server RabbitMQ 3.3.2. Copyright (C) 2007-2014 GoPivotal, Inc. #
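A minimal sketch (not from the post) of the usual approach: celery.result.AsyncResult can rebuild a result handle from nothing but the task id and the app's configured backend, so you do not need to know whether the id came from add() or mul(). The fetch_result helper and the example id below are hypothetical.

```python
# Hedged sketch: look up a result by bare task_id against the backend
# configured in tasks.py above; the helper name and example id are made up.
from celery.result import AsyncResult

from tasks import app  # the Celery app with the db+mysql result backend

def fetch_result(task_id):
    # AsyncResult only needs the id and the app; it does not care which
    # @app.task produced the id.
    res = AsyncResult(task_id, app=app)
    if res.ready():
        return res.get(timeout=5)  # re-raises the task's exception on failure
    return None                    # still pending or running

# Example usage (hypothetical id):
# print(fetch_result("8e37f92c-2f0e-4a9a-9a4a-1d2b3c4d5e6f"))
```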

Wait for kubernetes job to complete on either failure/success using command line

Submitted by 岁酱吖の on 2020-01-01 08:52:50
Question: What is the best way to wait for a Kubernetes Job to be complete? I noticed a lot of suggestions to use: kubectl wait --for=condition=complete job/myjob but I think that only works if the job is successful. If it fails, I have to do something like: kubectl wait --for=condition=failure job/myjob Is there a way to wait for both conditions using wait? If not, what is the best way to wait for a job to either succeed or fail? Answer 1: kubectl wait --for=condition=<condition name is waiting for a
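Since a single kubectl wait invocation watches only one condition, one common workaround is to watch both and take whichever fires first. Below is a hedged Python sketch of that idea, not the canonical answer; it assumes kubectl is on PATH, a Job named myjob, and the batch/v1 condition types Complete and Failed.

```python
# Sketch: run one "kubectl wait" per Job condition and return as soon as
# either condition becomes true. Job name and timeout are assumptions.
import subprocess
import time

def wait_for_job(job="job/myjob", timeout="300s"):
    procs = {
        cond: subprocess.Popen(
            ["kubectl", "wait", f"--for=condition={cond}", job, f"--timeout={timeout}"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        for cond in ("complete", "failed")
    }
    try:
        while True:
            for cond, proc in procs.items():
                if proc.poll() == 0:      # this condition was met
                    return cond           # "complete" or "failed"
            if all(p.poll() is not None for p in procs.values()):
                raise TimeoutError("neither condition was met within the timeout")
            time.sleep(1)
    finally:
        for p in procs.values():          # stop whichever watcher is still running
            if p.poll() is None:
                p.kill()

# result = wait_for_job()  # -> "complete" on success, "failed" on failure
```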

Getting Started with K8s from Scratch | Application Orchestration and Management: Job & DaemonSet

Submitted by 那年仲夏 on 2019-12-29 15:28:16
1. Where the need for Job comes from — background. First, let's look at why Job exists. In K8s the smallest schedulable unit is the Pod, and we can run a task process directly in a Pod, but doing so raises several problems: How do we make sure the process inside the Pod finishes correctly? How do we retry the process when it fails? How do we manage multiple tasks that depend on one another? How do we run tasks in parallel and manage the size of the task queue? Job: a controller for managing tasks. Let's see what a Kubernetes Job gives us: a Job is a controller that manages tasks; it can create one or more Pods with a specified number of Pods and monitor whether they run and terminate successfully; based on Pod status we can configure the Job's restart behaviour and the number of retries; we can use dependencies to ensure the next task only starts after the previous one has finished; and we can control the degree of parallelism, which determines how many Pods run concurrently and the total number of completions. Use-case walkthrough: let's use an example to see how a Job implements the application below. Job syntax: the figure above shows the simplest YAML for a Job; the main new element is a kind called Job, which is one of the types handled by job-controller. Then, in metadata, the name
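The YAML figure the excerpt refers to is not reproduced here, so as a stand-in, here is a hedged sketch of an equivalent Job built with the official Kubernetes Python client. The name, image and command are illustrative, and parallelism, completions and backoff_limit correspond to the parallelism, total completions and retry count discussed above.

```python
# Illustrative Job (not the article's own example) via the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",                      # the new "kind" the article introduces
    metadata=client.V1ObjectMeta(name="hello-job"),
    spec=client.V1JobSpec(
        parallelism=2,               # how many Pods may run at the same time
        completions=4,               # total successful Pods required
        backoff_limit=3,             # retry count after failures
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="hello",
                    image="busybox",
                    command=["sh", "-c", "echo hello from a Job"],
                )],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```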

A Guide to Avoiding Pitfalls with the CronJob Controller

Submitted by 人盡茶涼 on 2019-12-29 15:24:04
Background: As the only role in the company familiar with all the various cloud products, you usually have to deal with every one of them. Most of our cloud infrastructure and services currently run on Alibaba Cloud, and each cloud product has its own management console, which often makes it impossible during operations to tie related products and their associated information together for fast troubleshooting and lookups. For both ops and development, jumping back and forth between multiple consoles to find related information is inefficient and error-prone, so whether we wear the ops hat or the dev hat, we want to consolidate and correlate the company's cloud resources and services to maximize efficiency. As part of this we use CronJobs in a Kubernetes cluster to periodically pull information about certain Alibaba Cloud resources; we ran into some problems along the way, re-read the official CronJob documentation because of them, and recorded the findings here. A brief introduction to CronJob: a CronJob object is like a crontab file in a Linux environment; it periodically creates Jobs on a given schedule (in crontab format). Note: the schedule of every cron task depends on the timezone of the k8s master node. CronJobs are typically useful for creating periodic and recurring tasks, such as scheduled backups or sending emails. Of course, CronJobs in a Kubernetes cluster also have some limitations and quirks that you need to understand in detail to use them well. Note:
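As a rough illustration of the crontab-style schedule described above (not code from the post), here is a minimal CronJob sketch using the official Kubernetes Python client; it assumes a cluster and client version where batch/v1 CronJob is available, and the name, image and schedule are made up.

```python
# Illustrative CronJob: runs a container every night at 02:00,
# interpreted in the control plane's timezone as noted above.
from kubernetes import client, config

config.load_kube_config()

cron = client.V1CronJob(
    api_version="batch/v1",
    kind="CronJob",
    metadata=client.V1ObjectMeta(name="nightly-backup"),
    spec=client.V1CronJobSpec(
        schedule="0 2 * * *",                # plain crontab syntax
        concurrency_policy="Forbid",         # skip a run if the previous one is still going
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="OnFailure",
                        containers=[client.V1Container(
                            name="backup",
                            image="busybox",
                            command=["sh", "-c", "echo pulling cloud resource info"],
                        )],
                    )
                )
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_cron_job(namespace="default", body=cron)
```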

How to kill Hadoop jobs

Submitted by 馋奶兔 on 2019-12-29 10:13:14
Question: I want to kill all my Hadoop jobs automatically when my code encounters an unhandled exception. I am wondering what is the best practice to do it? Thanks Answer 1: Depending on the version, do: version <2.3.0 Kill a Hadoop job: hadoop job -kill $jobId You can get a list of all jobIds by doing: hadoop job -list version >=2.3.0 Kill a Hadoop job: yarn application -kill $ApplicationId You can get a list of all ApplicationIds by doing: yarn application -list Answer 2: Use of the following command is deprecated
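For the "kill automatically on an unhandled exception" part of the question, a hedged sketch (assuming Hadoop >= 2.3.0, the yarn CLI on PATH, and that your code records the ApplicationIds it launches) might look like this:

```python
# Sketch: on any unhandled exception, kill every YARN application this
# process submitted before re-raising. ApplicationIds must be tracked by
# your own submission code; nothing is parsed from "yarn application -list".
import subprocess

submitted_apps = []   # append each ApplicationId as your code launches work

def kill_all(app_ids):
    for app_id in app_ids:
        # Hadoop < 2.3.0 would use: hadoop job -kill <jobId>
        subprocess.run(["yarn", "application", "-kill", app_id], check=False)

def main():
    ...  # submit work, appending each ApplicationId to submitted_apps

if __name__ == "__main__":
    try:
        main()
    except Exception:
        kill_all(submitted_apps)   # clean up before the process dies
        raise
```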

What happens if I am running more subjobs than the number of cores allocated

Submitted by *爱你&永不变心* on 2019-12-25 14:26:58
Question: So I have an sbatch (Slurm job scheduler) script in which I am processing a lot of data through 3 scripts: foo1.sh, foo2.sh and foo3.sh. foo1.sh and foo2.sh are independent and I want to run them simultaneously. foo3.sh needs the outputs of foo1.sh and foo2.sh, so I am building a dependency. And then I have to repeat it 30 times. Let's say: ## Resources config #SBATCH --ntasks=30 #SBATCH --task-per-core=1 for i in {1..30}; do srun -n 1 --jobid=foo1_$i ./foo1.sh & srun -n 1 --jobid=foo2_$i ./foo2

How to access a file inside the logs directory from a Grails application?

Submitted by 喜欢而已 on 2019-12-25 08:04:54
Question: I am simply trying to print the contents of the latest log file inside the logs directory from Grails. command.execute().text is returning empty, so I must be doing something wrong. I appreciate any help! Thanks! def command = "cat \$(ls logs/localhost_access_log* | tail -1)" println command.execute().text Answer 1: ServletContextHolder.servletContext.getRealPath('/') will kind of help, although that's the path to the running app. You may need to do some verification of the path before attempting to get

Stop-AzureVM does not shut down my Azure VM (Runbook)

Submitted by 戏子无情 on 2019-12-25 01:39:48
Question: Hi, I have an Azure VM with Visual Studio installed. When I run the shutdown script (Runbook) from here: https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-Virtual-Machine-0b1fea97 the script status says it completed, but it did not shut down my VM. The output says "Shutting down", but nothing happens. Any suggestions on this? Thanks for your help. Peter Answer 1: I would suggest a few things: a. Most important: go to the ASSETS tab and add proper Windows PowerShell credentials (simply you can use username

Laravel multiple workers running job twice

Submitted by 北战南征 on 2019-12-25 00:41:30
Question: I am using Laravel 5.6, dispatching jobs to a queue and then using Supervisor to start 8 workers on that queue. I was expecting Laravel to know NOT to run the same job twice, but I was surprised to discover that it did. The same job was taken care of by more than one worker, and therefore weird stuff started to happen. The thing is that one year ago I wrote the same mechanism for another Laravel project (but on Laravel version 5.1) and the whole thing worked out of the box. I