celery

How do I enable remote celery debugging in PyCharm?

Submitted by 廉价感情. on 2019-11-29 23:04:17
I'm trying to find some instructions on how to enable PyCharm debugging within my celery processes on a remote machine. The remote machine is running Ubuntu 14.04, and I am running PyCharm 4.x. I've seen information alluding that others have it working, but I haven't been able to locate any proper instructions.

Answer: You can create a Run Configuration to run your celery workers, which then allows you to debug simply by clicking the debug button. Here is how I set that up in PyCharm 5 (the answer's screenshot is not reproduced in this excerpt): you need to set up a remote Python interpreter and then fill in the remaining fields as in that screenshot. Note that the …
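Where a ready-made Run Configuration is not enough, a common workaround is a small script that starts the worker in-process, so any debugger that can launch a Python script can attach to it. A minimal sketch, assuming the Celery app lives in a module named myapp.celery_app (that name is an assumption; exact argv handling varies slightly across Celery versions):

```python
# debug_worker.py -- start a celery worker in-process so an IDE debugger
# can launch and step through it; myapp.celery_app is an assumed module
from myapp.celery_app import app

if __name__ == "__main__":
    # --pool=solo keeps all task execution in one process, which
    # debuggers handle far better than the default prefork pool
    app.worker_main(argv=["worker", "--loglevel=info", "--pool=solo"])
```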

Using Celery for Asynchronous and Scheduled Tasks

Submitted by 萝らか妹 on 2019-11-29 22:11:10
0917 self-summary: using Celery.

1. Official documentation

Celery website: http://www.celeryproject.org/
Official documentation (English): http://docs.celeryproject.org/en/latest/index.html
Official documentation (Chinese): http://docs.jinkan.org/docs/celery/

2. Celery architecture

Celery's architecture consists of three parts: a message broker, task execution units (workers), and a task result store.

Message broker: Celery does not provide a message service itself, but it integrates easily with third-party message brokers, including RabbitMQ, Redis, and others.

Task execution unit: the worker is Celery's unit of task execution; workers run concurrently across the nodes of a distributed system.

Task result store: the task result store holds the results of tasks executed by workers; Celery supports storing results in several backends, including AMQP, Redis, and others.

Use cases:

Asynchronous tasks: submit time-consuming operations to Celery for asynchronous execution, e.g. sending SMS or email, push notifications, audio/video processing.

Scheduled tasks: run something on a schedule, e.g. daily statistics.

3. Installing and configuring Celery

pip install celery

Message broker …
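To make the three-part architecture concrete, here is a minimal sketch wiring one Celery app to Redis as both broker and result store (the module name tasks and the local Redis URLs are assumptions):

```python
# tasks.py -- minimal Celery app: Redis as message broker and result store
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",   # message broker
    backend="redis://localhost:6379/1",  # task result store
)

@app.task
def add(x, y):
    return x + y
```

A worker (the task execution unit) is then started with `celery -A tasks worker --loglevel=info`, and `add.delay(2, 3)` submits a task asynchronously; the return value lands in the result store.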

Building My Website (Day 17): Scheduled Cache Refresh with Celery

Submitted by 泄露秘密 on 2019-11-29 21:50:07
When a website uses a Redis cache, the cache entries have an expiry time, after which their content disappears from the Redis database, and user operations become slow again until the cache is rebuilt. So we need a way to have the Redis database refresh the cached content automatically just as it is about to expire. That way is Celery. I have already summarised its configuration and usage in "Django Framework 17: Using Celery"; following those steps is generally enough, so here I will only implement the scheduled refresh. After configuring Celery as described there, add the following to tasks.py in the app that needs the tasks:

```python
from __future__ import absolute_import

from celery import shared_task
from read_statistics.utils import *

# explicit imports added for clarity; in the original they may instead
# arrive via the star import above
from django.core.cache import cache
from django.db.models import Q


@shared_task
def get_post_list():
    """Cache the blog post list."""
    post_list = Post.objects.filter(Q(display=0) | Q(display__isnull=True))
    # 30 * 60 means 30 seconds times 60, i.e. half an hour
    cache.set('post_list', post_list, 30 * 60)


@shared_task
def get_new_publish():
    """Cache the 15 most recently published posts."""
    new  # (the excerpt is truncated here)
```
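To actually fire these tasks on a schedule, a beat schedule entry along these lines is needed. A hedged sketch: the app object, the dotted task paths, and the 25-minute interval (chosen to refresh just ahead of the 30-minute expiry above) are assumptions, not taken from the post; in Celery 3.x the setting is named CELERYBEAT_SCHEDULE instead:

```python
# refresh the cache every 25 minutes, slightly ahead of the 30-minute expiry
app.conf.beat_schedule = {
    "refresh-post-list": {
        "task": "blog.tasks.get_post_list",   # assumed dotted path
        "schedule": 25 * 60,                  # seconds
    },
    "refresh-new-publish": {
        "task": "blog.tasks.get_new_publish",
        "schedule": 25 * 60,
    },
}
```

The schedule is served by a beat process, e.g. `celery -A proj beat`, alongside the regular worker.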

In-Memory broker for celery unit tests

Submitted by 狂风中的少年 on 2019-11-29 21:30:38
I have a REST API written in Django, with an endpoint that queues a celery task when posting to it. The response contains the task id, which I'd like to use to test that the task is created and to get the result. So, I'd like to do something like:

```python
def test_async_job(self):
    response = self.client.post("/api/jobs/", some_test_data, format="json")
    task_id = response.data["task_id"]
    result = my_task.AsyncResult(task_id).get()
    self.assertEqual(result, ...)
```

I obviously don't want to have to run a celery worker to run the unit tests; I expect to mock it somehow. I can't use CELERY_ALWAYS_EAGER because that …
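Matching the title, one approach (a sketch under assumptions, not the accepted answer: the configuration names are Celery 4+ style, and the bundled testing worker may not exist in the 3.x version the post dates from) is to point the app at kombu's in-memory transport and run a real worker in a background thread for the duration of the test:

```python
from celery.contrib.testing.worker import start_worker

# in-memory broker and in-memory result backend: no external services needed
app.conf.update(
    broker_url="memory://",
    result_backend="cache+memory://",
)

# a real worker consumes from the in-memory queue in a background thread
with start_worker(app, perform_ping_check=False):
    async_result = my_task.delay(42)
    assert async_result.get(timeout=10) == ...
```

Unlike eager mode, this exercises real message round-trips, so AsyncResult(task_id) behaves as it would in production.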

Building a Django System with Dockerfiles (nginx + redis + mongodb + celery)

Submitted by 旧城冷巷雨未停 on 2019-11-29 21:29:59
Background

There was a requirement to dockerize a Django system, for flexible deployment and disaster recovery. The system is built on Django 2.2, uses MongoDB as its database and nginx as the web server, and, since it has some asynchronous tasks, implements them with celery + redis.

The approach taken: starting from an ubuntu:16.04 base image, build an image for each runtime environment from its own Dockerfile. Since Docker encourages one application per image, the Django source code, MongoDB, nginx, and redis each get their own image, and docker-compose orchestrates them all, as in the sketch after this excerpt. (This assumes existing familiarity with docker and docker-compose.)

Implementation

Next comes building the docker images step by step. For Dockerfile templates for the various images there is a very useful website where you can search for a project you are interested in and get its Dockerfile. Assume the ubuntu 16.04 base image is named ubuntu:16.04. First, on the host machine (ubuntu 16.04, user "user"), create a parent folder, say vs, laid out as follows:

mongodb_vs: mongod's data, configuration files, and Dockerfile;
vsapp: the Django system's source code, related configuration files, and Dockerfile;
redis_vs: …
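As a rough illustration of the orchestration described above, a compose sketch only: the service names, build contexts, command, and ports are assumptions inferred from the directory layout, not taken from the original post (the nginx directory name in particular is guessed, since the excerpt truncates before it):

```yaml
# docker-compose.yml -- one service per image, mirroring the vs/ layout
version: "3"
services:
  mongodb:
    build: ./mongodb_vs
  redis:
    build: ./redis_vs
  web:
    build: ./vsapp            # Django app image
    depends_on: [mongodb, redis]
  worker:
    build: ./vsapp            # same image, started as a celery worker
    command: celery -A vsapp worker --loglevel=info   # assumed module name
    depends_on: [redis]
  nginx:
    build: ./nginx_vs         # assumed name; truncated in the excerpt
    ports: ["80:80"]
    depends_on: [web]
```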

Best way to map a generated list to a task in celery

Submitted by 拈花ヽ惹草 on 2019-11-29 21:06:47
Question: I am looking for some advice as to the best way to map a list generated by one task to another task in celery. Let's say I have a task called parse, which parses a PDF document and outputs a list of pages. Each page then needs to be individually passed to another task called feed. This all needs to go inside a task called process. So, one way I could do that is this:

```python
@celery.task
def process():
    pages = parse.s(path_to_pdf).get()
    feed.map(pages)
```

Of course, that is not a good idea, because I am …
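The usual way around calling .get() inside a task is to chain parse into a group of feed signatures instead. A hedged sketch: the dmap helper below is a well-known community pattern rather than part of Celery's API, and app, parse, and feed are assumed to be defined as in the question:

```python
from celery import group, signature

@app.task
def dmap(items, callback):
    # turn the list produced by the previous task in the chain into a
    # group of tasks, one callback invocation per item, and dispatch it
    callback = signature(callback)
    return group(callback.clone([item]) for item in items)()

# usage: no blocking .get() inside any task
# (parse.s(path_to_pdf) | dmap.s(feed.s())).delay()
```

Because dmap only builds and launches signatures, the worker never blocks waiting on another task's result, which is exactly what the .get() version gets wrong.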

Using a Combination of Automated Tests to Guarantee Product Quality

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-11-29 21:06:04
"How do we guarantee quality" has always been a focal point of any product or project, and testing is a critical part of quality control. Drawing on our practical experience, this article distills an effective combination of automated testing techniques.

1. Background

Our testing work has gone through the following four stages.

Stage one: product requirements were reviewed, the development team implemented the features, and the code was handed over to testing in a hurry, with no unit tests. Testers tested manually, with no tools or systems to assist; test cases were written in Excel or in mind maps. At this stage the team only knew the business, and developers cared only about getting features implemented.

Stage two: after requirements review, the development team implemented the features and wrote unit tests for their own code, with team leads reviewing the team's code. Testing was still manual functional testing, but some small helper tools started to assist it. With two or more rounds of testing, regression remained a problem, as did regressing the trunk after branch testing finished; regression across the test, pre-release, canary, and production environments was very inefficient, and the shortcomings of manual testing were especially obvious here.

Stage three: as the business grew, product features had to ship quickly while the system's technology kept iterating, and quality faced unprecedented challenges; throwing people at the problem was not a long-term answer. At this stage the department made many improvements, introducing and building many testing aids, such as project management tools, test case management tools, bug tracking tools, an automated release system, automated packaging, and so on.

We set up a test case management tool to make writing and later tracking test cases easier: how testers are allocated across the first and second rounds, and whether each case's status is passed, suspended, or failed, is clear at a glance. The bug tracking tool is mainly used by developers and testers …

What's the equivalent of Python's Celery project for Java?

Submitted by 情到浓时终转凉″ on 2019-11-29 20:48:26
I am trying to find an equivalent of the Celery project for a Java environment. I have looked at Spring Batch, but are there any better alternatives for distributed task queues? Thanks.

Answer (Adam Gent): What Celery is doing is very much akin to EIP, and SEDA with convenient task scheduling (all you have left to do is add some DB and async HTTP networking and you have got a complete enterprise-quality stack). Basically in Java there is the Spring way, the Java EE way, and the Hadoop way:

Spring: Spring Integration + Spring Batch + RabbitMQ
Java EE: Mule + Quartz or EJB Scheduling + HornetQ
Hadoop: …

AttributeError: 'Flask' object has no attribute 'user_options'

Submitted by 对着背影说爱祢 on 2019-11-29 20:37:36
I am trying to set up this basic example from the following doc: http://flask.pocoo.org/docs/patterns/celery/ But so far I keep getting the error below:

AttributeError: 'Flask' object has no attribute 'user_options'

I am using celery 3.1.15.

```python
from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    # (the excerpt is truncated here; the pattern in the linked doc
    #  continues by returning the celery instance)
```
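For context, this error typically appears when the celery command line is pointed at the Flask application object instead of the Celery instance, since only a Celery app carries the user_options attribute the CLI expects. A hedged sketch of the assumed layout (the file and variable names are illustrative, not from the post):

```python
# app.py -- assumed layout; make_celery is the factory defined above
from flask import Flask

flask_app = Flask(__name__)
flask_app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'  # assumption
celery = make_celery(flask_app)

# Start the worker against the Celery object, not the Flask app:
#     celery -A app:celery worker --loglevel=info
# Running `celery -A app worker` can resolve to the Flask instance, which
# has no `user_options` attribute and raises exactly this AttributeError.
```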

Can a celery worker/server accept tasks from a non-celery producer?

Submitted by 我只是一个虾纸丫 on 2019-11-29 20:29:13
Question: I want to use a comet server written using Java NIO for sending out live updates. When it receives information, I want it to scan the data and send tasks to worker threads via RabbitMQ. Ideally I would like a celery server to sit on the other end of rabbit, managing a pool of worker threads that will handle these tasks. However, from my understanding, celery works by sitting on both ends of rabbitmq, and it essentially takes over the role of producer and consumer by being embedded in both the …
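Although the excerpt cuts off before any answer, the underlying question (can a non-celery producer enqueue celery tasks?) can be illustrated: a producer only needs to publish a message in Celery's task message format, version 1 of which was the default in the Celery 3.x era, to the queue the workers consume. A hedged sketch in Python using pika to stand in for the Java NIO producer; the task name and payload are assumptions:

```python
import json
import uuid

import pika

# Celery's task protocol is plain AMQP + JSON, so any producer (including
# a Java one) can emit it; pika here stands in for the Java NIO server.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="celery", durable=True)  # celery's default queue

message = {
    "id": str(uuid.uuid4()),
    "task": "myapp.tasks.handle_update",  # assumed registered task name
    "args": [{"payload": "..."}],
    "kwargs": {},
}
channel.basic_publish(
    exchange="",
    routing_key="celery",
    body=json.dumps(message),
    properties=pika.BasicProperties(
        content_type="application/json",
        content_encoding="utf-8",
        delivery_mode=2,  # persistent, matching celery's durable queue
    ),
)
connection.close()
```

A worker subscribed to the "celery" queue picks this up exactly as if a celery client had sent it, so the producer side does not need celery at all.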