celery

Can I use Python requests with celery?

a 夏天 submitted on 2019-12-05 18:25:47
I have the following defined in a celery module named tasks.py, with the requests library imported:

```python
@celery.task
def geturl(url):
    res = requests.get(url)
    return res.content
```

Whenever I call the task (either from tasks.py or the REPL) with:

```python
res = geturl.delay('http://www.google.com')
print res.get()
```

here are the log entries on the celery server:

```
[2012-12-19 18:49:58,400: INFO/MainProcess] Starting new HTTP connection (1): www.google.com
[2012-12-19 18:49:58,594: INFO/MainProcess] Starting new HTTP connection (1): www.google.ca
[2012-12-19 18:49:58,801: INFO/MainProcess] Task tasks.geturl[48182400
```
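For reference, a minimal self-contained sketch of such a tasks.py (the broker/backend URLs and app name are assumptions for illustration, not from the question):

```python
import requests
from celery import Celery

# Placeholder broker/backend URLs; any supported broker works the same way.
celery = Celery('tasks',
                broker='amqp://guest@localhost//',
                backend='amqp://')

@celery.task
def geturl(url):
    # Runs inside the worker process; the response body travels back
    # through the result backend, so it must be serializable.
    res = requests.get(url)
    return res.content
```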

celery beat schedule: run task instantly when starting celery beat?

北慕城南 submitted on 2019-12-05 17:43:04
Question: If I create a celery beat schedule using timedelta(days=1), the first task will be carried out only after 24 hours. To quote the celery beat documentation: "Using a timedelta for the schedule means the task will be sent in 30 second intervals (the first task will be sent 30 seconds after celery beat starts, and then every 30 seconds after the last run)." But the fact is that in a lot of situations it's actually important that the scheduler run the task at launch, but I didn't find an option that
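One common workaround (a sketch, not an answer from the thread; the task and schedule names are made up) is to fire the task once from the worker_ready signal and let beat keep the interval afterwards:

```python
from datetime import timedelta

from celery import Celery
from celery.signals import worker_ready

app = Celery('proj', broker='amqp://guest@localhost//')

app.conf.CELERYBEAT_SCHEDULE = {
    'daily': {
        'task': 'proj.daily_task',
        'schedule': timedelta(days=1),  # beat fires this every 24 hours
    },
}

@app.task(name='proj.daily_task')
def daily_task():
    return 'ran'

@worker_ready.connect
def at_start(sender=None, **kwargs):
    # Queue the first run immediately instead of waiting a full day.
    daily_task.delay()
```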

Celery dynamic tasks / hiding Celery implementation behind an interface

你说的曾经没有我的故事 submitted on 2019-12-05 17:28:35
I am trying to figure out how to implement my asynchronous jobs with Celery, without tying them to the Celery implementation. Say I have an interface that accepts objects to schedule, such as callables (or an object that wraps a callable):

```python
class ITaskManager(Interface):
    def schedule(task):
        """Eventually run task."""
```

I might implement it with the threading module:

```python
class ThreadingTaskManager(object):
    def schedule(self, task):
        Thread(target=task).start()  # or similar
```

But it seems this couldn't be done with celery, am I right? Perhaps one, albeit quite ugly, solution might be to define one celery task which dynamically loads
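A sketch of that "single dynamic task" idea (names like run_callable and CeleryTaskManager are illustrative, not from the post): one registered task imports and calls any function by dotted path, so callers never see Celery:

```python
import importlib

from celery import Celery

app = Celery('dispatch', broker='amqp://guest@localhost//')

@app.task
def run_callable(dotted_path, *args, **kwargs):
    # Resolve "pkg.module.func" to the function and call it in the worker.
    module_name, func_name = dotted_path.rsplit('.', 1)
    func = getattr(importlib.import_module(module_name), func_name)
    return func(*args, **kwargs)

class CeleryTaskManager(object):
    def schedule(self, dotted_path, *args, **kwargs):
        # The interface accepts a dotted path, hiding the Celery machinery.
        run_callable.delay(dotted_path, *args, **kwargs)
```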

Parsing Markdown into HTML with Go

可紊 submitted on 2019-12-05 17:25:21
1. The code

```go
package main

import (
    "fmt"
    "github.com/microcosm-cc/bluemonday"
    "github.com/russross/blackfriday"
    "io/ioutil"
    "os"
)

func ReadAll(filePth string) ([]byte, error) {
    f, err := os.Open(filePth)
    if err != nil {
        return nil, err
    }
    return ioutil.ReadAll(f)
}

func MarkdownToHTML(md string) string {
    myHTMLFlags := 0 |
        blackfriday.HTML_USE_XHTML |
        blackfriday.HTML_USE_SMARTYPANTS |
        blackfriday.HTML_SMARTYPANTS_FRACTIONS |
        blackfriday.HTML_SMARTYPANTS_DASHES |
        blackfriday.HTML_SMARTYPANTS_LATEX_DASHES
    myExtensions := 0 |
        blackfriday.EXTENSION_NO_INTRA_EMPHASIS |
        blackfriday.EXTENSION_TABLES |
```
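The excerpt is cut off partway through the extension flags. Judging from the import, bluemonday is there because blackfriday performs no sanitization itself: its rendered HTML is normally passed through a policy such as bluemonday.UGCPolicy().SanitizeBytes(...) before serving user-supplied markdown.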

Celery Storing unrecoverable task failures for later resubmission

妖精的绣舞 submitted on 2019-12-05 16:45:00
I'm using the djkombu transport for my local development, but I will probably be using amqp (RabbitMQ) in production. I'd like to be able to iterate over failures of a particular type and resubmit them. This would cover cases like something failing on a server, or an edge-case bug triggered by some new variation in the data, so I could be resubmitting jobs up to 12 hours later, after a bug is fixed or a third-party site is back up. My question is: is there a way to access old failed jobs via the result backend and simply resubmit them with the same params, etc.? You can probably access old jobs using:
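The excerpt cuts off there. One possible pattern (entirely a sketch, not the thread's answer; the flat-file persistence is a stand-in for a real database table) is a custom base task that records each failure with enough context to replay it, since the result backend alone does not keep the original arguments:

```python
import json

from celery import Celery, Task

app = Celery('proj', broker='amqp://guest@localhost//')

class ResubmittableTask(Task):
    abstract = True

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Persist whatever is needed to replay the call later.
        with open('failed_jobs.log', 'a') as f:
            f.write(json.dumps({'task': self.name, 'args': args,
                                'kwargs': kwargs}) + '\n')

@app.task(base=ResubmittableTask)
def process(item):
    return item

def resubmit_all(path='failed_jobs.log'):
    # Replay every recorded failure with its original arguments.
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            app.tasks[rec['task']].apply_async(rec['args'], rec['kwargs'])
```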

Consumer Connection error with django and celery+rabbitmq?

自作多情 submitted on 2019-12-05 15:59:38
Question: I'm trying to set up celeryd with django and rabbit-mq. So far, I've done the following:

- Installed celery from pip
- Installed rabbitmq via the debs available from their repository
- Added a user and vhost to rabbitmq via rabbitmqctl, as well as permissions for that user
- Started the rabbitmq-server
- Installed django-celery via pip
- Set up django-celery, including its tables
- Configured the various things in settings.py (BROKER_HOST, BROKER_PORT, BROKER_USER, BROKER_PASSWORD, BROKER_VHOST, as well as
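For context, a sketch of the settings.py pieces the asker describes (the user/vhost values are placeholders matching the rabbitmqctl steps above, not taken from the question):

```python
# settings.py (sketch) -- django-celery era configuration
import djcelery
djcelery.setup_loader()

BROKER_HOST = 'localhost'
BROKER_PORT = 5672
BROKER_USER = 'celeryuser'      # the user added via rabbitmqctl
BROKER_PASSWORD = 'celerypass'
BROKER_VHOST = 'celeryvhost'    # the vhost added via rabbitmqctl

INSTALLED_APPS = (
    'djcelery',
    # ... the project's other apps ...
)
```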

celery

别等时光非礼了梦想. submitted on 2019-12-05 15:57:34
Celery official resources

Celery website: http://www.celeryproject.org/
Celery documentation (English): http://docs.celeryproject.org/en/latest/index.html
Celery documentation (Chinese): http://docs.jinkan.org/docs/celery/

Celery architecture

Celery's architecture consists of three parts: the message broker, the task execution units (workers), and the task result store (backend).

Message broker: Celery provides no messaging service of its own, but it integrates easily with third-party message brokers, including RabbitMQ, Redis, and others.

Task execution unit: the worker is Celery's unit of task execution; workers run concurrently across the nodes of a distributed system.

Task result store: the backend stores the results of the tasks that workers execute; Celery supports several ways of storing results, including AMQP, Redis, and more.

Use cases

Asynchronous tasks: hand time-consuming operations to Celery to execute asynchronously, e.g. sending SMS or email, pushing notifications, processing audio and video.
Scheduled tasks: run something on a schedule, e.g. daily statistics.

Installing and configuring Celery

pip install celery

Message broker: RabbitMQ/Redis
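As a minimal sketch of how those three parts meet in code (the Redis URLs and task name are assumptions for illustration, not from the post):

```python
from celery import Celery

# Redis serves as both the message broker and the task result store here;
# any supported broker/backend combination works the same way.
app = Celery('demo',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

@app.task
def send_sms(phone, text):
    # Stands in for a time-consuming operation handed off to a worker.
    return 'sent %r to %s' % (text, phone)

# Producer side: send_sms.delay('13800000000', 'hi') returns an AsyncResult;
# calling .get() on it reads the outcome from the result backend.
```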

#SORA# Studying Celery's native configuration file

ε祈祈猫儿з submitted on 2019-12-05 15:29:37
PS: Baidu is xxx's lapdog.

Back to the topic: today I looked into using a single .py file as Celery's configuration file, again working from yesterday's example: http://my.oschina.net/hochikong/blog/396079

We pull the configuration items out of celery.py and create celeryconfig.py in the proj directory with the following contents:

```python
CELERY_TASK_RESULT_EXPIRES = 3600
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_RESULT_SERIALIZER = 'json'
```

Then modify celery.py:

```python
from __future__ import absolute_import
from celery import Celery

app = Celery('proj',
             broker='amqp://guest@localhost//',
             backend='amqp://guest@localhost//',
             include=['proj.agent'])

#app.conf.update(
#    CELERY_TASK_RESULT_EXPIRES=3600,
#    CELERY_TASK_SERIALIZER='json',
#    CELERY_ACCEPT_CONTENT=['json'],
#
```
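The excerpt stops mid-file. Presumably the commented-out app.conf.update(...) block is then replaced by loading the new module, which Celery does with config_from_object; a sketch of the finished celery.py:

```python
from __future__ import absolute_import
from celery import Celery

app = Celery('proj',
             broker='amqp://guest@localhost//',
             backend='amqp://guest@localhost//',
             include=['proj.agent'])

# Load the settings from proj/celeryconfig.py instead of the
# commented-out app.conf.update(...) block.
app.config_from_object('proj.celeryconfig')

if __name__ == '__main__':
    app.start()
```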

#SORA# A small problem while studying Celery

早过忘川 submitted on 2019-12-05 15:29:26
Sora's RPC mechanism is planned to be handled with Celery (celery + rabbitmq). I recently started studying its documentation and tried writing a bit of code:

```python
from celery import Celery

app = Celery('cagent',
             backend='redis://localhost',
             broker='amqp://guest@localhost//')

#app.conf.update(
#    CELERY_TASK_SERIALIZER='json',
#    CELERY_ACCEPT_CONTENT=['json'],  # Ignore other content
#    CELERY_RESULT_SERIALIZER='json',
#)

app.conf.CELERY_TASK_SERIALIZER = 'json'
app.conf.CELERY_ACCEPT_CONTENT = ['json']
app.conf.CELERY_RESULT_SERIALIZER = 'json'

@app.task
def add(x, y):
    return x + y
```

For settings such as CELERY_ACCEPT_CONTENT, you can simply centralize the configuration in a python module, you can write the configuration into the program as in this example, or you can use configparser to read a .conf-style file. When I tried to put app.conf.CELERY
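A sketch of that third, configparser-based option (the file name, section, and keys are made up for illustration):

```python
from configparser import ConfigParser

from celery import Celery

app = Celery('cagent',
             backend='redis://localhost',
             broker='amqp://guest@localhost//')

# Read a hypothetical celery.conf with a [celery] section holding
# task_serializer / result_serializer / accept_content entries.
parser = ConfigParser()
parser.read('celery.conf')

app.conf.update(
    CELERY_TASK_SERIALIZER=parser.get('celery', 'task_serializer'),
    CELERY_RESULT_SERIALIZER=parser.get('celery', 'result_serializer'),
    CELERY_ACCEPT_CONTENT=[parser.get('celery', 'accept_content')],
)
```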

#SORA# Celery practice, part 1

ぃ、小莉子 submitted on 2019-12-05 15:29:14
This time I work through the Next Steps section of the Celery docs.

First create a python module:

```
mkdir proj
cd proj
touch __init__.py
```

Create celery.py in the proj directory:

```python
from __future__ import absolute_import
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERY_TASK_SERIALIZER='json',
    CELERY_ACCEPT_CONTENT=['json'],
    CELERY_RESULT_SERIALIZER='json'
)

if __name__ == '__main__':
    app.start()
```

Explanation:

app = Celery('proj') names the app 'proj'; see the Main Name section of the User Guide for details.
broker='amqp://' specifies the broker; RabbitMQ is used here
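The excerpt ends before showing proj/tasks.py, the module named by include=['proj.tasks']; a minimal sketch of it in the spirit of the docs' Next Steps example:

```python
# proj/tasks.py (a sketch): the module that include=['proj.tasks'] loads.
from __future__ import absolute_import

from proj.celery import app

@app.task
def add(x, y):
    return x + y
```

A worker for the package would then be started from the directory above proj, e.g. with celery -A proj worker -l info.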