twisted

Twisted starting/stopping factory/protocol less noisy log messages

醉酒当歌 submitted on 2019-11-29 10:32:11
Is there a way to tell twistd not to log every factory and protocol start and stop? I use many types of protocols and perform a lot of connections, so my log file grows quickly. I'm looking for a simple way to disable those messages. Regards.

You can set the noisy attribute of a factory to False to prevent it from logging these messages. See also http://twistedmatrix.com/trac/ticket/4021 which will probably be resolved by the next Twisted release. For example, here's a program with two clients, but only one will log its start/stop messages:

import sys
from twisted.internet import reactor,
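
The answer's code is cut off above. For reference, here is a self-contained sketch of the same idea; the ports and the throwaway protocol are made up for illustration and are not the answer's original code:

import sys

from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory, Protocol
from twisted.python import log


class Quick(Protocol):
    def connectionMade(self):
        self.transport.loseConnection()


log.startLogging(sys.stdout)

quiet = ClientFactory()
quiet.protocol = Quick
quiet.noisy = False      # suppresses the "Starting factory"/"Stopping factory" lines

loud = ClientFactory()
loud.protocol = Quick    # noisy defaults to True, so this factory still logs them

reactor.connectTCP('127.0.0.1', 10000, quiet)
reactor.connectTCP('127.0.0.1', 10001, loud)
reactor.callLater(2, reactor.stop)
reactor.run()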

Non-blocking file access with Twisted

一世执手 submitted on 2019-11-29 09:04:47
I'm trying to figure out whether there is a de facto pattern for file access in Twisted. Many of the examples I've looked at (twisted.python.log, twisted.persisted.dirdbm, twisted.web.static) don't actually seem to worry about blocking on file access. It seems like there should be some obvious interface, probably inheriting from abstract.FileDescriptor, that all file access goes through as a producer/consumer. Have I missed something, or is it just that the primary use of Twisted in asynchronous programming is networking, and this hasn't really been worked out for other file
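
No answer is included in the excerpt, but one common workaround is to push blocking file I/O onto the reactor's thread pool with deferToThread. A minimal sketch, with a made-up file path:

import sys

from twisted.internet import reactor, threads
from twisted.python import log


def read_file(path):
    # ordinary blocking read, executed in a worker thread off the reactor
    with open(path, 'rb') as f:
        return f.read()


def done(data):
    log.msg('read %d bytes' % len(data))


log.startLogging(sys.stdout)
d = threads.deferToThread(read_file, '/etc/hosts')  # hypothetical path
d.addCallback(done)
d.addErrback(log.err)
d.addBoth(lambda _: reactor.stop())
reactor.run()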

How do you create a simple Google Talk Client using the Twisted Words Python library?

妖精的绣舞 submitted on 2019-11-29 06:18:28
Question: I am interested in making a Google Talk client using Python and would like to use the Twisted Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk. Has anybody had any luck with this? Would you mind documenting a brief tutorial? As a simple task, I'd like to create a client/bot that tracks the online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account
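
No answer is included in the excerpt. As a starting point only, here is a bare-bones Twisted Words login sketch; the account, the password, and the assumption that talk.google.com still accepts XMPP on port 5222 are all placeholders (Google Talk has since been retired), so treat this purely as an illustration of the library, not a working Google Talk client:

from twisted.internet import reactor
from twisted.words.protocols.jabber import client, jid, xmlstream
from twisted.words.xish import domish

me = jid.JID('myaccount@gmail.com/twistedwords')        # hypothetical account
factory = client.XMPPClientFactory(me, 'my-password')   # hypothetical password


def authenticated(xs):
    # announce availability; contacts' presence stanzas then arrive on this stream
    xs.send(domish.Element((None, 'presence')))


def failed(failure):
    print('initialization failed:', failure)
    reactor.stop()


factory.addBootstrap(xmlstream.STREAM_AUTHD_EVENT, authenticated)
factory.addBootstrap(xmlstream.INIT_FAILED_EVENT, failed)
reactor.connectTCP('talk.google.com', 5222, factory)
reactor.run()

Tracking online time would then amount to adding an observer for incoming presence stanzas and timestamping the available/unavailable transitions per contact.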

How to schedule Scrapy crawl execution programmatically

瘦欲@ submitted on 2019-11-29 05:16:54
I want to create a scheduler script to run the same spider multiple times in a sequence. So far I have the following:

#!/usr/bin/python3
"""Scheduler for spiders."""
import time
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from my_project.spiders.deals import DealsSpider

def crawl_job():
    """Job to start spiders."""
    settings = get_project_settings()
    process = CrawlerProcess(settings)
    process.crawl(DealsSpider)
    process.start()  # the script will block here until the end of the crawl

if __name__ == '__main__':
    while True:
        crawl_job()
        time.sleep(30)
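
The excerpt stops before the actual problem is stated, but a loop around CrawlerProcess.start() like this typically fails on the second iteration because Twisted's reactor cannot be restarted. A common alternative is to keep one reactor running and reschedule the crawl with CrawlerRunner; a sketch, reusing the DealsSpider import from the question and the 30-second interval:

from twisted.internet import defer, reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

from my_project.spiders.deals import DealsSpider

configure_logging()
runner = CrawlerRunner(get_project_settings())


@defer.inlineCallbacks
def crawl_job():
    yield runner.crawl(DealsSpider)    # completes without stopping the reactor
    reactor.callLater(30, crawl_job)   # schedule the next run 30 s after this one ends


crawl_job()
reactor.run()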

Getting py2exe to work with zope.interface

泄露秘密 submitted on 2019-11-29 04:51:23
I have a Python app based on Twisted and PyGTK. Twisted itself depends on zope.interface, and I don't import it directly. Unfortunately, when I try to run my app, the following error ends up in the error log:

Traceback (most recent call last):
  File "tasks.py", line 4, in <module>
  File "ui\__init__.pyc", line 14, in <module>
  File "twisted\python\log.pyc", line 17, in <module>
ImportError: No module named zope.interface
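
No answer is included in the excerpt, but the usual fix is to tell py2exe explicitly about zope.interface, which its module finder misses because it lives in a namespace package. A sketch of the relevant setup.py options; the entry point tasks.py is taken from the traceback, everything else is an assumption:

from distutils.core import setup
import py2exe  # importing it registers the py2exe command with distutils


setup(
    windows=['tasks.py'],
    options={
        'py2exe': {
            # zope.interface is in a namespace package, so list it explicitly
            'includes': ['zope.interface'],
        }
    },
)

If the import still fails, another frequently reported workaround is to drop an empty __init__.py into the installed zope/ directory so py2exe treats it as a regular package.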

Using Twisted's twisted.web classes, how do I flush my outgoing buffers?

大城市里の小女人 submitted on 2019-11-29 04:08:40
Question: I've made a simple HTTP server using Twisted which sends the Content-Type: multipart/x-mixed-replace header. I'm using this to test an HTTP client that I want to set up to accept a long-lived stream. The problem is that my client's request hangs until the http.Request calls self.finish(), and then it receives all the multipart documents at once. Is there a way to manually flush the output buffers down to the client? I'm assuming this is why I'm not receiving the individual multipart
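
As far as I know, twisted.web's Request.write() hands each chunk to the transport as it is called rather than holding it until finish(), so there is no separate flush call on the server side; if everything still arrives at once, the buffering is more likely in the client. A sketch of a resource that streams multipart parts without ever finishing the request; the boundary name, port and payload are made up:

from twisted.internet import reactor
from twisted.web import resource, server


class Stream(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b'Content-Type',
                          b'multipart/x-mixed-replace; boundary=frame')
        self._send_part(request, 0)
        return server.NOT_DONE_YET          # keep the response open; no finish() yet

    def _send_part(self, request, n):
        body = b'part %d\r\n' % n
        request.write(b'--frame\r\nContent-Type: text/plain\r\n\r\n' + body)
        reactor.callLater(1, self._send_part, request, n + 1)


reactor.listenTCP(8080, server.Site(Stream()))
reactor.run()

A real implementation would also hook request.notifyFinish() so the callLater loop stops when the client disconnects.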

(Repost) Twisted, Part 4: A Twisted-powered Poetry Client

倾然丶 夕夏残阳落幕 submitted on 2019-11-29 03:21:20
The first Twisted-powered poetry client

Although Twisted is most often used to write server code, to keep things as simple as possible at the start we begin with a simple client. Let's try a client that uses Twisted. The source code is in twisted-client-1/get-poetry.py. First, start three servers, just as before:

python blocking-server/slowpoetry.py --port 10000 poetry/ecstasy.txt --num-bytes 30
python blocking-server/slowpoetry.py --port 10001 poetry/fascination.txt
python blocking-server/slowpoetry.py --port 10002 poetry/science.txt

Then run the client:

python twisted-client-1/get-poetry.py 10000 10001 10002

You will see output like this on the client's command line:

Task 1: got 60 bytes of poetry from 127.0.0.1:10000
Task 2: got 10 bytes of poetry from 127.0.0.1:10001
Task 3: got 10 bytes of poetry from
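
The repost cuts off before the client's code. For orientation only, here is a minimal sketch of what a Protocol/ClientFactory based poetry client could look like; it is not the tutorial's get-poetry.py (which at this early stage still drives sockets through the reactor directly), and the single hard-coded port is an assumption:

from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory, Protocol


class PoetryProtocol(Protocol):
    def connectionMade(self):
        self.poem = b''

    def dataReceived(self, data):
        self.poem += data
        peer = self.transport.getPeer()
        print('Task %d: got %d bytes of poetry from %s:%d'
              % (self.factory.task_num, len(data), peer.host, peer.port))

    def connectionLost(self, reason):
        self.factory.poem_finished(self.poem)


class PoetryClientFactory(ClientFactory):
    protocol = PoetryProtocol

    def __init__(self, task_num):
        self.task_num = task_num

    def poem_finished(self, poem):
        print('Task %d: poem is %d bytes long' % (self.task_num, len(poem)))
        reactor.stop()      # a real client would wait until every poem is done


reactor.connectTCP('127.0.0.1', 10000, PoetryClientFactory(1))
reactor.run()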

Log levels in Twisted's logging system

本小妞迷上赌 submitted on 2019-11-29 03:20:27
First you need to write a log observer class, then give the observer a level so it can filter the messages you want:

import logging
from twisted.python import log

class LevelFileLogObserver(log.FileLogObserver):

    def __init__(self, f, level=logging.INFO):
        log.FileLogObserver.__init__(self, f)
        self.logLevel = level

    def emit(self, eventDict):
        if eventDict['isError']:
            level = logging.ERROR
        elif 'level' in eventDict:
            level = eventDict['level']
        else:
            level = logging.INFO
        if level >= self.logLevel:
            log.FileLogObserver.emit(self, eventDict)

Then you have to register it:

from twisted.python import logfile
f = logfile.LogFile("someFile.log", '/some/path/', rotateLength=1000
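
The registration snippet above is cut off mid-call. A plausible completion, plus how messages would then be emitted with a level, might look like this (the file name, path and rotateLength simply repeat the truncated line; the chosen threshold is an assumption):

import logging

from twisted.python import log, logfile

f = logfile.LogFile("someFile.log", '/some/path/', rotateLength=1000)
observer = LevelFileLogObserver(f, logging.DEBUG)
log.addObserver(observer.emit)

log.msg('fine-grained detail', level=logging.DEBUG)  # dropped if logLevel is higher
log.msg('normal progress message')                   # no level key, defaults to INFO
log.err(RuntimeError('something broke'))             # isError is set, maps to ERROR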

Daily log rotation in Twisted

大兔子大兔子 submitted on 2019-11-29 03:20:14
In the past, logging on our Twisted servers always rotated the log files by size, but now one service needs its logs rotated by day, so I studied how Twisted handles logging and finally got it working. I'm writing the analysis down here to help anyone who hits the same problem.

1. A brief overview of Twisted logging

Twisted provides three logging functions through twisted.python.log: msg, err and startLogging. startLogging opens a file object and starts logging, msg records ordinary messages, and err records errors. Through twisted.python.logfile it provides three classes, BaseLogFile, LogFile and DailyLogFile, that work together with startLogging.

If the program is launched with twistd, Twisted automatically uses LogFile with startLogging to record and rotate the log. The analysis below assumes the program is launched with twistd and runs in the background.

2. First attempts at daily rotation

(1) Using twisted.python.log

As introduced above, we can call startLogging ourselves to write the log to a file of our choosing, then periodically check whether a new day has begun and, if so, rotate the log ourselves. This approach requires us to detect the rollover and switch log files by hand, which is clearly not the best option.

(2) Using twisted.python
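
The repost is cut off before the article's solution, but the standard way to make a twistd-launched application rotate by day is to hand a DailyLogFile-backed observer to the application object in the .tac file. A sketch, with the application name and log directory made up:

from twisted.application import service
from twisted.python.log import FileLogObserver, ILogObserver
from twisted.python.logfile import DailyLogFile

application = service.Application("myapp")
log_file = DailyLogFile("myapp.log", "/var/log/myapp")   # hypothetical path
application.setComponent(ILogObserver, FileLogObserver(log_file).emit)

Started with twistd -y myapp.tac, twistd then uses this observer instead of its default size-rotated LogFile.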

How do I catch errors with scrapy so I can do something when I get User Timeout error?

人走茶凉 submitted on 2019-11-29 03:16:55
Question: ERROR: Error downloading <GET URL_HERE>: User timeout caused connection failure. I get this error every now and then when using my scraper. Is there a way I can catch this issue and run a function when it happens? I can't find out how to do it anywhere online.

Answer 1: What you can do is define an errback in your Request instances:

errback (callable) – a function that will be called if any exception was raised while processing the request. This includes pages that failed with 404 HTTP errors and
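
A sketch of what that looks like inside a spider; the spider name, the URL and the specific exceptions checked are assumptions for illustration:

import scrapy
from twisted.internet.error import TCPTimedOutError, TimeoutError


class MySpider(scrapy.Spider):
    name = 'my_spider'

    def start_requests(self):
        yield scrapy.Request('http://www.example.com/deals',
                             callback=self.parse,
                             errback=self.on_error)

    def parse(self, response):
        self.logger.info('got %s', response.url)

    def on_error(self, failure):
        # called when handling the request raised an exception
        if failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error('user timeout on %s', request.url)
            # e.g. re-queue a copy of the request here, or record the URL for later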