twisted

python - log the same thing in two files twisted

偶尔善良 · Submitted 2021-02-08 10:50:39

Question: I'd like to know if there's a way to log the same thing to two files in Twisted. Let's say this is the code; now I'd like the same output that goes to "logs.log" to also be redirected to sys.stdout.

    if __name__ == "__main__":
        log.startLogging(open("logs.log", 'a'))
        log.startLogging(sys.stdout)

Answer 1: This is absolutely possible, and easier than ever before if you're on the latest version of Twisted.

    from sys import stdout
    from twisted.logger import Logger, textFileLogObserver, globalLogBeginner
    # start …
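A minimal sketch of where that truncated answer is heading, assuming the modern twisted.logger API; the file name and logger namespace are illustrative:

    from sys import stdout

    from twisted.logger import Logger, globalLogBeginner, textFileLogObserver

    # Begin global logging with two observers: one writing to logs.log,
    # one writing to standard output. Every log event goes to both.
    globalLogBeginner.beginLoggingTo([
        textFileLogObserver(open("logs.log", "a")),
        textFileLogObserver(stdout),
    ])

    log = Logger(namespace="example")
    log.info("this line ends up in logs.log and on stdout")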

Nonblocking Scrapy pipeline to database

允我心安 · Submitted 2021-02-06 11:56:14

Question: I have a web scraper in Scrapy that collects data items, and I want to insert them into a database asynchronously as well. For example, I have a transaction that inserts some items into my DB using SQLAlchemy Core:

    def process_item(self, item, spider):
        with self.connection.begin() as conn:
            conn.execute(insert(table1).values(item['part1']))
            conn.execute(insert(table2).values(item['part2']))

I understand that it's possible to use SQLAlchemy Core asynchronously with Twisted via alchimia. The …
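The question asks about alchimia; a simpler sketch of the same non-blocking idea, using Twisted's thread pool instead of alchimia, is to run the blocking SQLAlchemy transaction with deferToThread and return the Deferred from process_item (Scrapy waits on a returned Deferred without blocking the reactor). self.connection, table1, table2 and the item keys are taken from the question; the class name is illustrative:

    from sqlalchemy import insert
    from twisted.internet.threads import deferToThread

    class DatabasePipeline:
        def process_item(self, item, spider):
            # Push the blocking DB work onto a worker thread and hand the
            # resulting Deferred back to Scrapy.
            return deferToThread(self._insert_item, item)

        def _insert_item(self, item):
            # Runs in a thread-pool thread, so blocking here is fine.
            with self.connection.begin() as conn:
                conn.execute(insert(table1).values(item['part1']))
                conn.execute(insert(table2).values(item['part2']))
            return item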

Scrapy run multiple spiders from a script

雨燕双飞 · Submitted 2021-01-29 15:53:31

Question: A follow-up question: I have a script from which I want to start my Scrapy spiders. For that I used a solution from another Stack Overflow post to pull in the project settings so I don't have to override them manually. So far I'm able to start two crawlers from outside the Scrapy project:

    from scrapy_bots.update_Database.update_Database.spiders.m import M
    from scrapy_bots.update_Database.update_Database.spiders.p import P
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project …
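A runnable sketch of the usual pattern here, assuming the truncated import is get_project_settings and reusing the spider classes from the question: a single CrawlerProcess can schedule several spiders, and start() blocks until they all finish.

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    # Spider classes imported exactly as in the question.
    from scrapy_bots.update_Database.update_Database.spiders.m import M
    from scrapy_bots.update_Database.update_Database.spiders.p import P

    def run_spiders():
        # Load the project's settings.py so nothing has to be overridden by hand.
        process = CrawlerProcess(get_project_settings())
        process.crawl(M)
        process.crawl(P)
        process.start()  # blocks until both spiders are done

    if __name__ == "__main__":
        run_spiders()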

How to send data manually using twisted

此生再无相见时 · Submitted 2021-01-28 00:47:40

Question: I'm new to the Twisted framework. I know there are many callback functions that trigger automatically when the connection is made or lost, but I have no idea how to send data outside of those callbacks. For example, I want to add a method custom_write() for sending data out:

    def custom_write(self, data):
        self.transport.write(data)

And trigger the function in my main() method:

    def main():
        try:
            p_red("I'm Client")
            f = EchoFactory()
            reactor.connectTCP("localhost", 8000, f)

… by the reactor …
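A self-contained sketch of one way to do this, assuming the EchoFactory name from the question: put custom_write() on the protocol and drive it from inside the reactor, because the transport does not exist until connectionMade fires (reactor.callLater handles the later, "manual" send):

    from twisted.internet import reactor
    from twisted.internet.protocol import ClientFactory, Protocol

    class EchoClient(Protocol):
        def connectionMade(self):
            # The transport exists only from this point on, so the first
            # manual send happens here...
            self.custom_write(b"hello")
            # ...and a later one is scheduled through the reactor.
            reactor.callLater(2, self.custom_write, b"hello again")

        def custom_write(self, data):
            self.transport.write(data)

    class EchoFactory(ClientFactory):
        def buildProtocol(self, addr):
            return EchoClient()

    def main():
        reactor.connectTCP("localhost", 8000, EchoFactory())
        reactor.run()

    if __name__ == "__main__":
        main()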

Accessing Python interpreter from a PyInstaller bundle

吃可爱长大的小学妹 · Submitted 2020-12-09 03:46:28

Question: I have a program (suppose it is called "PROG") that spawns Pronsole.py (3D printing). If it is just interpreted by Python, it works fine on GNU/Linux and Windows. This is the line that works:

    self.pronTranspProc = reactor.spawnProcess(self.pronProtProc, pythonPath, [pythonPath, "pronsole.py"], os.environ, self.pronPathPrintrun)

When Python is the normal interpreter, pythonPath will just be the path to that interpreter, since it is sys.executable. But when a bundle is made with PyInstaller …
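One common workaround, sketched under the assumption that a system Python is installed and on PATH: detect the frozen (PyInstaller) case, where sys.executable points at PROG itself rather than at a Python interpreter, and locate a real interpreter to pass to spawnProcess instead. The helper name is illustrative:

    import shutil
    import sys

    def find_python_path():
        # Under PyInstaller, sys.frozen is set and sys.executable is the
        # bundled PROG executable, not a Python interpreter.
        if getattr(sys, "frozen", False):
            for name in ("python3", "python"):
                found = shutil.which(name)
                if found:
                    return found
            raise RuntimeError("no Python interpreter found to run pronsole.py")
        # Normal interpreted run: sys.executable is the interpreter itself.
        return sys.executable

    # pythonPath = find_python_path() would then feed the reactor.spawnProcess
    # call shown in the question.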
