aiohttp

aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host stackoverflow.com:443 ssl:default [Connect call failed ('151.101.193.69', 443)]

你离开我真会死。 Submitted on 2020-08-15 12:11:06
Question: Here is my code:

    import asyncio
    import aiohttp

    async def main():
        loop = asyncio.get_event_loop()
        url = "https://stackoverflow.com/"
        async with aiohttp.ClientSession(loop=loop) as session:
            async with session.get(url, timeout=30) as resp:
                print(resp.status)

    asyncio.run(main())

If I run it on my computer everything works, but if I run it on PythonAnywhere I get this error:

    Traceback (most recent call last):
      File "/home/0dminnimda/.local/lib/python3.8/site-packages/aiohttp/connector.py", line
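The preview cuts off before any answer, but a common cause on PythonAnywhere is that free accounts can only reach external sites through the platform's HTTP proxy, which aiohttp ignores by default. A minimal sketch of that fix, assuming the proxy is indeed the culprit: pass trust_env=True so the session honors the http_proxy/https_proxy environment variables (the deprecated loop= argument is also dropped).

    import asyncio
    import aiohttp

    async def main():
        url = "https://stackoverflow.com/"
        # trust_env=True makes aiohttp read the HTTP_PROXY/HTTPS_PROXY
        # environment variables that proxied hosts like PythonAnywhere set.
        async with aiohttp.ClientSession(trust_env=True) as session:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
                print(resp.status)

    asyncio.run(main())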

How to run an aiohttp web application in a secondary thread

这一生的挚爱 Submitted on 2020-07-22 05:30:26
Question: The following code, taken from the aiohttp docs (https://docs.aiohttp.org/en/stable/), does work:

    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([web.get('/', handle), web.get('/{name}', handle)])

    if __name__ == '__main__':
        web.run_app(app)

But having the webserver hijack the main thread is not acceptable: the webserver should be on a separate, non-main thread.
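web.run_app() installs signal handlers, which only works in the main thread. A sketch of one way around this (host and port are placeholders, not from the post): drive the server from a worker thread with its own event loop via aiohttp's AppRunner API.

    import asyncio
    import threading
    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        return web.Response(text="Hello, " + name)

    def serve():
        # A worker thread has no event loop of its own, so create one;
        # AppRunner avoids run_app()'s main-thread-only signal handling.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)

        app = web.Application()
        app.add_routes([web.get('/', handle), web.get('/{name}', handle)])

        runner = web.AppRunner(app)
        loop.run_until_complete(runner.setup())
        site = web.TCPSite(runner, 'localhost', 8080)
        loop.run_until_complete(site.start())
        loop.run_forever()  # serve until loop.stop() is called

    thread = threading.Thread(target=serve, daemon=True)
    thread.start()
    # ... the main thread is now free for other work ...

To shut the server down cleanly from the main thread, call loop.call_soon_threadsafe(loop.stop) and then join the thread.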

Get aiohttp results as string

不问归期 Submitted on 2020-06-29 08:28:33
Question: I'm trying to get data from a website using async in Python. As an example I used this code (under "A Better Coroutine Example"): https://www.blog.pythonlibrary.org/2016/07/26/python-3-an-intro-to-asyncio/ This works fine, but it writes the binary chunks to a file and I don't want it in a file: I want the resulting data directly. But I currently have a list of coroutine objects which I cannot get the data out of. The code:

    # -*- coding: utf-8 -*-
    import aiohttp
    import asyncio
    import async
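The usual approach (sketched here, not taken from the truncated post) is to return await resp.text() from each coroutine instead of streaming chunks to a file, then collect all the results with asyncio.gather(). A minimal sketch with a placeholder URL list:

    import asyncio
    import aiohttp

    async def fetch(session, url):
        async with session.get(url) as resp:
            # .text() buffers the whole body and decodes it to str;
            # use .read() instead if you want the raw bytes.
            return await resp.text()

    async def main(urls):
        async with aiohttp.ClientSession() as session:
            # gather() runs the coroutines concurrently and returns
            # their results as a list, in the same order as 'urls'
            return await asyncio.gather(*(fetch(session, u) for u in urls))

    pages = asyncio.run(main(["https://www.example.com"]))
    print(pages[0][:100])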

How do I download a large list of URLs in parallel in pyspark?

孤人 Submitted on 2020-06-16 05:07:22
Question: I have an RDD containing 10000 URLs to be fetched.

    list = ['http://SDFKHSKHGKLHSKLJHGSDFKSJH.com',
            'http://google.com',
            'http://twitter.com']
    urls = sc.parallelize(list)

I need to check which URLs are broken, and preferably fetch the results into a corresponding RDD in Python. I tried this:

    import asyncio
    import concurrent.futures
    import requests

    async def get(url):
        with concurrent.futures.ThreadPoolExecutor() as executor:
            loop = asyncio.get_event_loop()
            futures = [
                loop.run_in_executor(
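Event loops and executors don't serialize well to Spark workers, so a common alternative (not necessarily the truncated post's answer) is to drop asyncio and let Spark supply the parallelism via mapPartitions, with plain blocking requests inside each partition. A sketch, assuming sc is the question's SparkContext and url_list its list of URLs:

    import requests

    def check_urls(urls):
        # Runs on the executors; one Session per partition reuses connections.
        with requests.Session() as session:
            for url in urls:
                try:
                    status = session.head(url, timeout=10,
                                          allow_redirects=True).status_code
                except requests.RequestException:
                    status = None  # unreachable / broken URL
                yield (url, status)

    # More partitions than cores keeps workers busy while requests block.
    results = sc.parallelize(url_list, numSlices=100).mapPartitions(check_urls)
    broken = results.filter(lambda pair: pair[1] is None or pair[1] >= 400)

Some servers reject HEAD requests, so swapping in session.get(url, stream=True) is a safer fallback for status checks.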