I found something difficult in Python, which was a bit of a first, so I wrote a whole blog series about it, and now a whole video:
Slides: Python Async Basics slides
Blog posts: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib
Hi, which version of python you’re using in this video?
I got an error like this in Python 3.7.0 and Python 3.7.3:
import asyncio

async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(1)
    print("Finishing %d" % number)
    return str(number)

many = asyncio.gather(mycoro(1), mycoro(2), mycoro(3))
asyncio.run(many)
Traceback (most recent call last):
  File "C:/xx/study.py", line 13, in <module>
    asyncio.run(many)
  File "C:\Users\xxxx\Python\Python37-32\lib\asyncio\runners.py", line 37, in run
    raise ValueError("a coroutine was expected, got {!r}".format(main))
ValueError: a coroutine was expected, got
Hi, it would have been 3.7.something. Try this:
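Something along these lines (a sketch: the mycoro definition is unchanged, and the gather call is awaited inside a wrapper coroutine, here called main, so that asyncio.run is given a coroutine as it requires):

```python
import asyncio

async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(1)
    print("Finishing %d" % number)
    return str(number)

async def main():
    # await the gather inside a coroutine, and pass that coroutine to asyncio.run
    return await asyncio.gather(mycoro(1), mycoro(2), mycoro(3))

asyncio.run(main())
```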
Note the extra “await” before asyncio.gather.
If that works, I think it’s a mistake I made translating 3.6 -> 3.7 – well spotted, and apologies!
Hi Andy,
This is the best explanation of asyncio in a nutshell I've been able to find so far, and I've been able to implement it in my small project, all thanks to you.
I’ve encountered an issue though, while trying to use https proxies with my requests.
It says, "ValueError: Only http proxies are supported"
After browsing through some GitHub issues, I found the notorious issue #845 (https://github.com/aio-libs/aiohttp/issues/845), which states that https proxies are not supported by aiohttp.
Do you have a way around this lil’ hurdle?
Hi Ronnie, thank you for the kind words – I’m so glad it’s been helpful. I’m afraid I have absolutely no idea how to make aiohttp work for https proxies – sorry!
Hi Andy,
Thanks for your intro to async with Python. It's one of the best intros to Python async programming.
I have a task to download the contents of 5k-10k files in one go. I need to store their contents in a list of dictionaries, where each dict contains the url and its corresponding content. I have tried aiohttp and asyncio following some basic tutorials, but I ended up with code that fetches the contents of 120-140 files in around 300 seconds and then stops execution without reporting any error (it seems like a session timeout, but I'm unable to get further). For now I'm downloading those thousands of files synchronously using requests, but that takes far too long (6.5 hours for around 7.5k files). Can you please help me download a few thousand files asynchronously to save as much time as possible? Currently I can only use Python 3.7.
Hi, I expect you are opening _all_ the connections at the same time, and that is what is causing the problem. My blog post https://www.artificialworlds.net/blog/2017/06/12/making-100-million-requests-with-python-aiohttp/ is about how to do this without opening all connections at the same time, so it might be helpful. You might need to read the other posts it mentions too.
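The pattern that blog post describes can be sketched roughly like this: an asyncio.Semaphore caps how many downloads are in flight at once. This is only an illustration, so the fetch here is a stand-in (an asyncio.sleep plus a dummy dict) for a real aiohttp request, and the names, URLs, and the limit of 20 are all made up:

```python
import asyncio

async def fetch(url, semaphore):
    # The semaphore caps how many fetches run concurrently; a real
    # aiohttp session.get() call would go where the sleep is.
    async with semaphore:
        await asyncio.sleep(0.01)  # placeholder for the actual HTTP request
        return {"url": url, "content": "body of %s" % url}

async def main(urls, limit=20):
    # At most `limit` fetches hold the semaphore at any moment,
    # even though all the tasks are created up front.
    semaphore = asyncio.Semaphore(limit)
    tasks = [fetch(url, semaphore) for url in urls]
    return await asyncio.gather(*tasks)

urls = ["http://example.com/%d" % n for n in range(100)]
results = asyncio.run(main(urls))
print(len(results))  # prints 100; results is a list of url/content dicts
```

With thousands of real URLs you would also want a single shared aiohttp ClientSession and a sensible per-request timeout, but the semaphore is what stops every connection being opened at the same time.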