Make concurrent connections respect limits #581
Conversation
aiohttp/connector.py
Outdated
I suspect you should respect `self._acquired` too.
@asvetlov I updated the check for available connections based on your feedback. My original test didn't catch the problem because I was only checking the case where all connections were requested up front; I've now updated it.

Sorry it took me a while to get back to this. I had prepared the changes, but then our new baby decided it was time to arrive, and I got a little distracted for a few days. :-)
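For context, here is a toy sketch (hypothetical code, not aiohttp's actual connector) of why a check on available connections can be bypassed when the count is only updated after the awaited transport creation finishes: every caller that checks in the meantime sees a free slot.

```python
import asyncio


class RacyConnector:
    """Toy model of the bug: the connection is counted against the
    limit only *after* the awaited transport creation, so every
    concurrent caller passes the availability check."""

    def __init__(self, limit):
        self._limit = limit
        self._in_use = 0
        self.transports_created = 0

    async def connect(self):
        if self._in_use < self._limit:       # check passes for everyone...
            await asyncio.sleep(0.01)        # ...stand-in for _create_connection
            self.transports_created += 1     # counted too late
            self._in_use += 1
        # (the waiting branch is omitted; no caller ever reaches it here)


async def main():
    conn = RacyConnector(limit=1)
    # Three concurrent requests against a limit of 1.
    await asyncio.gather(*(conn.connect() for _ in range(3)))
    return conn.transports_created
```

With a limit of 1 and three concurrent callers, all three transports get created, mirroring the behavior described in this PR.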
I wish a long and happy life for your baby.
Can we short-circuit here?
If no connections are available, create a future and wait for it; otherwise proceed to the next lines.
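The suggested short-circuit might look roughly like this (a minimal sketch with hypothetical names, not aiohttp's internals): wait on a future *before* doing any connection work when the pool is exhausted, and have `release` hand its slot directly to a waiter so a late-arriving caller can't steal it.

```python
import asyncio


class LimitedPool:
    """Sketch of the short-circuit: park on a future first if no slot
    is free; otherwise take a slot immediately."""

    def __init__(self, limit):
        self._limit = limit
        self._acquired = 0
        self._waiters = []

    async def acquire(self):
        if self._acquired >= self._limit:
            # Short-circuit: no slot free, so wait before anything else.
            fut = asyncio.get_running_loop().create_future()
            self._waiters.append(fut)
            await fut          # release() hands us its slot directly
        else:
            self._acquired += 1

    def release(self):
        if self._waiters:
            # Transfer the slot to a waiter instead of freeing it,
            # so no concurrent caller can sneak past the limit.
            self._waiters.pop(0).set_result(None)
        else:
            self._acquired -= 1
```

Because the count is updated before any `await` that creates a transport, concurrent callers always see the slot as taken.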
Thanks a lot!
When I tried to use `aiohttp.ClientSession` to make many concurrent requests to the same server, I found that I ended up with more connections than I expected. Using the current master branch, the script below makes 3 connections for 3 requests, despite setting a limit of 1 on the connector.

The problem seems to be that connections aren't counted against the limit until after the underlying transport has been created. This creates a race condition where `connect` yields from `_create_connection` before recording anywhere that one of the available connections has been consumed. Subsequent calls to `connect` that begin before the earlier `_create_connection` call has returned are able to create an unlimited number of connections.

This pull request includes a new test that fails on the current master branch but passes with my changes to the connector. I'm also including the first version of the test that I wrote below, because I was surprised when it actually passed on current master. I want to highlight the use of a mock that returns a done future in place of a coroutine, in this case `_create_connection`. I copied this pattern from `test_connect_with_limit`, and it broke my test: while a function that returns a done future can be yielded from as if it were a coroutine, it never actually yields control back to the event loop. The result in this case was that all tasks ran sequentially rather than concurrently, as they would in real use. I replaced the mock with a real coroutine and the test failed as expected. I mention all this because I'm concerned that this pattern may be masking other problems elsewhere in the tests.
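The testing pitfall described above can be demonstrated in isolation (a standalone sketch; the names here are illustrative, not aiohttp's actual test helpers). Awaiting an already-completed future never suspends the task, so two tasks run back-to-back instead of interleaving; swapping in something that really yields, like `asyncio.sleep(0)`, restores the concurrent interleaving.

```python
import asyncio


def done_future():
    # Mimics a mock whose _create_connection returns a finished future:
    # awaiting it completes immediately without suspending the task.
    fut = asyncio.get_running_loop().create_future()
    fut.set_result("transport")
    return fut


async def trace(label, factory, log):
    log.append(label + ":start")
    await factory()
    log.append(label + ":end")


async def run_pair(factory, log):
    # Two "concurrent" tasks using the same awaitable factory.
    await asyncio.gather(
        trace("a", factory, log),
        trace("b", factory, log),
    )


def demo():
    sequential, concurrent = [], []
    # Done-future stub: task "a" runs start-to-end before "b" starts.
    asyncio.run(run_pair(done_future, sequential))
    # Real suspension point: the tasks interleave as in real use.
    asyncio.run(run_pair(lambda: asyncio.sleep(0), concurrent))
    return sequential, concurrent
```

With the done-future stub the log reads `a:start, a:end, b:start, b:end`; with a genuine suspension point it reads `a:start, b:start, a:end, b:end`, which is why the mock-based test never exercised the race.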