Conversation
I also fixed some subclass methods which didn't match the parent class parameter declarations.
aiohttp/web_response.py
                if coding.value == 'gzip' else -zlib.MAX_WBITS)
            if len(self._body) > _BODY_LENGTH_THREAD_CUTOFF:
                loop = asyncio.get_event_loop()
                await loop.run_in_executor(None,
What if the default executor is a process pool? That may hurt instead of making things better.
that's already a problem in aiohttp's use of getaddrinfo
Codecov Report
@@ Coverage Diff @@
## master #3205 +/- ##
==========================================
- Coverage 98.01% 97.94% -0.07%
==========================================
Files 44 44
Lines 8511 8522 +11
Branches 1382 1383 +1
==========================================
+ Hits 8342 8347 +5
- Misses 70 74 +4
- Partials 99 101 +2
hrm, codecov stuck
…helpers # Conflicts: # aiohttp/web_response.py
failed due to another error:
aiohttp/web_response.py
Outdated
    if asyncio.iscoroutinefunction(dumps):
        text = await dumps(data)
    elif executor_body_size is not None and \
            len(data) > executor_body_size:
I was curious to try your patch "at home" and found that this check actually ruins the whole idea. Imagine the following data: {"users": [{"id": 1, "name": ..., ...about 50 more fields}, ..., ...about 100 more items]}. In this case len will return 1 all the time, even though the response data is big enough.
Instead, I was able to do an async dump easily in the places where I'm sure big data will happen and the async dump will be useful. On my application's side it's easy to do: I know my data and how big it could be. On aiohttp's side it's hard to figure out without tricky inspections, which may negate all the benefit of this feature.
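To make the concern concrete, here is a minimal, self-contained illustration (`data` is a stand-in for the payload shape described above, not code from the PR):

```python
import json

# A payload shaped like the example above: one top-level key whose value
# holds a large list of objects.
data = {"users": [{"id": i, "name": "user-%d" % i} for i in range(100)]}

# len() counts only the top-level keys, so a `len(data) > threshold`
# check never fires for data of this shape...
print(len(data))  # 1

# ...even though the serialized body is far larger than one "item".
print(len(json.dumps(data)) > 1000)  # True
```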
sizeof will require you to somehow convert an object's memory usage into string length, which wouldn't be trivial. And all this complexity just to save users from explicitly defining the behaviour on their side with two lines of code?
Yeah, I agree users should just opt in by themselves.
As for converting to string length, it's not needed; just the bytes it occupies in memory would be enough :)
hah! wow, that was a huge oversight, thanks... However, thinking about this, I still need something like a sizeof: I don't know how large the object is without doing the dump ;) I'll have to think about this for a bit.
removing this from this PR, thanks guys and sorry for the distraction. However, it remains a problem to be solved: given a dict, programmatically determining whether it should be dumped in a thread or not. I suppose you just need to decide whether you always want to run it in a thread or not.
No worries :)
One possible solution might be recursively accumulating the size of the items in the nested object, interrupting the traversal once some threshold is reached, and using that as the flag that compression (as an optimization) is needed.
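That suggestion could be sketched roughly as follows. This is a hypothetical helper, not part of aiohttp; the name `exceeds_threshold` and its signature are made up for illustration:

```python
import sys

def exceeds_threshold(obj, threshold):
    """Walk a nested object, accumulating an approximate in-memory size,
    and stop early as soon as the running total crosses `threshold`.

    Returns True when the object is "big enough"; a caller could use
    that as the flag that dumping/compressing in an executor is worth it.
    """
    seen = set()      # ids of visited objects, guards against cycles
    stack = [obj]
    total = 0
    while stack:
        item = stack.pop()
        if id(item) in seen:
            continue
        seen.add(id(item))
        total += sys.getsizeof(item)
        if total > threshold:
            return True  # interrupt the traversal: threshold reached
        # sys.getsizeof is shallow, so descend into containers by hand.
        if isinstance(item, dict):
            stack.extend(item.keys())
            stack.extend(item.values())
        elif isinstance(item, (list, tuple, set, frozenset)):
            stack.extend(item)
    return False
```

The early `return True` is the "interruption" from the comment above: for a huge payload the walk stops almost immediately, so the check stays cheap in exactly the cases where compression would pay off.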
Please merge with master.
@asvetlov sugar is not the issue. The issue is that you run a lot of tests, which I've been trying to address for a long time now. You should be running the test suite in just one mode per job; it'd help dramatically. That's one of the reasons I'm pushing towards using tox or something similar to unify the testing layer, maybe parallelise it, and possibly distribute it across CIs.
@thehesiod this PR needs conflict resolution (rebase or merge)
…helpers # Conflicts: # tests/test_web_response.py
ok I think I fixed it, following https://stackoverflow.com/a/23668025. Not sure how that happened, as I never touched the submodules.
            compressobj.flush()
        if self._zlib_thread_size is not None and \
                len(self._body) > self._zlib_thread_size:
            await asyncio.get_event_loop().run_in_executor(
Wouldn't it be better to have a thread pool in place?
I think it would. If someone sets the default executor to a multiprocessing one, they will be very surprised by aiohttp's behaviour. However, an in-place thread pool could cause a DoS situation: you need to limit the overall number of threads to some sane value.
A multiprocessing pool as the default is deprecated by asyncio: it doesn't work well with APIs like loop.getaddrinfo() anyway.
Perhaps add the ability to pass in a thread pool? This can get complicated later if you want to support multiple pools.
Makes sense. Feel free to update the PR.
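The pattern being settled on here, compressing a large body on an explicitly supplied, bounded thread pool instead of whatever default executor happens to be installed, can be sketched in standalone form. This is an illustrative sketch, not aiohttp code; `compress_body`, `_ZLIB_POOL`, and `_ZLIB_THREAD_SIZE` are made-up names:

```python
import asyncio
import zlib
from concurrent.futures import ThreadPoolExecutor

# An explicit ThreadPoolExecutor sidesteps the "what if the default
# executor is a process pool?" problem, and max_workers caps the thread
# count to address the DoS concern raised above.
_ZLIB_POOL = ThreadPoolExecutor(max_workers=4)
_ZLIB_THREAD_SIZE = 1024  # bodies above this size are compressed off-loop

async def compress_body(body: bytes) -> bytes:
    loop = asyncio.get_running_loop()
    if len(body) > _ZLIB_THREAD_SIZE:
        # Big body: do the CPU-bound work in the dedicated thread pool
        # so the event loop stays responsive.
        return await loop.run_in_executor(_ZLIB_POOL, zlib.compress, body)
    # Small body: compressing inline is cheaper than a thread hop.
    return zlib.compress(body)

compressed = asyncio.run(compress_body(b"x" * 100_000))
print(zlib.decompress(compressed) == b"x" * 100_000)  # True
```

Passing the pool explicitly to `run_in_executor` (rather than `None`) is the key design point: the caller controls both the executor type and its size.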
Sorry, I broke unit tests trying to resolve merge conflicts.
asvetlov
left a comment
Thanks, very close to finished.
Please add versionadded directives in the docs.
Adding a test that passes an explicit executor would also be nice to have.
   :param int zlib_executor_size: length in bytes which will trigger zlib compression
      of body to happen in an executor
Please insert .. versionadded:: 3.5 directive here.
aiohttp 3.5.0 was not released yet, only 3.5.0 alpha 0.
So please use just 3.5, the last zero should be omitted.
docs/web_reference.rst
   .. versionadded:: 3.5.1
   :param int zlib_executor: executor to use for zlib compression
   .. versionadded:: 3.5.1
asvetlov
left a comment
Perfect!
Thank you for the patience!
ty guys for the help!
work on resolving #3201