Description
- Package Name: azure-core (pipeline)
- Package Version: latest
- Operating System: Windows 10 Enterprise (1909). But this appears to be a platform-independent issue.
- Python Version: platform win32 -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- C:\git\azure-sdk-for-python\venv\Scripts\python.exe
References
Root cause analysis brought me to these issues filed against Python's httplib. Confirmed experimentally (see below).
https://bugs.python.org/issue21790
https://bugs.python.org/issue31945
https://stackoverflow.com/questions/48719893/why-is-the-block-size-for-python-httplibs-reads-hard-coded-as-8192-bytes
Describe the bug
When trying to push a large amount of data (4000MB in my case) as a "readable" stream (e.g. BytesIO, a file reader, or anything implementing "read"), the upload speed caps at around 8Mbps.
(For context, I'm working on 4000MB block upload support for the Azure Storage SDK.)
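To illustrate which bodies hit this slow path: http.client treats any object exposing a read method as a file-like body and drains it in fixed-size chunks rather than sending it in one call. A quick check (sketch; the BytesIO payload is just an example):

```python
import io

# http.client sends any body with a .read() method in fixed-size chunks.
body = io.BytesIO(b"x" * 1024)
print(hasattr(body, "read"))  # True: this body takes the chunked send path
```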
To Reproduce
Execute test_put_block_stream_large with LARGE_BLOCK_SIZE bumped to some large value (e.g. the upcoming 4000MB limit, or the currently supported 100MB threshold).
OR
Use scenario from my fork as reference.
Expected behavior
Upload speed of "readable" data is not capped by httpclient and can leverage the full network bandwidth available.
Possible solution
https://bugs.python.org/msg305571 suggests a quite handy workaround that could become part of the pipeline, I guess. So far I haven't seen any way to inject a different blocksize into httpclient.
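One way to apply such a workaround without editing stdlib source: since Python 3.7, http.client.HTTPConnection accepts a blocksize constructor argument, so a larger buffer could be threaded through from the transport. A minimal sketch (the 8 MiB value and the host are illustrative, not a tuned recommendation):

```python
import http.client

# Python 3.7+ accepts a blocksize argument; the default remains 8192 bytes.
# No connection is actually opened here; we only construct the object.
conn = http.client.HTTPConnection("example.com", blocksize=8 * 1024 * 1024)
print(conn.blocksize)  # 8388608
```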
Screenshots
Original test:
I was uploading 4000MB of data in a single request, without any modifications, using a "readable" stream. That took over 1 hour!!


Turns out httpclient is using an 8192 byte buffer when a readable stream is passed:
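The send loop for file-like bodies is, paraphrased as a sketch (send_file_like and the callback are illustrative names, not the actual http.client code), equivalent to draining the body in 8192-byte chunks with one send per chunk:

```python
import io

BLOCKSIZE = 8192  # hard-coded default in http.client

def send_file_like(body, send):
    # Paraphrased sketch of how http.client drains a file-like body:
    # one fixed-size read and one send() call per chunk.
    while True:
        chunk = body.read(BLOCKSIZE)
        if not chunk:
            break
        send(chunk)

sent = []
send_file_like(io.BytesIO(b"x" * 20000), sent.append)
print([len(c) for c in sent])  # [8192, 8192, 3616]
```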
Then I started to play with the blocksize, editing the http client's source and bumping it.
After bumping it to 8192*1024, upload speed was more than 2X faster:


And after bumping it to 10*8192*1024 I managed to upload that payload in ~4 and a half minutes.
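The speedups line up with simple per-chunk overhead arithmetic: a bigger buffer means far fewer read/send round trips for the same payload. A sketch mirroring the experiment above (assuming the last run used 10*8192*1024):

```python
payload = 4000 * 1024 * 1024  # the 4000MB payload from the test

for blocksize in (8192, 8192 * 1024, 10 * 8192 * 1024):
    chunks = -(-payload // blocksize)  # ceiling division
    print(f"blocksize={blocksize:>9}: {chunks} send() calls")
```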

Additional context
This is going to impact future users of "large block"/"large blob" (4000MB new limit for a single block / 200TB limit for a single blob). Users of that feature will most likely work with streams - either uploading data from the network or data produced on the fly by computations. Therefore it's important to address this deficiency.


