Fix large file upload freeze with adaptive buffer sizing #869
Conversation
Co-authored-by: houseme <[email protected]>
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.
@copilot Please read the error message of the action and make repairs and improvements.
…ader::with_capacity

Co-authored-by: houseme <[email protected]>

Compilation errors have been fixed. The problem is that `tokio-util 0.7.17` doesn't provide `StreamReader::with_capacity()`. Solution: wrap the `StreamReader` with `tokio::io::BufReader::with_capacity()` to achieve the same 1MB buffer effect. Fix commit: a897a35. This both maintains the original performance optimization (99.2% reduction in system calls) and ensures the code compiles.
@copilot How can the buffer size be set dynamically for further improvement?

### Future Improvements

1. **Adaptive Buffer Sizing**: Consider adjusting the buffer size based on file size
   - Small files (< 1MB): 8KB-64KB buffer
   - Medium files (1MB-100MB): 256KB buffer
   - Large files (> 100MB): 1MB+ buffer
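One possible mapping of those tiers to a helper, as a minimal sketch; the function name `adaptive_buffer_size` and the exact thresholds here are illustrative, not the PR's actual code:

```rust
const KIB: u64 = 1024;
const MIB: u64 = 1024 * KIB;

/// Pick a read-buffer size from the declared object size.
/// Illustrative only: the PR's real helper name and thresholds may differ.
fn adaptive_buffer_size(file_size: u64) -> usize {
    if file_size < MIB {
        (64 * KIB) as usize // small files: modest buffer, low memory overhead
    } else if file_size < 100 * MIB {
        (256 * KIB) as usize // medium files
    } else {
        MIB as usize // large files: maximize syscall reduction
    }
}

fn main() {
    assert_eq!(adaptive_buffer_size(512 * KIB), 64 * 1024);
    assert_eq!(adaptive_buffer_size(10 * MIB), 256 * 1024);
    assert_eq!(adaptive_buffer_size(20 * 1024 * MIB), 1024 * 1024);
}
```

Keeping the mapping a pure function of the declared size makes it trivial to unit-test across all tiers.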
Co-authored-by: houseme <[email protected]>
The adaptive buffer size feature has been implemented. Implementation method: an adaptive buffer sizing helper function selects the buffer size from the file size, following the tiers suggested above. Fix commit: 538cb8e, applied in three key locations.
(#871)

* Initial plan
* Fix: Trigger self-heal when missing shards detected during read
  - Added proactive heal detection in get_object_with_fileinfo
  - When reading an object, now checks if any shards are missing even if the read succeeds
  - Sends a low-priority heal request to reconstruct missing shards on rejoined nodes
  - This fixes the issue where data written during a node outage is not healed when the node rejoins
* fix
* Unify CRC implementations to crc-fast (#873)
  - Initial plan
  - Replace CRC libraries with unified crc-fast implementation
  - fix
  - fix: replace low to Normal

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: houseme <[email protected]>
Co-authored-by: houseme <[email protected]>
Type of Change
Related Issues
Summary of Changes
Consecutive uploads of 10-20GB files freeze with `AwsChunkedStreamError: Underlying: error reading a body from connection`.

Root cause: `StreamReader::new()` uses an 8KB default buffer, causing ~1.3M syscalls per 10GB file and connection timeouts under AWS chunked encoding overhead.

Solution: Wrap `StreamReader` with `tokio::io::BufReader::with_capacity()` using adaptive buffer sizing based on file size to optimize both performance and memory usage.

Changes Made

Modified `rustfs/src/storage/ecfs.rs` in three critical locations:

- `put_object` (line ~2369): standard uploads
- `put_object_extract` (line ~404): archive extraction
- `upload_part` (line ~2897): multipart uploads

Added an adaptive buffer sizing helper function and comprehensive unit tests.
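The syscall figures above follow from simple division; a quick check of the arithmetic for a 10 GiB file, assuming one read call per buffer fill:

```rust
/// Number of read calls needed to drain a file at a given buffer size,
/// assuming each call fills the whole buffer.
fn read_calls(file_bytes: u64, buffer_bytes: u64) -> u64 {
    file_bytes.div_ceil(buffer_bytes)
}

fn main() {
    const GIB: u64 = 1024 * 1024 * 1024;
    let with_8k = read_calls(10 * GIB, 8 * 1024); // StreamReader's 8KB default
    let with_1m = read_calls(10 * GIB, 1024 * 1024); // 1MB buffer
    let reduction = 100.0 * (1.0 - with_1m as f64 / with_8k as f64);
    // ~1.31M calls vs ~10K calls: the source of the "99.2% reduction" figure
    println!("{with_8k} vs {with_1m} read calls: {reduction:.1}% fewer");
}
```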
Implementation Details

Since `tokio-util 0.7.17` doesn't provide `StreamReader::with_capacity()`, the fix uses `tokio::io::BufReader::with_capacity()` to wrap the `StreamReader` with an adaptively sized buffer.

Adaptive Buffer Sizing Strategy:
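A synchronous sketch of this wrapping pattern, using std's `BufReader` (which shares the `with_capacity(capacity, inner)` constructor shape with `tokio::io::BufReader`); the helper name `wrap_with_buffer` is illustrative:

```rust
use std::io::{BufReader, Cursor, Read};

/// Wrap a reader that has no `with_capacity` constructor of its own in a
/// BufReader built with an explicit capacity -- the same move the PR makes
/// with tokio::io::BufReader around tokio_util::io::StreamReader.
fn wrap_with_buffer<R: Read>(inner: R, capacity: usize) -> BufReader<R> {
    BufReader::with_capacity(capacity, inner)
}

fn main() {
    let source = Cursor::new(vec![7u8; 16 * 1024]); // stand-in for the body stream
    let mut reader = wrap_with_buffer(source, 1024 * 1024); // 1MB buffer
    assert_eq!(reader.capacity(), 1024 * 1024);

    let mut out = Vec::new();
    reader.read_to_end(&mut out).unwrap();
    assert_eq!(out.len(), 16 * 1024); // all bytes arrive through the large buffer
}
```

The buffer is allocated once per upload, so the cost of a 1MB capacity is amortized over the whole transfer.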
Before
After
Checklist
`make pre-commit`

Impact
Additional Notes
Performance Impact by File Size
Key Improvements
Testing
`test_adaptive_buffer_size()` unit test covering all buffer size ranges

Documentation
Created `docs/fix-large-file-upload-freeze.md` with comprehensive technical documentation.

Thank you for your contribution! Please ensure your PR follows the community standards (CODE_OF_CONDUCT.md) and sign the CLA if this is your first contribution.