cargohold: use multi-part uploads to S3 #4548
Conversation
I wasn't able to figure out the maximum number of lightweight threads the …

I think there is no maximum. Perhaps we should just use …
akshaymankar left a comment:
Looks good!
Feel free to tweak the default memory requests and limits separately.
It seems our concurrent uploads per second are not very high, so the current limits should be okay. We should monitor and increase them if we discover OOM kills.
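The reasoning above (buffer memory scales with concurrent uploads times the part size) can be sketched as a back-of-envelope estimate. This is an illustrative calculation, not code from the PR; the 6 MB part size comes from the description below, and the upload count is a hypothetical example:

```python
# Rough lower bound on part-buffer memory for concurrent multi-part uploads.
# Assumption: each in-flight upload buffers one 6 MB part at a time.
PART_SIZE_BYTES = 6 * 1024 * 1024  # 6 MB minimum part size

def buffer_memory_bytes(concurrent_uploads: int) -> int:
    """Lower bound on buffer memory needed for N concurrent uploads."""
    return concurrent_uploads * PART_SIZE_BYTES

# e.g. 50 concurrent uploads need at least ~300 MB of part buffers
print(buffer_memory_bytes(50) // (1024 * 1024))  # → 300
```

This is why the memory requests/limits deserve monitoring: the per-upload buffer grew from 128 KB to 6 MB, a roughly 48x increase per in-flight upload.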
This PR changes the way assets are uploaded to S3.
This PR makes use of the multi-part uploads API via the amazonka-s3-streaming library, which uploads the body in chunks of 6 MB (not possible to decrease). For comparison, we were using chunks of 128 KB for the body before.

Checklist

- changelog.d

Context:
cargohold needs to be compatible with StorageGRID.
Without this PR, cargohold receives a 400 Bad Request when uploading. It might be related to this comment: https://docs.netapp.com/us-en/storagegrid/s3/s3-rest-api-supported-operations-and-limitations.html#common-request-headers
Switching to multi-part uploads seems to fix the problem.
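For a rough sense of how the new chunking behaves, the part count for a given asset follows directly from the 6 MB chunk size described above. This is an illustrative sketch, not code from the PR; the asset size used is a hypothetical example:

```python
import math

PART_SIZE = 6 * 1024 * 1024  # 6 MB chunks (vs. 128 KB body chunks before)

def num_parts(asset_size: int) -> int:
    """Number of multi-part upload parts for an asset of the given size."""
    return max(1, math.ceil(asset_size / PART_SIZE))

# A 20 MB asset is uploaded in 4 parts of up to 6 MB each.
print(num_parts(20 * 1024 * 1024))  # → 4
```

Assets smaller than 6 MB still go through as a single part, so small uploads incur no extra round trips.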
See also https://wearezeta.atlassian.net/browse/WPB-17236