Add code to handle pagination of parts. Fixes max layer size of 10GB bug #2815

dmcgowan merged 1 commit into distribution:main from bainsy88:issue_2814
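For context on the fix: S3's ListParts call returns at most 1,000 parts per response, so a driver that only reads the first page caps a multipart upload at 1,000 parts, which with the registry's default 10MB chunk size is roughly 10GB. Below is a minimal sketch of the pagination loop the fix introduces, using simplified stand-ins for the aws-sdk-go types (`part`, `listPartsPage`, and `listAllParts` are illustrative names, not the driver's actual API):

```go
package main

import "fmt"

// part mirrors the shape of an S3 multipart-upload part; the real driver
// works with aws-sdk-go's s3.Part. Everything here is a simplified stand-in.
type part struct {
	Number int
	Size   int64
}

// listPartsPage simulates one page of an S3 ListParts response: at most
// maxParts entries, a marker for the next page, and a truncation flag.
func listPartsPage(all []part, marker, maxParts int) (page []part, nextMarker int, truncated bool) {
	end := marker + maxParts
	if end >= len(all) {
		return all[marker:], len(all), false
	}
	return all[marker:end], end, true
}

// listAllParts keeps requesting pages until the response is no longer
// truncated — the same pattern the fix applies to s3.ListParts. Without
// the loop, only the first 1,000 parts are ever seen.
func listAllParts(all []part, maxParts int) []part {
	var parts []part
	marker, truncated := 0, true
	for truncated {
		var page []part
		page, marker, truncated = listPartsPage(all, marker, maxParts)
		parts = append(parts, page...)
	}
	return parts
}

func main() {
	all := make([]part, 2500) // more than two 1,000-part pages
	for i := range all {
		all[i] = part{Number: i + 1, Size: 10 << 20}
	}
	got := listAllParts(all, 1000)
	fmt.Println(len(got)) // prints 2500
}
```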
Conversation
Please sign your commits following these rules:

```
$ git clone -b "issue_2814" [email protected]:bainsy88/distribution.git somewhere
$ cd somewhere
$ git commit --amend -s --no-edit
$ git push -f
```

Amending updates the existing PR. You DO NOT need to open a new one.
Signed-off-by: Jack Baines <[email protected]>
Codecov Report
```
@@    Coverage Diff    @@
##   master    #2815   +/-   ##
=======================================
  Coverage   60.25%   60.25%
=======================================
  Files         103      103
  Lines        8024     8024
=======================================
  Hits         4835     4835
  Misses       2546     2546
  Partials      643      643
```
Continue to review full report at Codecov.
```go
	if err != nil {
		return nil, parseError(path, err)
	}
	var multiSize int64
```
This multiSize was never used, so it was redundant code; the size is calculated by the calling function.
```go
// Writer returns a FileWriter which will store the content written to it
// at the location designated by "path" after the call to Commit.
func (d *driver) Writer(ctx context.Context, path string, append bool) (storagedriver.FileWriter, error) {
```
A parameter named append shadows the built-in append function, which meant I couldn't use the built-in further down the function body.
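The shadowing issue above is a general Go pitfall: since append is a predeclared identifier rather than a keyword, a parameter with that name hides the built-in for the whole function body. A small illustration (writerSketch is a made-up function, not the driver's code; renaming the parameter to appendMode keeps the built-in usable):

```go
package main

import "fmt"

// writerSketch is a hypothetical stand-in for a Writer-style function.
// If its second parameter were literally named "append", the call to the
// built-in append below would fail to compile ("cannot call non-function
// append"). Renaming it to appendMode avoids the shadowing.
func writerSketch(path string, appendMode bool) []string {
	ops := []string{"open:" + path}
	if appendMode {
		// The built-in append is available here only because the
		// parameter does not shadow it.
		ops = append(ops, "seek:end")
	}
	return ops
}

func main() {
	fmt.Println(writerSketch("/repo/layer", true)) // prints [open:/repo/layer seek:end]
}
```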
caervs
left a comment
Execution LGTM but we should test before merging
Yeah, agree on testing. I have manually tested it, for what it's worth. The reason there are no unit tests up front is simply time constraints: there doesn't appear to be a framework for mocking out S3 calls at the moment, so it would be a fair chunk of work to implement one. The S3 interface is huge as well, which doesn't make it easy!
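On the mocking problem: because aws-sdk-go v1 exposes the S3 client through the very large s3iface.S3API interface, one common pattern is to embed a base type that satisfies the whole interface and override only the calls a test exercises. A self-contained sketch using a tiny stand-in interface (s3API, unimplementedS3, and listPartsMock are all hypothetical names for illustration, not the SDK's types):

```go
package main

import "fmt"

// s3API is a tiny stand-in for the real s3iface.S3API, which has dozens
// of methods and is impractical to mock in full.
type s3API interface {
	ListParts(uploadID string) ([]int, error)
	// ...the real interface has many more methods...
}

// unimplementedS3 satisfies s3API with failing defaults. Embedding it
// means a mock only needs to implement the methods a test actually uses.
type unimplementedS3 struct{}

func (unimplementedS3) ListParts(string) ([]int, error) {
	return nil, fmt.Errorf("ListParts: not implemented")
}

// listPartsMock overrides just ListParts and inherits the rest.
type listPartsMock struct {
	unimplementedS3
	parts []int
}

func (m listPartsMock) ListParts(string) ([]int, error) { return m.parts, nil }

func main() {
	var client s3API = listPartsMock{parts: []int{1, 2, 3}}
	parts, err := client.ListParts("upload-1")
	fmt.Println(len(parts), err) // prints 3 <nil>
}
```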
Hi guys! Is there a time frame to get this done? I have a use case that needs precisely this feature, thanks!
arkodg
left a comment
LGTM. @bainsy88, can you please run make test with these env vars set (https://github.com/docker/distribution/blob/f4506b517a7e9e64cfdcc73c991d9e85b784dc3e/registry/storage/driver/s3-aws/s3_test.go#L107) to make sure the S3 tests run?
@bainsy88 is there some public image I can test this on? Would be good to get this merged in.
|
I still see problems when pushing layers bigger than 10GB:

```
$ /usr/bin/registry_DO_NOT_USE_GC -v
/usr/bin/registry_DO_NOT_USE_GC github.com/docker/distribution v2.8.0
```

I tried to push (my repo with 10gb sample files)
…on of parts. Fixes max layer size of 10GB bug

This is a back-port of distribution#2815

Signed-off-by: Flavian Missi <[email protected]>
Add code to handle pagination of parts. Fixes max layer size of 10GB bug
Add code to handle pagination of parts. Fixes max layer size of 10GB bug Signed-off-by: David van der Spek <[email protected]>
No description provided.