proxy: break up writes from the remote writer to avoid grpc limits #11441
Conversation
Hi @jsternberg. Thanks for your PR. I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
core/content/proxy/content_writer.go (Outdated)
if n < len(p) {
	err = io.ErrShortWrite
}
if sz := proto.Size(req); sz > defaults.DefaultMaxSendMsgSize {
Using this variable seemed wrong to me, but I couldn't think of an alternative. The max send message size can be changed through a grpc.CallOption, which can be arbitrarily set on the grpc connection for containerd. I don't believe there's any way to retrieve the real value for this connection, other than overriding whatever the user configured, setting the value to something conservative enough that it would generally always work, or documenting that lowering the max send message size could result in failures.
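For illustration only, here is one conservative way to derive a per-message payload cap from that constant; `maxDataPerMessage` and the headroom value are hypothetical (not containerd API), and the v2 import path is assumed:

```go
package proxywrite

import "github.com/containerd/containerd/v2/defaults"

// maxDataPerMessage returns a conservative cap on the payload bytes carried in
// a single write request: the default gRPC max send size minus headroom for
// the serialized message's non-data fields (action, offset, labels, framing).
// Both the helper and the headroom value are illustrative sketches only.
func maxDataPerMessage() int {
	const headroom = 4 * 1024 // assumption: a few KiB covers the non-payload overhead
	return defaults.DefaultMaxSendMsgSize - headroom
}
```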
Force-pushed from d134bb3 to f033405
core/content/proxy/content_writer.go (Outdated)
	rw.digest = digest.Digest(resp.Digest)
}
n += written
p = p[written:]
Did you consider slices.Chunk here? Should make chunk iteration much nicer.
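For reference, a tiny standalone example of how slices.Chunk (Go 1.23+) iterates a byte slice; the data and chunk size are arbitrary:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	p := []byte("example blob data to send in pieces")
	// slices.Chunk yields consecutive sub-slices of at most n elements;
	// the final chunk holds whatever remains.
	for chunk := range slices.Chunk(p, 8) {
		fmt.Printf("chunk of %2d bytes: %q\n", len(chunk), chunk)
	}
}
```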
I didn't realize this function existed, but it also seems to require Go 1.23, and the go.mod for this project is on Go 1.22.
I can add an implementation using this function in the future, though.
#11398 is updating it to 1.23.0
I believe we target the last 2 versions of Go (#11377); as of now that's 1.24 and 1.23.
The go.mod directive indicates the minimum Go version that should be used to build the module, which is different from the target Go version.
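For example (module path and version are illustrative), the directive in question is the `go` line in go.mod:

```
module example.com/hello

// The go directive declares the minimum Go version needed to build this
// module; the project can still be built and tested with newer toolchains.
go 1.23.0
```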
@mxpv I think the Go version in go.mod also limits the usage of Go features from newer versions?
I've rebased and updated the PR to use slices.Chunk.
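A minimal sketch of what a chunked write loop built on slices.Chunk could look like; `chunkedWrite`, `maxChunk`, and the `send` callback are illustrative stand-ins, not the PR's actual code:

```go
package proxywrite

import (
	"io"
	"slices"
)

// chunkedWrite splits p into pieces of at most maxChunk bytes using
// slices.Chunk (Go 1.23+) and hands each piece to send, accumulating the
// number of bytes reported as written. maxChunk must be > 0.
func chunkedWrite(p []byte, maxChunk int, send func([]byte) (int, error)) (int, error) {
	var n int
	for chunk := range slices.Chunk(p, maxChunk) {
		written, err := send(chunk)
		n += written
		if err != nil {
			return n, err
		}
		if written < len(chunk) {
			// The remote accepted fewer bytes than we handed it; report a
			// short write rather than silently continuing.
			return n, io.ErrShortWrite
		}
	}
	return n, nil
}
```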
Force-pushed from f033405 to 017ab5f
Force-pushed from 017ab5f to f25f36c
LGTM
We should backport this to 2.0. We may need to replace slices.Chunk (only in the backport) if it requires Go 1.23, which 2.0 does not currently seem to require via go.mod.
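If the backport does need to drop slices.Chunk, a Go 1.22-compatible loop is straightforward; this is only a sketch with illustrative names, not the actual backport:

```go
package proxywrite

import "io"

// writeChunks is an illustrative Go 1.22-compatible alternative to the
// slices.Chunk-based loop: it walks p in steps of at most maxChunk bytes.
// maxChunk must be > 0.
func writeChunks(p []byte, maxChunk int, send func([]byte) (int, error)) (int, error) {
	var n int
	for len(p) > 0 {
		chunk := p
		if len(chunk) > maxChunk {
			chunk = chunk[:maxChunk]
		}
		written, err := send(chunk)
		n += written
		if err != nil {
			return n, err
		}
		if written < len(chunk) {
			return n, io.ErrShortWrite
		}
		p = p[written:]
	}
	return n, nil
}
```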
The remote content writer proxy already has the capability to break up large files into multiple writes, but the current API doesn't recognize when it's about to exceed the limits and attempts to send the data over grpc in one message instead of breaking it into multiple messages.

This changes the behavior of `Write` to automatically break up the size of the content based on the max send message size.

Fixes #11440.

Signed-off-by: Jonathan A. Sternberg <[email protected]>