Bump to c/storage v1.58.0, c/image v5.35.0, c/common v0.63.0 #2423
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
Bump c/storage v1.58.0, c/image v5.35.0 in preparation for Podman v5.5 Signed-off-by: tomsweeneyredhat <[email protected]>
Bump c/common to v0.63.0 to be included in Podman v5.5.0 Signed-off-by: tomsweeneyredhat <[email protected]>
Bump c/common to the next dev version, v0.64.0-dev Signed-off-by: tomsweeneyredhat <[email protected]>
Reviewer's Guide by Sourcery
This pull request bumps the c/storage, c/image, and c/common dependencies in preparation for Podman v5.5. It also updates indirect and vendored dependencies.
Updated class diagram for dependency versions:
classDiagram
class go_mod {
-github.com/containers/image/v5 v5.34.4 ...
+github.com/containers/image/v5 v5.35.0
-github.com/containers/storage v1.57.3 ...
+github.com/containers/storage v1.58.0
-github.com/go-openapi/errors v0.22.0
+github.com/go-openapi/errors v0.22.1
-github.com/go-openapi/swag v0.23.0
+github.com/go-openapi/swag v0.23.1
-github.com/mailru/easyjson v0.7.7
+github.com/mailru/easyjson v0.9.0
-github.com/mattn/go-sqlite3 v1.14.24
+github.com/mattn/go-sqlite3 v1.14.27
-github.com/moby/sys/user v0.3.0
+github.com/moby/sys/user v0.4.0
-github.com/sigstore/protobuf-specs v0.4.0
+github.com/sigstore/protobuf-specs v0.4.1
-github.com/sigstore/rekor v1.3.9
+github.com/sigstore/rekor v1.3.10
-github.com/sigstore/sigstore v1.9.1
+github.com/sigstore/sigstore v1.9.3
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0
-go.opentelemetry.io/otel v1.33.0
+go.opentelemetry.io/otel v1.34.0
-go.opentelemetry.io/otel/metric v1.33.0
+go.opentelemetry.io/otel/metric v1.34.0
-go.opentelemetry.io/otel/trace v1.33.0
+go.opentelemetry.io/otel/trace v1.34.0
-google.golang.org/genproto/googleapis/api v0.0.0-20241219192143-6b3ec007d9bb
+google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb
-google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250313205543-e70fdf4c4cb4
-google.golang.org/grpc v1.70.0
+google.golang.org/grpc v1.71.0
-google.golang.org/protobuf v1.36.5
+google.golang.org/protobuf v1.36.6
}
note for go_mod "This class diagram shows the updated dependency versions in go.mod file."
Hey @TomSweeneyRedHat - I've reviewed your changes - here's some feedback:
Overall Comments:
- Make sure to run `go mod tidy` to keep the go.mod and go.sum files consistent.
- The version bump in `version/version.go` should be done in a separate commit.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟡 Complexity: 3 issues found
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
// If a custom decompressor is provided, it takes precedence. The function validates that the decompressed data
// does not exceed the specified maximum size and returns an error if this limit is exceeded.
// On success, it returns the decompressed data. Otherwise, it returns an error if decompression fails or the data exceeds the size limit.
func decompress(compressor encoding.Compressor, d mem.BufferSlice, dc Decompressor, maxReceiveMessageSize int, pool mem.BufferPool) (mem.BufferSlice, error) {
issue (complexity): Consider extracting the custom and standard decompression logic into separate helper functions to reduce nesting and improve error handling in the decompress function.
Consider extracting the custom and standard decompression logic into separate helper functions. This will collapse the nested conditionals and make error handling more uniform. For example:
func decompress(compressor encoding.Compressor, d mem.BufferSlice, dc Decompressor, maxReceiveMessageSize int, pool mem.BufferPool) (mem.BufferSlice, error) {
if dc != nil {
return customDecompress(dc, d, maxReceiveMessageSize)
}
if compressor != nil {
return standardDecompress(compressor, d, maxReceiveMessageSize, pool)
}
return nil, status.Errorf(codes.Internal, "grpc: no decompressor available for compressed payload")
}
func customDecompress(dc Decompressor, d mem.BufferSlice, maxReceiveMessageSize int) (mem.BufferSlice, error) {
uncompressed, err := dc.Do(d.Reader())
if err != nil {
return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message: %v", err)
}
if len(uncompressed) > maxReceiveMessageSize {
return nil, status.Errorf(codes.ResourceExhausted, "grpc: message after decompression larger than max (%d vs. %d)",
len(uncompressed), maxReceiveMessageSize)
}
return mem.BufferSlice{mem.SliceBuffer(uncompressed)}, nil
}
func standardDecompress(compressor encoding.Compressor, d mem.BufferSlice, maxReceiveMessageSize int, pool mem.BufferPool) (mem.BufferSlice, error) {
dcReader, err := compressor.Decompress(d.Reader())
if err != nil {
return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the message: %v", err)
}
out, err := mem.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)), pool)
if err != nil {
out.Free()
return nil, status.Errorf(codes.Internal, "grpc: failed to read decompressed data: %v", err)
}
if out.Len() == maxReceiveMessageSize && !atEOF(dcReader) {
out.Free()
return nil, status.Errorf(codes.ResourceExhausted, "grpc: received message after decompression larger than max %d", maxReceiveMessageSize)
}
return out, nil
}

This refactoring maintains all existing functionality while reducing the complexity of the main decompress function.
// the end of the UpdateClientConnState operation. If any endpoint has no
// addresses it will ignore that endpoint. Otherwise, returns first error found
// from a child, but fully processes the new update.
func (es *endpointSharding) UpdateClientConnState(state balancer.ClientConnState) error {
issue (complexity): Consider encapsulating lock handling in helper functions to narrow each lock's scope and reduce nested defers; this applies to methods like UpdateClientConnState and ResolverError, where overlapping lock scopes add cognitive load.
Consider encapsulating the lock-handling in small helper functions to narrow the scope of each lock. This can help reduce the cognitive load of tracking when each mutex is held or released, and it can also eliminate some nested defers. For example, you could add generic helpers for the two locked sections:
func (es *endpointSharding) withChildLock(fn func()) {
es.childMu.Lock()
defer es.childMu.Unlock()
fn()
}
func (es *endpointSharding) withStateLock(fn func()) {
es.mu.Lock()
defer es.mu.Unlock()
fn()
}

Then refactor methods like UpdateClientConnState to use these helpers. For instance:
func (es *endpointSharding) UpdateClientConnState(state balancer.ClientConnState) error {
var ret error
es.withChildLock(func() {
es.inhibitChildUpdates.Store(true)
defer func() {
es.inhibitChildUpdates.Store(false)
}()
// Update and create new children.
children := es.children.Load()
newChildren := resolver.NewEndpointMap()
for _, endpoint := range state.ResolverState.Endpoints {
if _, ok := newChildren.Get(endpoint); ok {
continue
}
var childBalancer *balancerWrapper
if val, ok := children.Get(endpoint); ok {
childBalancer = val.(*balancerWrapper)
es.withStateLock(func() {
childBalancer.childState.Endpoint = endpoint
})
} else {
childBalancer = &balancerWrapper{
childState: ChildState{Endpoint: endpoint},
ClientConn: es.cc,
es: es,
}
childBalancer.childState.Balancer = childBalancer
childBalancer.child = es.childBuilder(childBalancer, es.bOpts)
}
newChildren.Set(endpoint, childBalancer)
if err := childBalancer.updateClientConnStateLocked(balancer.ClientConnState{
BalancerConfig: state.BalancerConfig,
ResolverState: resolver.State{
Endpoints: []resolver.Endpoint{endpoint},
Attributes: state.ResolverState.Attributes,
},
}); err != nil && ret == nil {
ret = err
}
}
// Delete removed children.
for _, e := range children.Keys() {
if child, ok := newChildren.Get(e); !ok {
child.(*balancerWrapper).closeLocked()
}
}
es.children.Store(newChildren)
})
es.updateState()
if newChildren := es.children.Load(); newChildren.Len() == 0 {
return balancer.ErrBadResolverState
}
return ret
}

This refactoring preserves functionality while clarifying locking boundaries by centralizing lock management. Apply the same pattern to ResolverError and other methods to reduce overlapping lock scopes and nested defers.
// target address as the attribute along with user info. It returns nil if
// either resolver has not sent an update even once, and returns the error from
// the ClientConn update once both resolvers have sent an update at least once.
func (r *delegatingResolver) updateClientConnStateLocked() error {
issue (complexity): Consider extracting the nested loops in updateClientConnStateLocked into helper functions; isolating the combination logic makes the state update easier to follow without changing functionality.
Below is one actionable approach to reducing complexity without changing functionality: extract the nested loops in updateClientConnStateLocked into helper functions. This isolates the combination logic and makes the state update easier to follow. For example, you could refactor as follows:
func (r *delegatingResolver) combineAddresses(proxyAddr resolver.Address, targetAddrs []resolver.Address) []resolver.Address {
var addresses []resolver.Address
for _, tAddr := range targetAddrs {
addresses = append(addresses, proxyattributes.Set(proxyAddr, proxyattributes.Options{
User: r.proxyURL.User,
ConnectAddr: tAddr.Addr,
}))
}
return addresses
}
func (r *delegatingResolver) combineEndpoints(proxyAddrs []resolver.Address, endpoints []resolver.Endpoint) []resolver.Endpoint {
var combined []resolver.Endpoint
for _, ep := range endpoints {
var addrs []resolver.Address
for _, pAddr := range proxyAddrs {
for _, tAddr := range ep.Addresses {
addrs = append(addrs, proxyattributes.Set(pAddr, proxyattributes.Options{
User: r.proxyURL.User,
ConnectAddr: tAddr.Addr,
}))
}
}
combined = append(combined, resolver.Endpoint{Addresses: addrs})
}
return combined
}

Then, update updateClientConnStateLocked as follows:
func (r *delegatingResolver) updateClientConnStateLocked() error {
if r.targetResolverState == nil || r.proxyAddrs == nil {
return nil
}
curState := *r.targetResolverState
var proxyAddr resolver.Address
if len(r.proxyAddrs) == 1 {
proxyAddr = r.proxyAddrs[0]
} else {
proxyAddr = resolver.Address{Addr: r.proxyURL.Host}
}
curState.Addresses = r.combineAddresses(proxyAddr, curState.Addresses)
curState.Endpoints = r.combineEndpoints(r.proxyAddrs, curState.Endpoints)
return r.cc.UpdateState(curState)
}This change reduces the inline nested loops and clarifies the responsibilities of each helper function while retaining the original behavior.
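The cross-product the helpers build (every proxy address paired with every target address) can be illustrated with simplified stand-in types; the real code's resolver.Address and proxyattributes types are replaced here by a plain struct, so this is a sketch of the shape of the logic, not the gRPC API:

```go
package main

import "fmt"

// addr is a simplified stand-in for a proxy address carrying the
// target it should CONNECT to.
type addr struct {
	proxy   string
	connect string
}

// combine pairs every proxy address with every target address, mirroring
// the nested loops in the suggested combineEndpoints helper.
func combine(proxies, targets []string) []addr {
	var out []addr
	for _, p := range proxies {
		for _, t := range targets {
			out = append(out, addr{proxy: p, connect: t})
		}
	}
	return out
}

func main() {
	got := combine([]string{"proxy:3128"}, []string{"t1:443", "t2:443"})
	fmt.Println(len(got)) // 2
	fmt.Println(got[0].proxy, got[0].connect)
}
```

The result always has len(proxies) × len(targets) entries, which is why pulling the loops into a named helper makes the size and ordering of the combined list easier to reason about.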
Podman Test PR: containers/podman#25900
Happy Green Test Buttons, and the Podman PR passed tests. Ready to go. If we want to track down the suggestions, I'm happy to do that in a follow-up, but timing-wise I'd like to push this across the line now.
Luap99
left a comment
LGTM
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Luap99, sourcery-ai[bot], TomSweeneyRedHat

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Can we turn the Sourcery thing off for vendor/? It's not useful to suggest code changes to libraries we don't maintain. /lgtm
@TomSweeneyRedHat Should this still be marked WIP?
@Luap99 Nope, I forgot to remove that last night. Removed now.
Bump
c/storage v1.58.0
c/image v5.35.0
c/common v0.63.0
in preparation for Podman v5.5
Summary by Sourcery
Upgrade dependencies for Podman v5.5, including updates to c/storage, c/image, and c/common libraries