Add containers streaming API #2347
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2347 +/- ##
==========================================
+ Coverage 44.76% 49.13% +4.37%
==========================================
Files 93 84 -9
Lines 9555 7449 -2106
==========================================
- Hits 4277 3660 -617
+ Misses 4585 3114 -1471
+ Partials 693 675 -18
change to ListStreamMessage as response type
Force-pushed from 248a544 to bf152fc
The API seems cleaner if we can change … And for …
I wonder if we could make the call and, if we get a not found, fall back to the List implementation?
Force-pushed from 2dbcfd5 to 3ad1e87
Ok, pushed an update that cleaned this up more. Now the method name on the Store is just …
What is the advantage of the streaming approach vs adding limit/offset to the request? My primary concern is that we will need to do the same change for content, snapshotters, and anything else which returns a list. This will establish the API pattern of how we return these lists.
@dmcgowan ya I know. I think streaming is a little easier to handle state server side than managing what a request has already seen of containers/etc. Because containers, snapshotters, etc. can come and go, with an offset, how do you manage that between requests? We still have some of the same problem with streaming, where it can return a container that was later removed, but we can at least say that the stream returns all valid containers at the time the stream was initiated. Now that I think of it more, it would almost be better that some of the list commands support just returning the …
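The snapshot-at-stream-start semantics described above can be sketched roughly as follows; `Container`, `sender`, and `store` are invented stand-ins for this sketch, not the actual containerd types, and a real server would take the copy under the store's lock:

```go
package main

import "fmt"

// Container is a stand-in for the containerd container record (hypothetical).
type Container struct{ ID string }

// sender models the Send side of a gRPC server stream (hypothetical interface).
type sender interface{ Send(c Container) error }

// store is an in-memory stand-in for containers.Store.
type store struct{ containers []Container }

// ListStream sends a point-in-time snapshot of the store. Containers added or
// removed after the snapshot is taken do not affect this stream, so the stream
// returns exactly the containers that were valid when it was initiated.
func (s *store) ListStream(out sender) error {
	snapshot := make([]Container, len(s.containers))
	copy(snapshot, s.containers) // in a real server this copy happens under the store's lock
	for _, c := range snapshot {
		if err := out.Send(c); err != nil {
			return err
		}
	}
	return nil
}

// collector is a fake stream used here in place of a real gRPC stream.
type collector struct{ got []Container }

func (c *collector) Send(x Container) error { c.got = append(c.got, x); return nil }

func main() {
	s := &store{containers: []Container{{ID: "a"}, {ID: "b"}, {ID: "c"}}}
	var out collector
	if err := s.ListStream(&out); err != nil {
		panic(err)
	}
	fmt.Println(len(out.got)) // prints 3
}
```

Concurrent removals after the copy only make the stream return an already-deleted container, which is exactly the caveat described above.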
I think that just returning the IDs and doing thousands of gets after is not the approach I like the most. I didn't take a look at the details of the container structure but, having seen a 1GB response for 5k containers (200kb per container), surely there could be an "in between" approach with a stripped down … The streaming approach as you (@crosbymichael) describe it is exactly what I intuitively assumed from the name …
@mcluseau actually if we remove the … This is another option but we would still have a limit based on the number of containers at some point, even if that point is 10k-100k containers.
@crosbymichael I like the ListStream approach; reducing the response size is more at the optimization level (a non-primary goal; if the list can be done, the manager can clean the containers). If container IDs are UUIDs, 16 bytes, then the limit is already 1M containers with the current default limit. And even if we consider a limit of 10k, we're talking about 10k gets as soon as we want the name or the image the container is running. We may then introduce a batch get, but that's kind of coming back to the issue of "how many is safe". Streaming is, I think, the safest and simplest solution.
@crosbymichael Just to confirm. This won't break backward compatibility, right? Docker CE 17.11 will still work with containerd 1.2, right? Just 1.2 containerd client won't work with older containerd version. |
|
@Random-Liu it should not break compat, but I will also verify.
@crosbymichael Are we going to use this in Docker? Do we need to handle …
I'll update this PR this week to handle fallback. |
I'd use a `repeated` field here so that the server can batch messages efficiently.
It's hard to tell the size of the serialized proto, so sending multiple per message will just introduce the same issues.
It's only returning one still.
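To make the `repeated` concern concrete, here is a sketch (not containerd code) of the bookkeeping a batched response would force on the server: batches must be cut by serialized byte size, not item count, to stay under the transport limit. The 512 KB budget and item sizes are invented for illustration; gRPC's default max message size is 4 MiB.

```go
package main

import "fmt"

// batchByBytes groups serialized item sizes into batches that stay under a
// byte budget. Item count alone says nothing about message size, so batching
// has to be driven by each item's encoded size.
func batchByBytes(sizes []int, budget int) [][]int {
	var batches [][]int
	var cur []int
	used := 0
	for _, sz := range sizes {
		if used+sz > budget && len(cur) > 0 {
			batches = append(batches, cur) // close the current batch before it overflows
			cur, used = nil, 0
		}
		cur = append(cur, sz)
		used += sz
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	// Three ~200 KB containers against a 512 KB budget: two batches, not one.
	fmt.Println(len(batchByBytes([]int{200_000, 200_000, 200_000}, 512_000))) // prints 2
}
```

Streaming one container per message sidesteps this bookkeeping entirely, which is the trade-off being discussed here.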
We should probably define the per-container error semantics, as well as stream shutdown.
I'm thinking, from our discussions, we will just hide this behind the List method. We never really wanted it to be a constant stream of containers, just an efficient way to return many containers with grpc. If I hide it behind the List method then we will just do the fallback if the stream rpc does not exist.
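The hide-it-behind-List idea could look roughly like this; the `client` interface, `errUnimplemented` sentinel, and `oldDaemon` fake are invented for the sketch (a real client would check for a gRPC `codes.Unimplemented` status from an older daemon):

```go
package main

import (
	"errors"
	"fmt"
)

// errUnimplemented stands in for a gRPC codes.Unimplemented error returned by
// an older daemon that lacks the streaming RPC (assumption for this sketch).
var errUnimplemented = errors.New("unimplemented")

type Container struct{ ID string }

// client models the two RPC shapes; stream results are flattened to a slice
// here to keep the sketch short.
type client interface {
	ListStream() ([]Container, error)
	ListUnary() ([]Container, error)
}

// List is the single method callers see: try the streaming RPC first and fall
// back to the unary call when the server does not implement it.
func List(c client) ([]Container, error) {
	res, err := c.ListStream()
	if errors.Is(err, errUnimplemented) {
		return c.ListUnary()
	}
	return res, err
}

// oldDaemon only implements the unary call, like a pre-streaming containerd.
type oldDaemon struct{}

func (oldDaemon) ListStream() ([]Container, error) { return nil, errUnimplemented }
func (oldDaemon) ListUnary() ([]Container, error)  { return []Container{{ID: "legacy"}}, nil }

func main() {
	cs, err := List(oldDaemon{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cs[0].ID) // prints "legacy"
}
```

Callers never see the transport choice, which is the point: the streaming RPC stays an implementation detail of List.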
Should this be ListStream, to match the service, in case we add new streaming methods?
Looks mostly good. I have a few nits. Hope I didn't re-hash old concerns.
Force-pushed from 949dcc9 to 7c1d0ce
Ok, this is ready to be reviewed again. The implementation is done by default in the existing …
Any reason to allow this to return a partial list? Normally this is treated as a cancellation and we return nil, ctx.Err().
Force-pushed from 7c1d0ce to 0ce6841
LGTM
@stevvooe @Random-Liu @estesp can you review again?
estesp left a comment:
minor spelling typo; otherwise LGTM
Signed-off-by: Michael Crosby <[email protected]>
Force-pushed from 0ce6841 to 400f16f
@estesp updated
This adds streaming functionality via gRPC so that we do not overload the message size limit. The containers.Store interface needs work because I'm not happy with it, but help is needed. We need to make sure the client works with older daemons, so we cannot easily replace the default implementation; we will have to figure out how we want to handle the client list command.
Fixes #2320
Signed-off-by: Michael Crosby [email protected]