
Conversation

@vvoland (Collaborator) commented Aug 24, 2022

Fix a bug which makes the progress handler never end if an error
happens before any ongoing job is added.

Signed-off-by: Paweł Gronowski [email protected]

- What I did

- How I did it

- How to verify it

- Description for the changelog

- A picture of a cute animal (not mandatory but encouraged)

```diff
 	select {
 	case <-ticker.C:
-		if !ongoing.IsResolved() {
+		if !done && !ongoing.IsResolved() {
```
Collaborator

I'm wondering if `<-ctx.Done()` needs to return `ctx.Err()`.

But I guess in that case, the last `updateFunc()` won't be called if we reach `<-ctx.Done()` (cancel/timeout?). I'm guessing the intent of setting `done` is so that we update progress in that case, but would that effectively mean we (~) ignore context cancelation/timeouts and still continue until the next "ticker" ("yeah, yeah, I know, timeout, but let me finish this progress thingy")? 🤔

Collaborator Author

Yes, if we don't update the progress then it doesn't have a chance to move to the final state and is stuck on something like "Pushing".
We could just call `updateFunc` immediately instead of waiting for the timer to kick in, though.
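
Something along these lines, with the push/pull specifics stripped out (just a minimal sketch of the pattern; none of the names below come from the actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// watch sketches the idea: periodic updates on a ticker, plus one final
// update emitted immediately when the context is cancelled, so we don't
// have to wait for (or depend on) another tick.
func watch(ctx context.Context, update func()) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			update()
		case <-ctx.Done():
			update() // final update right away
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	watch(ctx, func() { fmt.Println("progress update") })
}
```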

Collaborator Author

Also, there's nowhere to return the `ctx.Err()` to. It's only a goroutine internal to this function, so nobody will actually receive it.

Collaborator

> Also, there's nowhere to return the `ctx.Err()` to. It's only a goroutine internal to this function, so nobody will actually receive it.

Ah, good point yes 🙈 😂

Fix a bug which makes the progress handler never end if an error
happens before any ongoing job is added.

Signed-off-by: Paweł Gronowski <[email protected]>
@thaJeztah (Collaborator) left a comment

Thanks! I think that looks cleaner as well. The `!done && !ongoing.IsResolved()` was a bit scary, as such conditions are easily messed up once more conditions get added.

I left one more question; if that's not a concern, then this LGTM as-is.

```go
		}
	case <-ctx.Done():
		done = true
		updateFunc(ctx, ongoing, out, start)
```
Collaborator

One last question (before I blame myself for not asking later); do we still need to check ongoing.IsResolved() here? Because we don't know "when" we reach this code. Or would calling updateFunc() be fine if ongoing is not in that state?

Suggested change:

```diff
-		updateFunc(ctx, ongoing, out, start)
+		if ongoing.IsResolved() {
+			// update progress once more on cancellation or timeout
+			updateFunc(ctx, ongoing, out, start)
+		}
```

Collaborator Author

Generally, calling `updateFunc` is fine even if `IsResolved` is false. With the current push/pull progress handlers this is useless anyway, as they just loop over all jobs.
The `IsResolved` check was extracted from the initial pull progress code, so I don't really know if I understand the intent behind it correctly, but to me it's just a safeguard against updating the progress when no jobs have been added yet.
If the progress ends completely, then calling `updateFunc` without any jobs is still useful IMO, because it may add some kind of "Finished" status update or information that there wasn't anything to be done. But this isn't something the push/pull progress code we have actually does.
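
To illustrate the "they just loop over all jobs" part, here's a rough sketch of the handler shape (not the actual handler from this PR; the `jobs` type and its `Jobs()` accessor are made up for the example, only `progress.Output` is the real pkg/progress type):

```go
package progressexample

import (
	"context"
	"time"

	"github.com/docker/docker/pkg/progress"
)

// jobs stands in for the real ongoing-jobs tracker; the name and the
// Jobs() accessor are assumed for this sketch.
type jobs struct{ names []string }

func (j *jobs) Jobs() []string { return j.names }

// updateProgress mirrors the rough shape of the push/pull handlers: it
// only iterates over the jobs that currently exist, so a call made before
// any job was added simply writes nothing and is harmless.
func updateProgress(ctx context.Context, ongoing *jobs, out progress.Output, start time.Time) {
	for _, name := range ongoing.Jobs() {
		_ = out.WriteProgress(progress.Progress{ID: name, Action: "Pushing"})
	}
}
```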

Collaborator

Thanks for checking! Just wanted to be sure we didn't miss something (and, e.g. get a panic because some data wasn't set 😅)

@thaJeztah (Collaborator)

Unrelated / orthogonal to this PR: I see some code uses containerd's context-logger, e.g.

```go
log.G(ctx).WithError(err).Error("status check failed")
```

I was wondering; do we actually attach a logger in our code? (And should we make more use of that instead of logrus.Log()?)
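
For reference, attaching one works roughly like this (just a minimal sketch against containerd's log package; the "module" field is an arbitrary example):

```go
package main

import (
	"context"

	"github.com/containerd/containerd/log"
	"github.com/sirupsen/logrus"
)

func main() {
	ctx := context.Background()

	// With nothing attached, log.G(ctx) falls back to the default logrus
	// logger, which is why calls like the one above still produce output.
	log.G(ctx).Info("no logger attached to the context")

	// "Attaching" a logger: store an entry in the context once, and every
	// log.G(ctx) further down the call chain picks it up, fields and all.
	ctx = log.WithLogger(ctx, logrus.WithField("module", "push"))
	log.G(ctx).Info("logger attached via the context")
}
```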

@vvoland (Collaborator Author) commented Aug 24, 2022

I don't think we attach the logger.
Using containerd's log seems a bit weird here, it was probably a mistake and IMO we should remove that.

@thaJeztah (Collaborator)

> Using containerd's log seems a bit weird here, it was probably a mistake and IMO we should remove that.

Yeah, there's something to be said for either; the nice thing about containerd's context-logger is that it's "injected" in the code, so there are no hard-coded logrus loggers everywhere (which is horrible at times if that makes its way into library code).

OTOH, we haven't used anything like it in the Moby code so far, so from that perspective, removing it makes sense as well. Perhaps something we should discuss with upstream too (if there's any preference).

@thaJeztah (Collaborator) left a comment

LGTM, thanks!

This was referenced Sep 1, 2022
@thaJeztah (Collaborator)

When upstreaming:

combine with one or more of:
