Avoid deadlock in unpacker. #3825
Conversation
Won't this cause problems if there are multiple unpacks going on?
How about this then? :)
Please note that if the unpacker returns without error, the context won't be cancelled; in theory the updateCh could still be full if there are 128 more unknown layers. :)
Instead of returning the whole error group, maybe return a function which will wait then cancel?
eg itself does wait and cancel.
We need cancel and wait when fetch returns error.
Sorry, I should have been more clear. I agree you need to explicitly cancel on error. However, cancel must always be run, and if it hasn't been canceled, then do so after wait (or just always do it after wait, since double cancellation is fine).
Build succeeded.
Codecov Report
@@           Coverage Diff           @@
##           master    #3825   +/-   ##
=======================================
  Coverage   41.98%   41.98%
=======================================
  Files         130      130
  Lines       14577    14577
=======================================
  Hits         6120     6120
  Misses       7543     7543
  Partials      914      914
=======================================
Continue to review full report at Codecov.
2f28269 to 531eff7
Build succeeded.
LGTM
dmcgowan
left a comment
My previous comment was collapsed: the cancel needs to be updated to always run. I think having a single function which does cancel and wait is good; the cancel can still be called inside as well, as in the first revision.
Signed-off-by: Lantao Liu <[email protected]>
531eff7 to c560591

How about the new version? :)
Build succeeded.
Fixes #3816.

It turns out that some layers are downloaded with error response.status="206 Partial Content". However, we don't stop the unpacker when there are layer download errors, so it waits for those layers forever. This PR cancels the unpacker on error, so that it won't block forever when updateCh is full.

/cc @dmcgowan @ungureanuvladvictor

Signed-off-by: Lantao Liu [email protected]