
Fix the race condition during GC of snapshots when client retries #10723

Merged
1 commit merged into containerd:main on Oct 3, 2024

Conversation

saketjajoo
Contributor

When an upstream client (e.g. kubelet) stops or restarts, the CRI connection to containerd is interrupted. containerd treats this as a context cancellation and cancels any in-flight operation, including an image pull. containerd's GC routine then tries to delete the snapshots that were prepared for the image layer(s) of the cancelled pull. However, if the upstream client immediately retries (or starts a new) image pull, containerd initiates a new pull and starts unpacking the image layers into snapshots. This can create a race condition: the GC routine (from the failed pull) tries to clean up the same snapshot that the new pull is preparing, leading to the "parent snapshot does not exist: not found" error.

Race Condition Scenario:

Assume an image consisting of 2 layers (L1 and L2, L1 being the bottom layer) that are supposed to get unpacked into snapshots S1 and S2 respectively.

During an image pull, containerd calls unpack(L1), which starts by Stat()'ing L1's chainID. The Stat() fails because the chainID does not exist, so Prepare(L1) gets called. Once S1 is prepared, containerd processes L2: unpack(L2) again Stat()'s the chainID, which fails because the chainID for S2 does not exist, resulting in a call to Prepare(L2). However, if the pull is cancelled before Prepare(L2) is called, the GC routine tries to clean up S1.

When the upstream client retries the image pull, containerd follows the same series of operations. unpack(L1) gets called, which calls Stat(chainID) for L1. This time, however, Stat(L1) succeeds because S1 already exists (from the previous pull), so containerd moves on to unpack(L2). At this point GC cleans up S1, and when Prepare(L2) gets called it returns the "parent snapshot does not exist: not found" error.
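The sequence above roughly corresponds to a per-layer loop of the form sketched below. This is an illustrative Go sketch only, not containerd's actual unpacker code; the `snapshotter` interface and the `unpackLayerWithEarlyReturn` name are hypothetical stand-ins for the Stat()/Prepare() calls described above.

```go
package unpacksketch

import (
	"context"
	"errors"
	"fmt"
)

// errNotFound stands in for containerd's errdefs.ErrNotFound.
var errNotFound = errors.New("not found")

// snapshotter is a minimal stand-in for the snapshotter calls named above.
type snapshotter interface {
	Stat(ctx context.Context, chainID string) error
	Prepare(ctx context.Context, key, parent string) error
}

// unpackLayerWithEarlyReturn mirrors the pre-fix flow: Stat() the chainID and
// return early if a snapshot for it already exists. On a retried pull, the
// early return for L1 takes no new lease on S1, so the GC pass from the
// cancelled pull can still delete S1 before Prepare(L2) runs.
func unpackLayerWithEarlyReturn(ctx context.Context, sn snapshotter, chainID, parent string) error {
	if err := sn.Stat(ctx, chainID); err == nil {
		// Early return: the snapshot already exists and nothing new references it.
		return nil
	} else if !errors.Is(err, errNotFound) {
		return fmt.Errorf("stat %s: %w", chainID, err)
	}
	// Race window: between the check above and Prepare() below (and across
	// layers), GC from the earlier cancelled pull may remove the parent.
	return sn.Prepare(ctx, "unpack-"+chainID, parent)
}
```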

Fix:

Removing the "Stat() + early return" fixes the race condition. Now during the image pull operation corresponding to the client retry, although the chainID (for L1) already exists, containerd does not return early and goes on to Prepare(L1). Since L1 is already prepared, it adds a new lease to S1 and then returns ErrAlreadyExists. This new lease prevents GC from cleaning up S1 when containerd processes L2 (unpack(L2) -> Prepare(L2)).
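A sketch of the fixed flow, continuing the hypothetical `snapshotter` from the sketch above (again illustrative, not the actual patch): Prepare() is always attempted, and an already-exists result is treated as success, so the existing snapshot ends up referenced by the new pull's lease. The lease bookkeeping itself is omitted here.

```go
// errAlreadyExists stands in for containerd's errdefs.ErrAlreadyExists.
var errAlreadyExists = errors.New("already exists")

// unpackLayerFixed always attempts Prepare(). If the snapshot was already
// prepared by an earlier (cancelled) pull, the attempt happens under the new
// pull's lease and ErrAlreadyExists is treated as success, so GC can no
// longer remove the snapshot out from under the next layer's Prepare().
func unpackLayerFixed(ctx context.Context, sn snapshotter, chainID, parent string) error {
	err := sn.Prepare(ctx, "unpack-"+chainID, parent)
	if errors.Is(err, errAlreadyExists) {
		// The snapshot already exists; it is now referenced by the current lease.
		return nil
	}
	return err
}
```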

Fixes: #3787

@k8s-ci-robot

Hi @saketjajoo. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@samuelkarp
Member

/ok-to-test

@saketjajoo
Contributor Author

/retest

@saketjajoo force-pushed the 3787-race-condition-fix branch 2 times, most recently from a26e9b7 to 74eddad on October 1, 2024 20:28
@saketjajoo force-pushed the 3787-race-condition-fix branch from 74eddad to 1588b7a on October 1, 2024 21:21
@samuelkarp added the cherry-pick/1.6.x and cherry-pick/1.7.x labels Oct 1, 2024
@dmcgowan
Member

dmcgowan commented Oct 2, 2024

@saketjajoo are you able to reliably get this test to fail without the change? I was not able to get it to fail. The test is a bit complex, and if it is not validating the fix, I'm not sure we should include it.

@saketjajoo
Contributor Author

@dmcgowan The race condition is a bit difficult to replicate. For testing manually, I had to add a time.Sleep() in the unpack function after Stat but before Prepare. I don't think we can reliably test the race condition behaviour in the test (unless we make some non-trivial changes in the client.Pull() --> Unpack() codepath).

This test aims to check the behaviour of containerd when an image pull gets interrupted/cancelled and restarted again almost immediately, which is the situation causing the race condition (#3787). The test could add some confidence that the change does not cause any regressions.

Let me know if you think that the test is irrelevant and needs to be removed.
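For reference, a hypothetical variant of the sketch from the PR description showing the kind of delay injection described in the comment above; the placement and the duration are assumptions for illustration, not the exact change used for manual testing.

```go
// unpackLayerWithDelay widens the window between Stat() and Prepare() so a
// concurrent GC pass (triggered by cancelling the first pull) has time to
// delete the parent snapshot, making the race reproducible by hand.
// Assumes `import "time"` in addition to the earlier sketch.
func unpackLayerWithDelay(ctx context.Context, sn snapshotter, chainID, parent string) error {
	if err := sn.Stat(ctx, chainID); err == nil {
		return nil // early return: snapshot already exists
	}
	time.Sleep(5 * time.Second) // artificial delay to widen the race window
	return sn.Prepare(ctx, "unpack-"+chainID, parent)
}
```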

@dmcgowan
Member

dmcgowan commented Oct 2, 2024

Let me know if you think that the test is irrelevant and needs to be removed.

I think it should be removed. It's possible we can introduce a test in the future that triggers the condition more easily, but this test relies on a race where the context cancellation is handled before the pull completes. These sorts of cases can cause flakiness in the CI environment, which does not give us the same amount of concurrency that you would get on other machines.

@saketjajoo force-pushed the 3787-race-condition-fix branch from 1588b7a to 86a8655 on October 2, 2024 22:09
@k8s-ci-robot added the size/L label and removed the size/M label Oct 2, 2024
@saketjajoo force-pushed the 3787-race-condition-fix branch from 86a8655 to d7f8303 on October 2, 2024 22:10
@saketjajoo
Contributor Author

I've removed the test.

@saketjajoo
Contributor Author

/retest

@samuelkarp added this pull request to the merge queue Oct 2, 2024
Merged via the queue into containerd:main with commit e1006c0 Oct 3, 2024
52 checks passed
@samuelkarp
Member

/cherrypick release/1.7

@k8s-infra-cherrypick-robot

@samuelkarp: new pull request created: #10763

In response to this:

/cherrypick release/1.7

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@samuelkarp
Member

/cherrypick release/1.6

@k8s-infra-cherrypick-robot

@samuelkarp: #10723 failed to apply on top of branch "release/1.6":

Applying: Fix the race condition during GC of snapshots when client retries
Using index info to reconstruct a base tree...
A	core/unpack/unpacker.go
Falling back to patching base and 3-way merge...
CONFLICT (modify/delete): core/unpack/unpacker.go deleted in HEAD and modified in Fix the race condition during GC of snapshots when client retries. Version Fix the race condition during GC of snapshots when client retries of core/unpack/unpacker.go left in tree.
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Fix the race condition during GC of snapshots when client retries
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherrypick release/1.6

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@samuelkarp added the cherry-picked/1.6.x and cherry-picked/1.7.x labels and removed the cherry-pick/1.6.x and cherry-pick/1.7.x labels Nov 1, 2024
Labels: area/runtime, cherry-picked/1.6.x, cherry-picked/1.7.x, kind/bug, ok-to-test, size/XS

Successfully merging this pull request may close these issues.

Snapshot gets deleted when an image is being pulled.
5 participants