
Conversation

@dcantah (Member) commented Oct 21, 2020

For LCOW we currently copy (or create) the scratch vhdx for every single snapshot, so a sandbox.vhdx ends up in every snapshot directory seemingly unnecessarily. With the default scratch size of 20GB, the size on disk is about 17MB, so there's roughly 17MB of overhead per layer plus the time to copy the file with every snapshot. Only the final sandbox.vhdx is actually used, so skipping the intermediate copies is a nice little optimization.

For WCOW we essentially do the exact same thing, except we copy the blank vhdx from the base layer.

Signed-off-by: Daniel Canter [email protected]
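
(For readers unfamiliar with the snapshotter flow, the decision this change boils down to is roughly the following. This is a minimal Go sketch assuming the "extract-" key prefix used by containerd's unpacker for transient layer-unpack snapshots; the helper name and sample keys are illustrative, not the PR's actual code.)

```go
package main

import (
	"fmt"
	"strings"
)

// needsScratch captures the gist of the change described above: only the
// final, writable snapshot actually mounts a sandbox.vhdx, so the transient
// snapshots created purely to unpack image layers (keyed with an "extract-"
// prefix) can skip creating or copying it. Illustrative only.
func needsScratch(key string) bool {
	return !strings.HasPrefix(key, "extract-")
}

func main() {
	fmt.Println(needsScratch("extract-123456789 sha256:aaaa")) // false: unpack-only snapshot
	fmt.Println(needsScratch("my-container"))                  // true: final r/w snapshot
}
```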

@k8s-ci-robot

Hi @dcantah. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dcantah (Member, Author) commented Oct 21, 2020

I'd like to get feedback on this PR: assuming the key format will stay the way it is in order to solve this is a gross hack, but I can't think of another way to handle it without bringing labels into the equation.

cc @katiewasnothere @ambarve @kevpar

@dcantah force-pushed the feedback-lcow-snapshotter branch from c715bb4 to bfce5c9 on October 21, 2020 02:01
@theopenlab-ci (bot) commented Oct 21, 2020

Build succeeded.

@cpuguy83 requested a review from jterry75 on November 5, 2020 20:15
@cpuguy83 (Member) commented Nov 5, 2020

ping @kevpar

Contributor

@dmcgowan - Can we get this to be a define somewhere to make this less fragile?
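
(A minimal sketch of what "a define somewhere" could look like, assuming a new shared package that both the unpacker and the Windows/LCOW snapshotters import; every identifier below is hypothetical, not existing containerd API.)

```go
// Hypothetical sketch: hoist the unpack key prefix into one shared place so
// the unpacker and the snapshotters agree on it instead of each hard-coding
// the "extract-" string. Package and identifier names are illustrative.
package unpackkey

import "strings"

const (
	// Prefix is the marker the unpacker would put on the transient
	// snapshots it creates while applying image layers.
	Prefix = "extract"
	// Format would be used to build those keys, e.g.
	// fmt.Sprintf(Format, uniquePart, chainID).
	Format = Prefix + "-%s %s"
)

// Is reports whether key names a transient unpack snapshot.
func Is(key string) bool {
	return strings.HasPrefix(key, Prefix+"-")
}
```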

Member

Talked with @dcantah about adding a snapshot label when the active snapshot is just for extraction.

Member Author

Correct. I didn't get around to changing this PR yesterday, but another approach is a WithScratchSnapshot (or whatever we want to call it) snapshotter option that we pass in on the Prepare call for the r/w sandbox.vhdx snapshot; it would just set a label the snapshotter can check to determine whether to make the scratch. This seems a little better than relying on the "extract-" key format staying static.
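
(A minimal sketch of that alternative, assuming a made-up label key and option name; snapshots.Opt and snapshots.Info are the existing containerd types, everything else here is hypothetical.)

```go
// Rough shape of the label-based alternative; the label key and the
// WithScratchSnapshot name are hypothetical.
package scratchopt

import "github.com/containerd/containerd/snapshots"

// scratchLabel is a made-up label key marking the snapshot that should get
// the writable sandbox.vhdx.
const scratchLabel = "containerd.io/snapshot/io.microsoft.scratch"

// WithScratchSnapshot would be passed on the Prepare call for the final r/w
// snapshot so the snapshotter knows to create the scratch disk for it.
func WithScratchSnapshot() snapshots.Opt {
	return func(info *snapshots.Info) error {
		if info.Labels == nil {
			info.Labels = map[string]string{}
		}
		info.Labels[scratchLabel] = "true"
		return nil
	}
}

// WantsScratch is the check the snapshotter side would run after applying
// the options to the snapshot's Info.
func WantsScratch(info snapshots.Info) bool {
	return info.Labels[scratchLabel] == "true"
}
```

The client side would pass WithScratchSnapshot() to Prepare for the final r/w snapshot only, and the snapshotter would call WantsScratch on the resolved Info before creating sandbox.vhdx.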

Member Author

OK, after implementing it I don't like the label approach either... Here's a branch with the work: dcantah@d570e89. But let's see what @dmcgowan says about this approach first.

Contributor

@dmcgowan / @crosbymichael - Do either of you have an opinion on this or should we just stick with the key approach here? Thanks for the insights!

Member

I don't have an opinion on it. I think we have done keys like this in the past, so it seems OK to me.

Contributor

Thanks. SGTM too

@jterry75 (Contributor) commented Nov 6, 2020

@dcantah - Thanks for doing this!

@dcantah (Member, Author) commented Nov 6, 2020

@jterry75 For sure :) We can benefit from this for WCOW as well, since the exact same thing ends up happening past the base layer. I'll likely add that today too.

@crosbymichael (Member)

@cpuguy83 do you wanna review this one?

@dcantah force-pushed the feedback-lcow-snapshotter branch from bfce5c9 to dfd9f9f on November 30, 2020 23:27
@theopenlab-ci (bot) commented Nov 30, 2020

Build succeeded.

@jterry75 (Contributor) left a comment

LGTM

@dcantah force-pushed the feedback-lcow-snapshotter branch from dfd9f9f to a91c298 on December 1, 2020 00:26
@dcantah changed the title from "Optimize LCOW snapshotter to only copy scratch vhd on the final snapshot" to "Optimize Windows and LCOW snapshotters to only create scratch layer on the final snapshot" on Dec 1, 2020
@theopenlab-ci (bot) commented Dec 1, 2020

Build succeeded.

@jterry75 (Contributor) left a comment

LGTM

@crosbymichael (Member)

LGTM
