
fix: Don't needlessly invalidate cache layers #2197

Merged
polarathene merged 2 commits into docker-mailserver:master from polarathene:fix/dockerfile-improve-layer-cache on Sep 19, 2021

Conversation

@polarathene
Member

Description

The recent `sedfile` addition moved the whole scripts section earlier into the Dockerfile so that `sedfile` could be used within the Dockerfile.

However, whenever a change is made to the scripts (which is most of the time for this project), building the Docker image for tests invalidates all layers after the scripts, notably ClamAV, wasting storage from previous image instances and increasing build time unnecessarily.

This isn't as noticeable an issue in CI, as we don't leverage any caching there at present, but for iterating on a local branch and testing it can be quite a drawback.

  • `sedfile` is still handled early in the Dockerfile, while the scripts have been moved as far down as made sense.
  • `chmod` was split out into its own RUN command, as it's unnecessary for the rest of its prior RUN command group to be invalidated.
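
As a rough sketch of the reordering described above (paths and package names here are illustrative, not the project's actual Dockerfile):

```dockerfile
FROM debian:bullseye-slim

# Stable helper installed early so it is usable throughout the Dockerfile;
# it rarely changes, so it rarely invalidates anything below it.
COPY target/bin/sedfile /usr/local/bin/sedfile

# Expensive, stable layers (e.g. ClamAV) stay above the scripts so they
# remain cached across script edits.
RUN apt-get update && apt-get install -y clamav && rm -rf /var/lib/apt/lists/*

# Frequently edited scripts are copied as late as possible: changing them
# now only rebuilds the layers from here down.
COPY target/scripts/ /usr/local/bin/

# chmod in its own RUN so a permissions tweak does not invalidate a larger
# command group.
RUN chmod +x /usr/local/bin/*
```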

Type of change

  • Improvement (non-breaking change that does improve existing functionality)

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • New and existing unit tests pass locally with my changes

@NorseGaud
Member

❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️ ❤️

@polarathene
Member Author

@NorseGaud gripe for you too in your long-lived PR I take it? 😂

I've been using this uncommitted in my current PRs, much better experience 😛


For a CI improvement, I notice that building images is taking 15-20 minutes. I assume our tests are only running on the x86_64 image, so building the other images is probably wasteful in resources and time. We could instead delay building for other platforms until tests pass / finish?

This PR for example takes approx. 50 minutes for the full test suite + CI builds to run. Test time can be reduced via multiple job runners in parallel (it might be complicated a bit by distribution of the testing image; I think we can use a technique similar to what the doc previews use), I'd just need to look into which tests are expensive in time to offload to other jobs (eg the cipherlists).
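
The "delay building other platforms until tests pass" idea could be sketched as a GitHub Actions workflow along these lines (job names, tags, and the test entrypoint are hypothetical, not the project's actual workflow):

```yaml
# Hypothetical workflow sketch; commands and job names are illustrative.
name: test-then-build
on: [pull_request]

jobs:
  test-amd64:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Build only the platform the tests actually run on.
      - run: docker build -t mailserver:testing .
      - run: make tests   # assumed test entrypoint
  build-other-platforms:
    # Only start the expensive multi-arch build once the amd64 tests pass.
    needs: test-amd64
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-qemu-action@v1
      - uses: docker/setup-buildx-action@v1
      - run: docker buildx build --platform linux/arm64,linux/arm/v7 .
```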

@NorseGaud
Member

> @NorseGaud gripe for you too in your long-lived PR I take it? 😂
>
> I've been using this uncommitted in my current PRs, much better experience 😛
>
> For a CI improvement, I notice that building images is taking 15-20 minutes. I assume our tests are only running on the x86_64 image, so building the other images is probably wasteful in resources and time. We could instead delay building for other platforms until tests pass / finish?
>
> This PR for example takes approx 50 minutes for the full test suite + CI builds to run. Test time can be reduced via multiple job runners in parallel (might be complicated a bit by distribution of the testing image, I think we can use a similar technique that the doc previews use), I'd just need to look into what tests are expensive in time to offload to other jobs (eg the cipherlists).

Yeah 😄

Parallel testing would be a huge win. The tests could be grouped into phases (things that don't impact other things in one phase, etc.) and then kicked off via a wrapping script.
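
A wrapping script along those lines might look like this minimal sketch (the group names and the `run_group` helper are hypothetical; the real suite would invoke the BATS test files):

```shell
#!/bin/sh
# Hypothetical wrapper: run independent test groups in parallel phases.
run_group() {
  # In the real suite this would invoke e.g. `bats "test/$1.bats"`.
  echo "running group: $1"
}

# Phase 1: groups that don't impact shared state run concurrently.
run_group tls_cipherlists &
run_group setup_cli &
wait  # block until every phase-1 group has finished

# Phase 2: groups that do mutate shared state run afterwards.
run_group mail_delivery
```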

@NorseGaud
Member

I'd like to approve this PR but I'm seeing failures. Do we know why the actions are failing? Are they waiting on something else? I think merging without passing is a bad idea, even if the failing tests are a WIP. We might miss something critical and make it hard to fix later in the master branch.

@wernerfred
Member

I think it's the setup.sh issue again.
We could cherry-pick the commit from the upcoming v10.2.0 release and then the tests should pass again. It can be reverted before merging, since the PR gets squashed.

+1 for your last thought. What speaks against merging the setup.sh changes again, now that we found out it was a user issue?
cc @georglauterbach

@NorseGaud
Member

> Should be the setup.sh issue again I think.
> We could cherry pick the commit of upcoming v10.2.0 release and then the tests should pass again. Can be reverted before merging then as PR gets squashed.

I think that's a great idea. IMO master should always pass, as it's what new PRs are branched from.

@polarathene
Member Author

polarathene commented Sep 17, 2021

Yes, it's just the two setup tests that are failing due to a partial revert (they don't look like meaningful tests to be concerned about, though).

We were delaying merging the 10.2 release until #2196 was ready (for the mkcert.sh to be dropped). It's not a blocker though, so we could go ahead and follow up with a 10.3 afterwards. The other PRs preventing upgrade to Debian Bullseye have already been merged.

@polarathene
Member Author

I'll merge this after #2196 is merged.

@georglauterbach
Member

@NorseGaud @wernerfred you're right, I will take care of this. However, I can tend to it this evening at the earliest.

@georglauterbach
Member

> @NorseGaud @wernerfred you're right, I will take care of this. But I can tend to it today in the evening at the earliest.

@wernerfred @NorseGaud I changed #2189 to be the PR that merges the new setup.sh again. Please review :)

@georglauterbach georglauterbach added the area/ci, kind/improvement (Improve an existing feature, configuration file or the documentation), and pr/waiting for other PR to get merged (This PR is waiting for another / other PR(s) to get merged) labels on Sep 19, 2021
@georglauterbach georglauterbach added this to the v10.2.0 milestone Sep 19, 2021
@polarathene polarathene enabled auto-merge (squash) September 19, 2021 12:35
@polarathene polarathene merged commit 4db546d into docker-mailserver:master Sep 19, 2021
@georglauterbach georglauterbach removed the pr/waiting for other PR to get merged label on Sep 19, 2021

Labels

area/ci, kind/improvement, priority/medium
