
spack ci: Refine job attribute configuration #32300

Closed
blue42u wants to merge 2 commits into spack:develop from blue42u:pr-gitlab-ci-all-runner-attributes

Conversation

@blue42u
Contributor

@blue42u blue42u commented Aug 21, 2022

The configuration possible in the gitlab-ci section is a subset of GitLab's CI pipeline schema. This is missing some critical features, including:

  • The workflow pipeline field cannot be set, making the generated pipeline unusable depending on the parent pipeline's workflow settings.
  • The cache field cannot be used for generated jobs, making it impossible to implement portable network I/O optimization (e.g. caching the Spack Git repo, or Local cache for buildcache files #32136).
  • The *script fields only accept a flat list of strings; however, GitLab will flatten a nested list of strings. Without this feature, common script fragments between build and service jobs can't be shared with YAML anchors.

In addition, the service-job-attributes configuration applies too broadly; in particular, the no-specs-to-rebuild job is special in that it does not require a working Spack or caches. Removing these from the job description can save a significant amount of time.

This patch removes the schema limitations for job attributes, allowing any keys to be inserted into the generated pipeline. These attributes can now be specified under the following keys:

  • gitlab-ci:pipeline-attributes for the pipeline as a whole,
  • gitlab-ci:build-job-attributes and gitlab-ci:mappings:job-attributes for build jobs,
  • gitlab-ci:cleanup-job-attributes for the cleanup job,
  • gitlab-ci:reindex-job-attributes for the reindex-cache job, and
  • gitlab-ci:noop-job-attributes for the no-specs-to-rebuild job.

The keys listed above are "merged" (in order) into the job/pipeline dict after it has been generated. Here "merge" overwrites any shared keys unless they are prefixed with +, in which case it recursively merges (for dicts) or concatenates (for lists). In short, merge operates as follows:

  • {key: [val1, val2]} + {key: [new3]} => {key: [new3]}
  • {key: [val1, val2]} + {+key: [new3]} => {key: [val1, val2, new3]}
  • {key: {key1: val1, key2: val2}} + {+key: {key1: new1, key3: new3}} => {key: {key1: new1, key2: val2, key3: new3}}
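
As a rough illustration, the merge behavior described above could be sketched in Python like this (a hypothetical helper for illustration only, not the PR's actual implementation):

```python
def merge_attributes(base, update):
    """Merge `update` into a copy of `base`.

    Plain keys overwrite; keys prefixed with "+" recursively merge
    (for dicts) or concatenate (for lists), and error otherwise.
    """
    result = dict(base)
    for key, value in update.items():
        if key.startswith("+"):
            key = key[1:]
            old = result.get(key)
            if isinstance(old, dict) and isinstance(value, dict):
                result[key] = merge_attributes(old, value)
            elif isinstance(old, list) and isinstance(value, list):
                result[key] = old + value
            elif old is None:
                result[key] = value
            else:
                raise TypeError(f"cannot '+'-merge key {key!r}")
        else:
            result[key] = value
    return result
```

For example, `merge_attributes({"key": [1, 2]}, {"+key": [3]})` yields `{"key": [1, 2, 3]}`, while the unprefixed `{"key": [3]}` would overwrite.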

The previous schema still works (by internally emulating its effects), but will warn that it is a "legacy feature" and urge a transition to the newer format.

Documentation not yet included.

@spackbot-app spackbot-app bot added commands core PR affects Spack core functionality gitlab Issues related to gitlab integration tests General test capability(ies) labels Aug 21, 2022
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch 3 times, most recently from 59200ad to 690d307 Compare August 22, 2022 12:20
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from 690d307 to c455445 Compare August 23, 2022 12:17
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from c455445 to f9ebb8d Compare September 4, 2022 00:41
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from f9ebb8d to b148d0b Compare September 29, 2022 05:00
@scheibelp
Member

@scottwittenburg do you see any issues with allowing whatever runner attributes the user wants? This was discussed today, and it was thought that GitLab will complain if invalid attributes are specified.

(to be clear, I'm not asking you to brainstorm possible cases where this could go wrong, more if you happen to remember if there was a case where this went wrong)

@scottwittenburg
Contributor

@scheibelp In general I like the idea of allowing the user to specify any attributes they want. The problem I see is how conflicts should be handled when spack ci generate tries to set the same attribute. I think if you are setting attributes for consumption by GitLab in the runner-attributes section, you will know what you are doing, and certainly GitLab will complain if you specify something invalid.

For example, we currently hard-code some retry behavior into the generated child jobs. I think if you try to specify retry yourself, we would currently just clobber that, and probably wouldn't even notice you had put something there.

@blue42u
Contributor Author

blue42u commented Oct 13, 2022

@scottwittenburg Good catch, I didn't realize that the build jobs didn't copy all the attributes over. I'll work on that over the weekend.

I think the user should always be able to override the values from spack ci generate (at their own risk). This shouldn't be an issue for the static attributes like retry which don't change between jobs. The harder case is needs: which is generated dynamically...

So, proposal: the final job is generated by merging the generated job + build-job-attributes + mappings:job-attributes (new key for compatibility). Merging two dicts recursively merges dict values (just like GitLab CI's extends key), with the extension that keys can be prefixed by + to also concatenate lists (and error if the values aren't lists or dicts). For example, this configuration:

gitlab-ci:
  build-job-attributes:
    retry: false
    tags: [general]
    +needs:
    - custom-job
  mappings:
  - match: 'a'
    job-attributes:
      +tags: [specific]
  - match: 'b'
    job-attributes:
      tags: [unique]

Would result in a pipeline containing roughly:

(spec) a:
  ...
  retry: false
  needs:
  - (spec) a-dep1
  - (spec) a-dep2
  - custom-job
  tags: [general, specific]
(spec) b:
  ...
  retry: false
  needs:
  - (spec) b-dep1
  - custom-job
  tags: [unique]

@scottwittenburg
Contributor

I think the user should always be able to override the values from spack ci generate (at their own risk).

I mostly agree with this. Some exceptions might remain, e.g. we decided the public, protected, and notary tags would be stripped from the user's job configuration and applied entirely at our discretion, for security reasons. (Note to self, remove useless public tags wherever they appear in .gitlab-ci.yml and the stacks).

I like the proposal to use + to distinguish between overriding and merging for lists and dicts. Out of curiosity, where did that come from?

@blue42u
Contributor Author

blue42u commented Oct 15, 2022

I like the proposal to use + to distinguish between overriding and merging for lists and dicts. Out of curiosity, where did that come from?

I made that one up; it seemed appropriate and happens to be one of the few characters with no special meaning in YAML.

Some exceptions might remain, e.g. we decided the public, protected, and notary tags would be stripped from the user's job configuration and applied entirely at our discretion, for security reasons.

I'd prefer to have no exceptions, or opt-in ones, if possible; those tag adjustments will break a lot when using spack ci outside of Spack's own CI. I haven't been hurt by them yet, but the fewer surprises the better, IMO. (Job tags also seem like a very weak security measure; if it's actually important, there are far stronger options like GitLab protected variables/runners.)

@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from b148d0b to 30bae97 Compare October 16, 2022 05:47
@blue42u
Contributor Author

blue42u commented Oct 16, 2022

Rewritten to implement the new design. It feels much cleaner this time around. I left the tag adjustments in to keep it consistent with the original code, and I also kept some tests using the original format where it mattered.

OP updated with explanations for the new changes.

@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch 3 times, most recently from 9471e75 to 8e3b656 Compare October 21, 2022 14:56
@blue42u
Contributor Author

blue42u commented Oct 24, 2022

@scottwittenburg Does this version look good to you, or would you like further changes?

after_script = None
if "after_script" in runner_attribs:
    after_script = [s for s in runner_attribs["after_script"]]
job_object["script"].insert(0, "cd {0}".format(concrete_env_dir))
Contributor

Maybe I'm reading this wrong? It looks like it will update the outgoing job object with the command regardless of whether the user supplied their own script, whereas before this, the job_script list that was updated here might have been completely overwritten by the user's script list elements just below.

Contributor Author

With this PR the order of operations is reversed: the whole job_object is generated first, and then the user's modifications are applied afterwards. This ensures the user's configuration takes precedence in all cases and for all fields. It also keeps the modification order consistent and unsurprising (e.g. +script appends to the end of the default script, as one might expect). This pattern is repeated throughout the PR for the other job types as well.

https://github.com/spack/spack/pull/32300/files#diff-49f4a785e6a8295a5cd5af6a072cde881c00aecefe40fadf40abb33d5c35f6f5R1148-R1150

@scottwittenburg
Contributor

Does this version look good to you, or would you like further changes?

After looking through it, I think it makes sense, thanks. I'd feel better if some rebuild jobs had run at any point in the past several pushes though. I looked through the last 5 pipelines on this PR, but could only find "no specs to rebuild" jobs.

I'll ask spackbot to rebuild everything, then cancel the child pipelines once a few stages have run.

@scottwittenburg
Contributor

@spackbot rebuild everything

@spackbot-app

spackbot-app bot commented Oct 24, 2022

I've started that pipeline for you!

@scottwittenburg
Contributor

I'll ask spackbot to rebuild everything, then cancel the child pipelines once a few stages have run.

Actually, I decided to cancel all generation jobs but the build_systems one; let's see that go all the way through to the rebuild-index job. Then I can run one more pipeline (without rebuild everything), which should result in a green check mark on GitLab again.

@blue42u
Contributor Author

blue42u commented Oct 24, 2022

Interestingly enough, the check on GitHub reports "Pipeline is running", but the pipeline itself has already completed. I assume that's a spackbot quirk?

@scottwittenburg
Contributor

"Pipeline is running," but the pipeline itself has already completed. I assume that's a spackbot quirk?

It could be a bug in the bit responsible for updating the status, which is not spackbot, but similar. Or could it be related to me canceling most of the jobs in the pipeline? Pinging @zackgalbreath in case he may have some idea.

@scottwittenburg
Contributor

Either way, it's great that all the build_systems specs built and the final job ran fine too.

@scottwittenburg
Contributor

@spackbot run pipeline

@spackbot-app

spackbot-app bot commented Oct 24, 2022

I've started that pipeline for you!

@scottwittenburg
Contributor

I think this needs a rebase to pick up a develop that no longer has any x86_64_v4 targets in the stacks. That seems to be what caused the latest GitLab pipeline to fail.

@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from 8e3b656 to e3573de Compare October 25, 2022 15:42
@blue42u
Contributor Author

blue42u commented Oct 25, 2022

Rebased; all the checks are green now (aside from codecov/patch).

@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from e3573de to 732cc3c Compare November 6, 2022 16:37
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from 732cc3c to 583e651 Compare November 13, 2022 18:31
@blue42u blue42u force-pushed the pr-gitlab-ci-all-runner-attributes branch from 583e651 to fb1e906 Compare November 13, 2022 19:32
@kwryankrattiger
Contributor

I am working on extending this and removing more boilerplate from the stack spack.yamls. I had some comments that may be relevant.

I initially liked the idea of the + prefix, but looking at the other Spack config files, particularly around setting up environment variables, using the keywords append/append_path, prepend/prepend_path, set, and unset may be better.

Pros:

  • More flexibility on how to override
  • More explicit on what it is doing

Cons:

  • Requires changing the specification for the gitlab-ci section
  • Adds a level of verbosity (idk if this is a con necessarily)

When working with the before_script/after_script attributes, I think it would be especially helpful to be able to specify putting the extra items before or after the existing script.

For the script section, I am not 100% sold on the idea that it should be modifiable per-job at all. I see the allure for completeness, but I think most things can be handled using variables and the before_script and after_script with configurable tools run in the script section.

@blue42u
Contributor Author

blue42u commented Nov 17, 2022

@kwryankrattiger Thank you for the comments! There are a few points that I'm confused about or don't agree on; questions/comments below.

I initially liked the idea of the + prefix, but looking at the other Spack config files, particularly around setting up environment variables, using the keywords append/append_path, prepend/prepend_path, set, and unset may be better.

I can see the allure for full control over how keys are merged, but I'm also having trouble coming up with examples where the extra features you've listed would be useful:

  • append_path/prepend_path I assume are for PATH-like variables. I don't recommend setting those in the variables: attribute; it would override the value provided by the container image, which usually isn't what you want to do. It's usually better to set that up in before_script:.
  • append/prepend could be applied to strings (for concatenating environment variables), but IMO you could just as easily set VAR1/VAR2/VAR3 in different locations and then set variables:VAR:$VAR1$VAR2$VAR3 (or a similar export command in before_script:). That would give significantly more control than append/prepend ever could.
  • unset is equivalent to overriding with null (or {} or [] depending on the attribute). Aside from explicitness I don't see a reason to have a separate option for this.

When working with the before_script/after_script attributes, I think it would be especially helpful to be able to specify putting the extra items before or after the existing script.

This is the one example I could see using prepend for, but if you're using match_behavior: first (which all the stacks do currently), you could just as easily use a YAML anchor like so:

gitlab-ci:
  build-job-attributes:
    before_script: &main_before
    - echo 'main script!'
  mappings:
  - match: ['os=special']
    job-attributes:
      before_script:
      - echo 'before main script (special)!'
      - *main_before
  - match: ['os=otherspecial']
    job-attributes:
      before_script:
      - echo 'before main script (otherspecial)!'
      - *main_before

For match_behavior: merge it's more involved, one solution is to reorder the mappings: into the desired order:

gitlab-ci:
  match_behavior: merge
  mappings:
  - match: ['os=special']
    job-attributes:
      before_script:
      - echo 'before main script (special)!'
  - match: ['os=otherspecial']
    job-attributes:
      before_script:
      - echo 'before main script (otherspecial)!'
  - match: ['@:']
    job-attributes:
      +before_script:
      - echo 'main script!'

I'm not as opposed to prepend as to the other suggestions, but AFAICT it won't make unmanageable configurations any more manageable than before. At least the way it is now maintains a general top-to-bottom flow throughout the configuration, and is simple to explain in a few sentences.

For the script section, I am not 100% sold on the idea that it should be modifiable per-job at all. I see the allure for completeness, but I think most things can be handled using variables and the before_script and after_script with configurable tools run in the script section.

There are plenty of reasons for a configurable script, to list a few:

  • The signing job doesn't have a default script, so one needs to be provided there, always.
  • The default script for build jobs runs spack ci rebuild, for Spack's CI at the very least --backtrace is required to debug failures and the current stacks also redirect the output to log files.
  • The default script for build jobs also assumes Spack shell support is enabled, which may not be possible in a headless CI environment (I've run into hiccups before with it).

I also want to ensure Spack has as little knowledge of the GitLab CI specification as possible, so as little special handling for particular keys as possible. The reason for this PR in the first place was because there were attributes that I couldn't set without external scripts, keeping Spack fairly agnostic here makes it robust as new keys are added to the GitLab CI schema.

@kwryankrattiger
Contributor

I am going to go backwards...

There are plenty of reasons for a configurable script, to list a few:

On the script stuff, I meant at the mapping level. Currently, it is allowed to change the script depending on the package, and I think that is maybe not the best. But as I write this response, I am feeling like not restricting it has the benefit of not making it special, and therefore there is less nuance to reason about.

but AFAICT it won't make unmanageable configurations any more manageable than before.

Correct. I was omitting the full scope of the change to come after this; it would probably be helpful to add some more context.

In order to remove most of the boilerplate from stacks, I am working on introducing a new ci.yaml configuration. The idea is that this will be loaded first to set up the defaults for all of the jobs, and then the stack-specific spack.yaml files will add overrides and modifications to refine the things that make each stack different. Either this would be done on the command line (spack ci generate --config ci.yaml) or the gitlab-ci section would get include semantics (gitlab-ci:include:[ci.yaml, macos-ci.yaml]) that process the includes in order. I am leaning towards the include semantics, since I think they may be easier to implement in code, and they consolidate the information about configuration in the spack.yaml rather than it being tenuously specified on the command line.

For match_behavior: merge it's more involved, one solution is to reorder the mappings: into the desired order:

Mappings is one thing that I am more unsure about. What you say is true. But what if the top-level CI config asks for match_behavior: first while the stack-level config asks for match_behavior: merge? I think one way around this is to have super-groups for mappings that specify how to process each group, and then process them in list order.

mappings:
  - match_behavior: first
    matches:
    - match: [...]
      job-attributes: {...}
  - match_behavior: merge
    matches:
    - match: [...]
      job-attributes: {...}

I can see the allure for full control over how keys are merged, but I'm also having trouble coming up with examples where the extra features you've listed would be useful

My feeling was that it would be beneficial to have ways to specify precedence in a world of layered configs: "this stack's configs are more important than the defaults" vs. "defaults first, then mine". But reading your comments and thinking some more, I agree that for how CI is being used (or should be used) it may be sufficient to only support the "append"/"merge" behavior you have here. My goal is to be able to reuse the same merging logic for everything, to prevent behavior that is difficult to reason about.


To continue to provide some more context, I am also looking at refining the idea of "scope" in the CI configurations. By this I mean attributes that apply to all of CI, attributes that apply to specific job types, and attributes that apply to specific package rebuild job specifications. Everything would get initialized with the top-level attributes, then the job-type attributes, and finally the package job attributes from mappings.

Attributes like tags and image could be applied universally from a higher scope, and then a lower-level scope can add more specific details if it has them. The current state kind of has this idea of scope, but it isn't applied very consistently, so it would be nice to have a simpler rule.


Here is an example of how I think this could all look using your merge-logic key prefixes, though it doesn't necessarily have to if everything is assumed to append/insert for lists/maps, overwrite for scalar values at the YAML level, and the prefix is used at the mappings level only.

General CI configs: ci.yaml

spack-ci:
  tags: ["spack"]
  before_script:
  - . "./share/spack/setup-env.sh"
  - spack --version

  rebuild-job-attributes:
    before_script: [...] # download make/print the arch/etc.
    script: [...]
  signing-job-attributes:
    script: [...]
    image: spack-signing-image:1234-56-78
  mappings:
  - match_behavior: first
    matches:
    - match:
        - package
      job-attributes:
          tags: ["huge"]
          variables:
             KUBERNETES_CPU_REQUEST: "11000m"
cdash: {...}

General Linux CI configs: ci-linux.yaml

include:
- ci.yaml
ci:
  # Linux only package mappings
  image: ghcr.io/spack/default-linux-ci-image:2022-11-04
  +before_script: [...] # print linux system information and stuff
  +mappings: [...]

Linux Architecture configs: ci-linux-x86_64.yaml

include:
- ci-linux.yaml
ci:
  +tags: ["x86_64"]
  +before_script: [...] # download architecture specific gmake

Stack specific configs: spack.yaml

spack:
  include:
  - ci-linux-x86_64.yaml

  view: false
  concretizer:
    reuse: false
    unify: false
  config: {...}
  specs: [...]
  ci:
   # The specific version/configuration of packages in this stack require
   # more/less resources than the default configurations
    match_behavior: merge
    mappings: [...]

@blue42u
Contributor Author

blue42u commented Nov 18, 2022

Thanks for the extra context. I definitely would use a gitlab-ci:include key (or similar) myself!

I think trying to reuse the merge logic for the configuration will be too nuanced; there are too many fields that interact in non-trivial ways, for example variables set in the mappings controlling the actions of a shared before_script. If the configuration gets merged and then applied, I'm afraid I would have a hard time unraveling exactly what ends up in the generated jobs. (There are also some fields of gitlab-ci that IMHO should really use Spack's standard configuration-scope handling, like bootstrap and broken-specs-url.)

Would it suffice for your cases to "apply" the effects of each included configuration in turn onto the generated jobs? That is, if you have a case like so:

--- # spack.yaml
spack:
  gitlab-ci:
    include:
    - a.yaml
    - b.yaml
    mappings:
    - match: [package]
      job-attributes:
        +before_script:
        - echo main

... # a.yaml
gitlab-ci:
  build-job-attributes:
    before_script:
    - echo a

... # b.yaml
gitlab-ci:
  build-job-attributes:
    mappings:
    - match: ['os=ubuntu']
      job-attributes:
        before_script:
        - echo b

Then a package os=ubuntu build job would have a before_script with echo b and echo main (but not echo a). I think this would be more understandable and less nuanced: it's clear which parts are merged and in what order, and each individual configuration can be (more or less) self-contained.


Extending the above idea a bit to allow writing "mixins," I think there would need to be a bit more control over the order in which the configurations are applied. So, what about splitting the external configurations into a part that is apply-before and apply-after:

---  # ci-linux.yaml
apply-before:
  image: default-spack-linux-image:v1234
  build-job-attributes:
    +before_script: [...]  # Setup bits, print system info
apply-after:
  build-job-attributes:
    +before_script: [...]  # Necessary cleanup before the main script

And then an environment can include it, which applies multiple "mixins" like a stack:

spack:
  gitlab-ci:
    include:                      
    - mixin1.yaml 
    - mixin2.yaml
# Application order:
#   mixin1.yaml:apply-before
#   mixin2.yaml:apply-before
#   spack.yaml:gitlab-ci
#   mixin2.yaml:apply-after
#   mixin1.yaml:apply-after
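
The application order sketched in the comments above could be computed roughly like this (a hypothetical helper, purely illustrative):

```python
def application_order(mixins, env_name):
    """Sketch of the apply-before/apply-after ordering: apply-before
    sections in include order, then the environment's own gitlab-ci
    section, then apply-after sections in reverse include order."""
    order = [(name, "apply-before") for name in mixins]
    order.append((env_name, "gitlab-ci"))
    order.extend((name, "apply-after") for name in reversed(mixins))
    return order
```

The reversal on the way out gives the stack-like nesting shown in the comments: the first mixin's apply-after wraps outermost.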

Or (mutually exclusive) the exact order can be explicitly listed out:

spack:
  gitlab-ci:
    include-before:  # Application order:
    - mixin1.yaml    #   mixin1.yaml:apply-before
    - mixin2.yaml    #   mixin2.yaml:apply-before
    include-after:   #   spack.yaml:gitlab-ci
    - mixin1.yaml    #   mixin1.yaml:apply-after 
    - mixin3.yaml    #   mixin3.yaml:apply-after

I'm not entirely sure what to do with dependencies between mixins (e.g. if ci-linux-x86_64.yaml and ci-linux-aarch64.yaml both include: ci-linux.yaml, does ci-linux.yaml get applied twice?) The simple answer is "no dependencies" which encourages the "mixins" to be mostly independent, but I'm not confident that would be tractable in practice.

@kwryankrattiger
Contributor

The way includes would work is that all of spack.yaml is read, then the includes are applied in depth-first order. It should check whether a file has already been included and, if so, skip it.

In this example, spack.yaml is initialized to what it is, then mixin-config.yaml is applied, then mixin-1.yaml, and finally mixin-2.yaml.

graph TD
    A[spack.yaml]
    B[mixin-1.yaml]
    C[mixin-2.yaml]
    D[mixin-config.yaml]
    A --> B & C
    B --> D
    C --> D
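
The depth-first, skip-duplicates traversal described above could be sketched like so (a hypothetical helper; `includes` maps each file to the files it includes):

```python
def apply_order(root, includes):
    """Sketch of the include resolution described above: the root is
    initialized first, then each include is applied depth first, with a
    file's own includes applied before it and duplicates skipped."""
    seen = {root}
    order = [root]

    def visit(name):
        if name in seen:
            return  # already included once; skip repeats
        seen.add(name)
        for dep in includes.get(name, []):
            visit(dep)  # apply a file's own includes before it
        order.append(name)

    for dep in includes.get(root, []):
        visit(dep)
    return order
```

For the diamond in the graph above, mixin-config.yaml is applied only once even though both mixin-1.yaml and mixin-2.yaml include it.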

But I don't think this is generally going to be an issue since I don't see the need to have diamond patterns in configs. The layers should be pretty linear for this type of thing.

graph TD
  A[ci-base]
  B[os-base]
  C[os-architecture]
  D[spack.yaml]
  D --> C
  C --> B
  B --> A

@kwryankrattiger
Contributor

@blue42u I was looking into the config module a bit and found spack.config.merge_yaml, which seems to do the same thing you have here with _merge_attributes, but with a reversed concept of merge strategy: by default it will attempt to merge, but it uses :: on keys to denote an override.

This may be worth using.

@blue42u
Contributor Author

blue42u commented Dec 8, 2022

The reason I avoided spack.config.merge_yaml is that it overrides whenever it can't merge. So if, for instance, one clause sets cache: to a dict (for a single cache key) and another appends to it with a list (for multiple keys), the latter clause will override instead of merging, which isn't what was wanted. The merge prefix + errors in this case to let you know the configuration is faulty.

Is it critically important in practice? Probably not, but I like expressing intention and having sanity checks 😄

I could also see adding an explicit merge suffix to the generic Spack configuration format, something like:

key:: VALUE   # Always override key with VALUE
key+: VALUE   # Always merge VALUE with previous VALUE (error if not possible)
key: VALUE   # Merge if possible, override otherwise
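
For concreteness, a minimal sketch of those three behaviors in Python (a hypothetical helper, not proposed code; after YAML parsing the suffixes appear as trailing ":" and "+" on key names):

```python
def merge_configs(base, update):
    """Sketch of the proposed suffixes: "key::" always overrides,
    "key+:" always merges (error if impossible), and a bare "key"
    merges when it can and overrides otherwise."""
    def try_merge(old, new):
        if isinstance(old, dict) and isinstance(new, dict):
            return merge_configs(old, new)
        if isinstance(old, list) and isinstance(new, list):
            return old + new
        return None  # not mergeable

    result = dict(base)
    for key, value in update.items():
        if key.endswith(":"):            # "key::" in YAML: always override
            result[key[:-1]] = value
        elif key.endswith("+"):          # "key+:" in YAML: always merge
            key = key[:-1]
            merged = try_merge(result.get(key), value)
            if merged is None:
                raise TypeError(f"cannot merge key {key!r}")
            result[key] = merged
        else:                            # bare key: merge if possible
            merged = try_merge(result.get(key), value)
            result[key] = value if merged is None else merged
    return result
```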

@kwryankrattiger Think this would be a good direction?

@kwryankrattiger
Contributor

I think that makes sense. To take it a step further with the +: instead of reporting an error all the time, if you have a scalar merging with a list, convert the scalar to a list and allow the merge. This is kind of done in the config module for some fields already, and I think it would be more convenient to handle it automatically with +.

--- # a.yaml
MERGE_KEY: VALUE
OVERRIDE_KEY: VALUE
KEY: VALUE

--- # b.yaml
MERGE_KEY+: [a, b, c]
OVERRIDE_KEY:: x
KEY: [d, e, f]

--- # ab.yaml
MERGE_KEY: [VALUE, a, b, c]
OVERRIDE_KEY:: x
KEY: [d, e, f]

@blue42u
Contributor Author

blue42u commented Dec 20, 2022

Closing this implementation in favor of the more complete #34272

@blue42u blue42u closed this Dec 20, 2022