
Finish CDash report generation for spack install #7114

@tgamblin

Description

We have the beginnings of CDash report generation spread across several PRs, but we haven't merged a complete implementation into the mainline. Finish this. There is currently JUnit logging, and we'd like the CDash support to build on what's already in place there.

Just as the JUnit logging support creates a testsuite with a testcase per build, we'd like the CDash support to have a BuildName per package install attempt (including recursive ones). The idea is to have a dashboard for each Spack release (that we're still testing) and another dashboard for continuous builds off of develop.
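To make the analogy concrete, here is a minimal sketch of the testcase-per-build shape the JUnit logging produces, and the corresponding one-BuildName-per-install-attempt idea for CDash. The `results` dict shape and the function names are hypothetical, chosen only for illustration; they are not Spack's real data structures.

```python
import xml.etree.ElementTree as ET

def junit_report(results):
    # Build a JUnit-style tree: one <testcase> per package build,
    # mirroring what the existing JUnit logging does.
    # `results` maps a spec string to a success flag (hypothetical shape).
    suite = ET.Element("testsuite", name="spack-install",
                       tests=str(len(results)))
    for spec, ok in results.items():
        case = ET.SubElement(suite, "testcase", name=spec)
        if not ok:
            ET.SubElement(case, "failure", message="build failed")
    return ET.tostring(suite, encoding="unicode")

def cdash_build_name(spec):
    # The CDash analogue: one BuildName per install attempt,
    # including recursive dependency installs.
    return spec

print(junit_report({"zlib@1.2.11": True, "trilinos@12.12.1": False}))
```

One submission per BuildName would then let a release dashboard (and a continuous dashboard for develop) show each package install as its own row.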

Different types of Spack packages have different sets of build phases: e.g., Package only supports an install() method, while AutotoolsPackage supports configure, build, and install. Ideally, the CDash interface would have a column for each of these, but we may need to combine some.

The phases of Spack builds that are probably important to show in CDash are:

  1. Concretize
    • Happens before the build begins. If there are concretization errors or Spack errors, we'd like those reported to CDash. We're not currently sure where this belongs in the CDash format, or whether CDash has a good way to report errors that happen in the build tool itself.
  2. Fetch
    • Fetching the package archive, source repository, and resources. This could be shoehorned into the Update column.
  3. Configure or cmake
    • Maps nicely to CDash's configure phase
  4. Build
    • Spack's build and install functions map nicely to CDash's build phase
  5. Install
    • If build and install can be separated somehow, that would be nice, but it is not clear to me where this should go in a CDash build (is there some kind of deploy column?)
  6. Test (if --run-tests is supplied)
    • Better test support is planned but not yet implemented, so --run-tests currently happens as part of do_install()
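The phase placements above can be summarized as a simple lookup table. The standard CDash report file names (Update.xml, Configure.xml, Build.xml, Test.xml) are real, but this mapping is only a sketch of the proposal, with the open questions (concretize, install) noted inline.

```python
# Hypothetical mapping of Spack build phases to CDash report files,
# following the placements suggested above.
PHASE_TO_CDASH = {
    "concretize": "Update.xml",   # placement unresolved; errors may need Notes
    "fetch":      "Update.xml",   # shoehorned into the Update column
    "configure":  "Configure.xml",
    "cmake":      "Configure.xml",
    "build":      "Build.xml",
    "install":    "Build.xml",    # no obvious separate "deploy" column in CDash
    "test":       "Test.xml",     # only when --run-tests is supplied
}

def report_file_for(phase):
    # Return the CDash report file a phase's output should land in,
    # or None for phases we haven't mapped.
    return PHASE_TO_CDASH.get(phase)
```

This makes the combining explicit: fetch shares Update with concretize, and install shares Build with build, unless CDash offers better columns for them.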

If someone runs spack install trilinos or spack install xsdk, this should concretize the trilinos spec, catch any errors, and, if things are clean, kick off builds for trilinos and all of its dependencies, generating a report (a unique build) for each dependency. This is pretty much how the JUnit reporting works; the CDash reporting should just be more thorough.
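The "one report per dependency" behavior amounts to walking the concretized DAG and emitting a build for each node, dependencies first. A minimal sketch, assuming a dependency map rather than Spack's real Spec objects (the trilinos/netlib-lapack/zlib chain below is an illustrative example, not the actual concretized DAG):

```python
def builds_to_report(root, deps):
    # Given a root spec name and a map {pkg: [direct deps...]},
    # return the unique builds to report, dependencies before dependents.
    # Stand-in for traversing Spack's concretized DAG.
    seen, order = set(), []

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps.get(pkg, []):
            visit(dep)
        order.append(pkg)

    visit(root)
    return order

print(builds_to_report(
    "trilinos",
    {"trilinos": ["netlib-lapack"], "netlib-lapack": ["zlib"]}))
# → ['zlib', 'netlib-lapack', 'trilinos']
```

Each name in the returned list would become one CDash BuildName, so a single spack install command produces a dashboard row per package.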

Related issues:

Metadata

Labels

tests: General test capability(ies)
