Conversation
albestro left a comment
I'm just wondering how the mqtt5 conflict, and more generally the availability of a "sub-package", is modelled?
I see
spack/var/spack/repos/builtin/packages/boost/package.py
Lines 156 to 157 in bef1397
and
spack/var/spack/repos/builtin/packages/boost/package.py
Lines 296 to 297 in bef1397
I would expect mqtt5 to be added there too, no? Am I missing something?
@hainest your input on this question would be valuable! I wasn't quite sure about the various (seemingly) ad-hoc conflicts and methods of enabling/disabling subpackages, so I only included the one that seems to be used the most, but I certainly wouldn't mind modeling that as a conflict instead. I just didn't want to start changing that for all the subpackages, as that would be a significantly bigger change.
Unrelated, but do the build failures here look familiar to anyone: https://gitlab.spack.io/spack/spack/-/jobs/16344209#L423? A number of packages are failing with an error that sounds unrelated enough that it could be a failure already on develop, but I wouldn't be surprised if the new version of boost somehow triggers an error like that.
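For illustration, modelling sub-package availability as a conflict might look like the hedged sketch below. This is not the actual Boost recipe: the variant name comes from the discussion, but the version range and message are illustrative placeholders, and the fragment only runs inside Spack's package framework.

```python
# Hypothetical excerpt from a Spack package.py; requires Spack's
# package framework, so it is not runnable standalone.
from spack.package import *


class Boost(Package):
    # ... existing versions, variants, and dependencies elided ...

    # Expose the sub-package as an ordinary variant.
    variant("mqtt5", default=False, description="Build the Boost.MQTT5 library")

    # Instead of ad-hoc enabling/disabling logic, declare that the
    # variant is unavailable on versions that predate the sub-package.
    # The "@:1.87" range is a placeholder assumption, not a verified fact.
    conflicts(
        "+mqtt5",
        when="@:1.87",
        msg="Boost.MQTT5 is only available in newer Boost releases",
    )
```

With this modelling, `spack spec boost@1.87.0 +mqtt5` would fail concretization with the given message rather than silently building without the sub-package.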
Conflicts are ad-hoc right now. If you have a specific conflict you need to resolve, please submit an issue here and I'll see what I can do. I really need to get #30627 done. It's in the last 5%, which is the longest part.
That's good enough for now. I'm drastically changing variant modelling in #30627. I really need to finish that work. It's been lingering for too long.
Yes. It's a ridiculously bonkers big change that I've lost many hairs over.
That's running on a machine at UOregon. I have access to it, so I'll see if I can reproduce it.
@spackbot re-run failed pipelines
Ooh, awesome. I wasn't aware of that PR, and it's nice that you're working on it! In that case I'd definitely prefer to stick to the current suboptimal conflicts and let you handle it in #30627 😉
The CI jobs are using a compiler wrapper, so I can't really test this. I'll see if I can get the bot to re-run the jobs.
@spackbot re-run pipeline
I've rebased on latest develop. Apparently some or all of the CI failures are known failures that have since been fixed (and the cray pipeline is known to be finicky...).
* develop: (752 commits)
  - mesa: add v23.3.3 and use py-packaging while python>=3.12 (spack#49121)
  - gcc: add v15.1.0 (spack#50212)
  - draco: add v7.20.0 (spack#49996)
  - sgpp: update dependencies and variants (spack#49384)
  - input_analysis.py: fix conditional requirements (spack#50194)
  - boost: add 1.88.0 (spack#50158)
  - mapl: add v2.55.1 (spack#50201)
  - mepo: add v2.3.2 (spack#50202)
  - py-repligit: add v0.1.1 (spack#50204)
  - [package updates] Bump version of cp2k and sirius (spack#50141)
  - petsc4py: update ldshared.patch for v3.20.1, and skip for v3.23.1+ (spack#50170)
  - namd: add v3.0.1 (spack#50192)
  - geomodel: depend on c (spack#49781)
  - CompilerAdaptor: add support for opt_flags/debug_flags (spack#50126)
  - Add ls alias to spack {compiler, external} (spack#50189)
  - covfie: depend on c (spack#50190)
  - lua-sol2: add v3.5.0 (spack#49970)
  - crtm-fix: fix directory logic (spack#50172)
  - py-build: add v1.2.2 (spack#50148)
  - py-pillow: fix build (spack#50177)
  - ...
* boost: add 1.88.0
* pika: add conflict with boost 1.88.0