…gs I tried, since these specify exact condition ids - not very useful)
```
% then the provider is the root package.
attr("root", PackageNode) :- attr("virtual_root", VirtualNode), provider(PackageNode, VirtualNode).

attr("virtual_root", VirtualNode) :- attr("root", PackageNode), provider(PackageNode, VirtualNode).
```
---
This isn't needed, but seems reasonable.
---
Hmm, I need to think about that. `virtual_root` was used to signal that the root spec is a virtual. I'm not sure in which context this new rule is helpful 🤔
```
error(100, "e2") :- provider(_, VirtualNode), not attr("virtual_node", VirtualNode).
error(100, "e3") :- provider(PackageNode, _), not attr("node", PackageNode).

error(100, "e4") :- attr("root", node(ID, PackageNode)), ID > min_dupe_id.
```
---
Introducing these error facts helped me figure out why

```yaml
packages:
  virtual-w:
    require: cflags="-O1"
```

doesn't work (i.e. requirements from the virtual that do not explicitly mention an implementation): virtuals can't propagate most constraints to nodes.
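As an aside, the distinction can be sketched in a few lines of Python. This is a toy illustration, not Spack code: `requirement_target` and its inputs are hypothetical names. The point is that a requirement on a virtual only has a package node to land on when it explicitly names an implementation.

```python
# Toy sketch (NOT Spack's implementation): a requirement on a virtual that
# does not name an implementation has no package node to attach to, because
# most attributes (compiler flags among them) live on package nodes.

def requirement_target(virtual, requirement, known_packages):
    """Return the package node the requirement's constraints would attach to,
    or None if the requirement only mentions the virtual's attributes."""
    first_token = requirement.split()[0].split("+")[0]
    if first_token in known_packages:
        # e.g. "openmpi +shared": constraints attach to the named provider node
        return first_token
    # e.g. 'cflags="-O1"': only the virtual node exists in the condition
    return None

packages = {"openmpi", "mpich", "y"}
assert requirement_target("mpi", "openmpi +shared", packages) == "openmpi"
assert requirement_target("virtual-w", 'cflags="-O1"', packages) is None
```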
The CI failure appears legitimate in the sense that it is caused by this PR. I'm not sure what was happening before, but I think this file should be updated.
@scheibelp Fwiw it also seemed a bug to me, but this is why we support that: #38447 (comment)
There seems to be some openness in that thread to changing the semantics, and that is what I would support here. The most succinct way I can express my disagreement is to say that our modeling is the cause of the CI failure (I have reproduced this in #40261): we are relying on obscure and hard-to-explain choices made by the concretizer to support this case (namely, the concretizer succeeds by avoiding the creation of a virtual node), which is not something I'd like to depend on. If we want to support it, we should support it in a way that makes sense.
```
    activate_requirement(node(ID, Package), X),
    imposed_constraint(Y,"node_flag_source", A1, A2, A3).
% None of the following 3 are needed for `virtual-w:require: w cflags="-O1"`
% (1) and (3) are needed for `y:require: cflags="-g"`
```
---
Regarding this and your point:

> That said, the mpi case works also with this patch - I'll try to understand why, since the names of the virtual "mpi" and of the package "openmpi" don't match the pattern above
I will note that `emit_facts_from_requirement_rules` has differing behavior for virtuals vs. providers. For `y`:

```
pkg_fact("y",trigger_id(0)).
pkg_fact("y",trigger_msg("y cflags=\"-g\"")).
condition_requirement(0,"node","y").
condition_requirement(0,"node_flag","y","cflags","-g").
condition_requirement(0,"node_flag_source","y","cflags","y").
```

For `w`:

```
pkg_fact("virtual-w",trigger_id(0)).
pkg_fact("virtual-w",trigger_msg("virtual-w")).
condition_requirement(0,"virtual_node","virtual-w").
```
The rule is easily satisfied for virtuals, but for non-virtuals the "condition" is equal to the constraint. I think that's meant to support groups, but IMO it ought to be refactored into something sensible.
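The asymmetry can be mimicked with a toy emitter. This is purely illustrative: `toy_condition_facts` is a made-up name, not Spack's `emit_facts_from_requirement_rules`, and the fact strings are modeled on the examples quoted above.

```python
# Toy reconstruction of the asymmetry: for a virtual the trigger condition is
# a single, trivially satisfiable virtual_node fact; for a non-virtual the
# trigger condition restates the constraint being imposed.

def toy_condition_facts(name, msg, constraints, is_virtual, trigger_id=0):
    """constraints: list of (attr, *args) tuples taken from the requirement."""
    facts = [f'pkg_fact("{name}",trigger_id({trigger_id})).',
             f'pkg_fact("{name}",trigger_msg("{msg}")).']
    if is_virtual:
        # holds whenever the virtual node is in the graph at all
        facts.append(f'condition_requirement({trigger_id},"virtual_node","{name}").')
    else:
        # the "condition" is just the constraint itself, restated fact by fact
        for attr_args in constraints:
            args = ",".join(f'"{a}"' for a in attr_args)
            facts.append(f'condition_requirement({trigger_id},{args}).')
    return facts

w_facts = toy_condition_facts("virtual-w", "virtual-w", [], is_virtual=True)
assert w_facts[-1] == 'condition_requirement(0,"virtual_node","virtual-w").'
```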
---
Update: none of these are required if using:

```
{ impose(EffectID, node(ID, Package)) } :-
  pkg_fact(Package, condition_effect(Y, EffectID)),
  requirement_group_member(Y, Package, X),
  activate_requirement(node(ID, Package), X),
  requirement_group(Package, X).
```

(added above)
```python
for x in specs:
    if not x.virtual:
        self._possible_virtuals.update(str(i) for i in x.package_class.provided.keys())
```
---
Ok, this would be part of the changes in semantics mentioned above. I'm not sure why we would need it, though 🤔
```
1 { attr("virtual_node", node(0..X-1, Virtual)) : max_dupes(Virtual, X) } :-
  virtual_condition_holds(node(ProviderID, Provider), Virtual),
  attr("node", node(ProviderID, Provider)).
```
---
This will not blend with the semantics we want to merge in #35322: if a condition to provide a virtual holds, it is not necessary for the node to provide the virtual in order for the virtual node to be present.
---
Two points:
- This only says that if X provides a virtual Y, that Y must exist as a virtual. It does not say that X must be the provider for Y. Therefore I don't think it actually conflicts (at least with the high-level goal stated in Cherry-picking virtual dependencies #35322)
- Without this, note that `packages:<virtual>:require` cannot guarantee imposing requirements on providers:

  ```yaml
  packages:
    mpi:
      require: openmpi +shared
  ```

  i.e. this can only place constraints on MPI right now when MPI is a (virtual) node in the DAG. Another option is to take constraints like `require: openmpi +shared` and generate a secondary requirement for the provider (whether or not it is providing something).
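That second option could be sketched roughly as follows. This is illustrative only: `provider_requirements` and the data shapes are hypothetical names, not Spack's implementation.

```python
# Sketch of rewriting a requirement on a virtual that names an implementation
# ("mpi: require: openmpi +shared") into a requirement attached to the
# provider package itself, active whenever that package node exists.

def provider_requirements(virtual_requirements, providers_by_virtual):
    """Map {virtual: [spec strings]} to {package: [spec strings]} for every
    spec whose leading token names a known provider of that virtual."""
    result = {}
    for virtual, specs in virtual_requirements.items():
        for spec in specs:
            head = spec.split()[0].split("+")[0]
            if head in providers_by_virtual.get(virtual, ()):
                # attach the constraint to the provider package directly,
                # whether or not it ends up providing the virtual in the DAG
                result.setdefault(head, []).append(spec)
    return result

reqs = {"mpi": ["openmpi +shared"]}
providers = {"mpi": {"openmpi", "mpich"}}
assert provider_requirements(reqs, providers) == {"openmpi": ["openmpi +shared"]}
```

A pure-flags requirement like `cflags="-O1"` names no provider, so under this sketch it would simply generate no secondary requirement.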
---
> This only says that if X provides a virtual Y, that Y must exist as a virtual.

I don't read it exactly like that, but rather: if X satisfies the conditions to provide the virtual Y, then Y must exist as a virtual. This is not always true. For instance, if in a solve `intel-parallel-studio` is used only for `mpi`, there does not need to be a `lapack` virtual node in the answer set.
---
> I don't read it exactly like that, but rather: if X satisfies the conditions to provide the virtual Y, then Y must exist as a virtual.

Agree.

> if in a solve intel-parallel-studio is used only for mpi there does not need to be a lapack virtual node in the answer set.
What do you think should happen for something like:

```yaml
packages:
  lapack:
    require:
    - one_of: [openblas, netlib-lapack]
```

...

```
spack spec intel-parallel-studio+mkl
```

?
(I don't think this is a counterexample to your claim, but I am curious about the desired behavior.)
Overall, it sounds like you'd prefer the second strategy I described in #39880 (comment) (namely, infer provider-specific requirements as provider-specific requirement groups).
---
> What do you think should happen for something like: ...
I think it should install, since we asked for the intel-parallel-studio+mkl package explicitly.
```
requirement_group_satisfied(node(ID, Package), X) :-
  1 { condition_holds(Y, node(ID, Package)) : requirement_group_member(Y, Package, X) } 1,
  1 { requirement_condition_satisfied(Y, node(ID, Package)) : requirement_group_member(Y, Package, X) } 1,
```
---
Just a reminder that, in that case, we also need to make "any_of" symmetric.
```
{ impose(EffectID, node(ID, Package)) } :-
  pkg_fact(Package, condition_effect(Y, EffectID)),
  requirement_group_member(Y, Package, X),
  activate_requirement(node(ID, Package), X),
  requirement_group(Package, X).

{ impose(EffectID, node(ProviderID, Provider)) } :-
  pkg_fact(Provider, condition_effect(Y, EffectID)),
  provider(node(ProviderID, Provider), node(VirtualID, Virtual)),
  requirement_group_member(Y, Virtual, X),
  activate_requirement(node(VirtualID, Virtual), X),
  requirement_group(Virtual, X).
```
---
I'll look at this more carefully, but I don't immediately see why we can give a choice to impose something without anything in the body of the rule saying that the corresponding trigger condition is satisfied.
The only explanation I see is that rules in requirements are always of the kind:
```
%----------------------------------------------------------------------------
% Trigger conditions
%----------------------------------------------------------------------------
pkg_fact("openmpi",trigger_id(0)).
pkg_fact("openmpi",trigger_msg("openmpi%gcc+static")).
condition_requirement(0,"node","openmpi").
condition_requirement(0,"variant_value","openmpi","static","True").
condition_requirement(0,"node_compiler","openmpi","gcc").
condition_requirement(0,"node_compiler_version_satisfies","openmpi","gcc",":").
%----------------------------------------------------------------------------
% Imposed requirements
%----------------------------------------------------------------------------
pkg_fact("openmpi",effect_id(0)).
pkg_fact("openmpi",effect_msg("openmpi%gcc+static")).
imposed_constraint(0,"node","openmpi").
imposed_constraint(0,"variant_set","openmpi","static","True").
imposed_constraint(0,"node_compiler_set","openmpi","gcc").
imposed_constraint(0,"node_compiler_version_satisfies","openmpi","gcc",":").
```
But that also means they work logically backwards with this choice: we can give a choice to impose the rules because they are designed in such a way that imposing the effect implies the causes.
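This mirroring can be checked mechanically on the openmpi facts above. A quick sketch: the `normalize` helper and the `_set`/`_value` suffix-stripping are my own framing of the pattern, not Spack code.

```python
# Check that each trigger condition has a matching imposed constraint (modulo
# the "_set"/"_value" attribute suffixes), so imposing the effect
# re-establishes the trigger. Data transcribed from the openmpi example.

TRIGGER = [("node", ("openmpi",)),
           ("variant_value", ("openmpi", "static", "True")),
           ("node_compiler", ("openmpi", "gcc")),
           ("node_compiler_version_satisfies", ("openmpi", "gcc", ":"))]
IMPOSED = [("node", ("openmpi",)),
           ("variant_set", ("openmpi", "static", "True")),
           ("node_compiler_set", ("openmpi", "gcc")),
           ("node_compiler_version_satisfies", ("openmpi", "gcc", ":"))]

def normalize(attr):
    # the imposed side uses "_set" attributes where the trigger side reads values
    for suffix in ("_set", "_value"):
        if attr.endswith(suffix):
            return attr[: -len(suffix)]
    return attr

trigger_set = {(normalize(a), args) for a, args in TRIGGER}
imposed_set = {(normalize(a), args) for a, args in IMPOSED}
assert trigger_set == imposed_set  # effect and cause carry the same atoms
```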
---
> I don't immediately see why we can give a choice to impose something without anything in the body of the rule saying that the corresponding trigger condition is satisfied.
In this case there is no "valid" corresponding trigger condition:
- the condition requirements generated (the ones you list in the example directly above) are not actually for triggering the impose, but rather to record whether requirement group members are satisfied (technically they also trigger the impose, but they are effectively useless for that purpose)
- currently the only "influence" that the requirement group creates is error conditions when its members are not satisfied
- "somehow", we have a means to thwart those error facts by deriving attributes for all requirements other than `cflags`
- IMO, it is not worth understanding how or why this works, and saying that we are allowed to impose things if anyone might want to impose them is reasonable
This has been partially covered by #40639. Although this PR does more than that one, I figure it should be closed until it can be connected to a wider issue.
Fixes #39671
It appears an extra layer of indirection has been added in the form of `condition_effect` and `condition_trigger` since the last time I tried to understand the concretizer (between the requirement group member and the imposed constraint). I observed in #39671 that our basic test was passing even though a simple example outside of the tests was failing, and I can say that is because:
I think we should use a global ID numbering system, since all sorts of strange bugs could result from this.
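For illustration, the kind of fix suggested here could look like the sketch below (hypothetical helper names, not Spack code): trigger and effect ids are currently numbered per package, as the examples above show (`"y"` and `"virtual-w"` both get `trigger_id(0)`), so any lookup keyed on the bare id can silently mix conditions from different packages.

```python
# A process-wide counter removes the per-package id collision: every call
# hands out an id that is unique across all packages in the solve setup.
import itertools

_next_id = itertools.count()

def new_trigger_id():
    """Return an id that is unique globally, not just within one package."""
    return next(_next_id)

ids = [new_trigger_id() for _ in range(3)]
assert len(set(ids)) == 3  # globally unique
assert ids == sorted(ids)  # monotonically increasing
```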