Maintain a view for an environment#10017
Conversation
|
I really want this (as I want to switch my private Python stack to being Spack managed), but my use case requires that all specs in an environment be concretized together; otherwise I'll end up with multiple versions of Python, which will be incompatible with each other. I still didn't get @citibeth's argument for having multiple versions of a library in an environment, so I'll just assume that it is a valid use case. Nevertheless there should be the option to say something like `spack env concretize --unique`, which would essentially be the same as creating a superpackage with all dependencies. |
|
@healther: I'm in favor of eventually making views concretize all together. The only reason as far as I'm concerned we don't do it right now is that the concretizer doesn't support it well. |
|
@tgamblin Do you have a timeframe in mind for when the new concretizer will be available? From a conceptual point of view, isn't it "just" a matter of treating an environment as an ad-hoc package? Or is it more complicated than that? |
|
@healther: I'd rather have a reasonable guarantee that we won't run into conflicts, as well as good error messages, before moving to something like that. At the moment we could do what you suggest but I do not think we'd be able to tell users something intelligent about their choices. |
|
@tgamblin Yeah fair enough. It was more a "did I miss something fundamentally" than a "this is the correct thing" :) Do you have an idea on the time frame of the new concretizer? |
|
Andreas,
On Wed, Dec 5, 2018 at 5:18 AM Andreas Baumbach ***@***.***> wrote:
I really want this (as I want to switch my private python stack to being
spack managed), but my usecase requires that all specs in an environment
need to be concretized together, otherwise I'll end up with multiple
versions of python, which will be incompatible with each other.
I still didn't get @citibeth's argument for
having multiple versions of a library in an environment, so I'll just
assume that it is a valid use case, nevertheless there should be the option
to say something like spack env concretize --unique, which would
essentially be the same as creating a superpackage with all dependencies
OK, here's a really simple argument. You need to run two applications as
part of your environment. The two applications require different versions
or options on some underlying library. If/when that is the case, it is not
(currently) possible to concretize the two together.
I really want this (as I want to switch my private python stack to being
spack managed), but my usecase requires that all specs in an environment
need to be concretized together, otherwise I'll end up with multiple
versions of python, which will be incompatible with each other.
I hear that you really want this. But if you want to switch your Python
stack to Spack, you can do that today, as there are other ways to ensure
that the same version of Python is used everywhere. You can look at how
I do it:
I created an environment called *twoway-dev-gibbs*:
https://github.com/citibeth/spack/blob/efischer/giss/var/spack/environments/twoway-dev-gibbs/env.yaml
Note that this environment imports a number of Spack configs. See this one:
https://github.com/citibeth/spack/blob/efischer/giss/var/spack/environments/configs/gissversions/packages.yaml
In that config, I specify Python 3.5.2. And it works... every single spec
I concretize comes with Python 3.5.2:
https://github.com/citibeth/spack/blob/efischer/giss/var/spack/environments/configs/gissversions/packages.yaml
I'd also say that even with things the way they are, not many
concretizations are necessary to get what you need. In my
*entgvsd-dev-gibbs* environment I
get about 70 packages from just:
py-giss:
py-ply:
py-psutil:
(In the environment I mentioned previously, I get all 70 Python packages
from ZERO explicitly listed Pythons in the environment; because they all
come in as dependencies for something else.) Hope this helps. Sorry it's
not doing it the way you are looking for; but I'm here to help out with the
infrastructure we have now, if you want to give it a try.
-- Elizabeth
|
Sure, but you end up in an (at least potentially) broken environment, because there is no way to define the precedence of the library versions. I agree that there are cases that Spack is not able to handle (yet), but that doesn't make your environment any more defined than doing `spack load -r app1 app2` would.
It really is just a convenience thing for me. I'd like to only use one package manager, whereas right now I have a combination of
That works, because none of your specs has a `python@:2.8` dependency. As I said, my comments are more a reminder that this is still a wanted feature, but you shouldn't take it as a "this is blocking"; in that case I'd be asking what I can do to fix it :) |
|
@citibeth: I think this boils down to the difference between an environment intended for running code vs. one intended for building code. We're not going to deal with that here, but I have ideas for how we could. I actually think both of these design points are possible, maybe with a little config. But again -- this is just a first cut. This PR just adds a view, with some precedence rules. |
|
On Tue, Dec 11, 2018 at 5:07 AM Andreas Baumbach ***@***.***> wrote:
OK, here's a really simple argument. You need to run two applications as
part of your environment. The two applications require different versions
or options on some underlying library. If/when that is the case, it is not
(currently) possible to concretize the two together.
Sure, but you end up in a, at least potentially, broken environment,
because there is no way to define the precedence of the library versions.
Spack environments defines a precedence when there are conflicts. (I
forget which precedence we settled on in the end; either the one listed
first in the environment, or the one listed last, takes precedence).
Depending on the situation, you may or may not consider this environment to
be "broken." Because of RPATH and similar mechanisms, many packages can
find their stuff reliably, no matter how your env vars are set. In that
case, everything will work.
I agree that there are cases that spack is not able to handle (yet), but
that doesn't make your environment anymore defined than doing spack load
-r app1 app2 would.
Putting aside the issue that `spack load` is nondeterministic and Spack
Environments are.... I suppose I would agree with you. If something in
app1 conflicts with something in app2, then one of them will take
precedence.
In that config, I specify Python 3.5.2. And it works... every single spec
That works, because none of your specs has a python@:2.8 dependency ;)
otherwise you would end up with both and without a warning. My problem
stemmed from py-jupyter-notebook's node-js dependency and I was on a
branch that doesn't yet rely on the system Python.
I agree, that's tricky nasty stuff. I might try specifying ^[email protected] for
the thing(s) that need it. I usually have problems when a build tool I
need for one package requires Python2. Then the build dependency for
Python2 messes up with my run dependency for Python3. In the past I've
gotten around this by hacking the packages. Maybe this will help you?
#7926
Or, I might just create two separate Spack Environments... one with 2.8 and
one with 3.x. There's nothing stopping you from loading the modules of
both environments together. (But again... if PYTHONPATH is used, Python
itself doesn't know how to simultaneously maintain one PYTHONPATH for its
Python2 stuff and one for its Python3 stuff. I don't see how to get around
this easily, with or without Spack).
@citibeth: I think this boils down to the
difference between an environment intended for running code vs. one
intended for building code. We're not going to deal with that here, but I
have ideas for how we could. I actually think both of these design points
are possible, maybe with a little config.
I don't believe this is a meaningful distinction:
1. An environment meant for building code is already constructed by Spack,
implicitly, every time it builds a package. The most reliable way to gain
access to and use that environment is to ask Spack to assist in building
your stuff. That is what Spack Setup does! But I realize Spack Setup is
not available for all packages.
2. It is rare that somebody (in HPC land at least) needs to build code but
not run anything. Usually running the code you built requires running
other packages as well to prepare inputs, interpret outputs, etc. It would
be cumbersome to have one Spack Environment required for typing "make" and
another required for testing the program you just built. Moreover, many
build processes, especially Fortran stuff, require some custom
preprocessor based on Python or whatnot. So again, building stuff requires
that you run stuff.
-- Elizabeth
|
becker33
left a comment
Three bugs in the behavior of this PR
- When in an environment, `spack uninstall` should remove the package from the view.
- When in an environment, `spack remove foo; spack concretize` should remove `foo` from the view.
- If `foo` is installed in Spack outside of an environment, `spack add foo; spack concretize` should add `foo` to the view.

All of these are examples of the general principle that the packages in the view should be identical to the packages listed under `N installed package(s)` in the output of `spack -e ENV find`.
After install, uninstall, and concretize commands, we need to check the specs in the environment against the specs in the view, and add/remove from the view until they match.
Also, we may want to consider not adding deps to the view. I think the view should only contain the root specs. If you care about a non-root spec, make it a root. I both think that's the behavior we want, and that obviates the need for some of the ordering logic you go through here.
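The reconciliation step described above (checking the specs in the environment against the specs in the view after install, uninstall, and concretize commands) could be sketched roughly as follows. The function name and the use of spec identifiers as plain strings are illustrative assumptions, not Spack's actual internals:

```python
def reconcile_view(view_specs, env_installed_specs):
    """Return (to_add, to_remove) so that the view matches the environment.

    Both arguments are sets of spec identifiers (e.g. name/hash strings).
    """
    to_add = env_installed_specs - view_specs
    to_remove = view_specs - env_installed_specs
    return to_add, to_remove

# Example: 'foo' was uninstalled from the environment, 'bar' newly installed.
view = {"python/abc123", "foo/def456"}
env = {"python/abc123", "bar/789aaa"}
add, remove = reconcile_view(view, env)
assert add == {"bar/789aaa"}
assert remove == {"foo/def456"}
```

Running this reconciliation after every mutating command would cover all three bug cases listed above with one mechanism.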
|
[Sorry I didn't realize this was a PR thread, not an email thread] Thoughts on the PR itself...
Not sure what I think of this. Spack generally provides two ways to enable a bunch of packages: (a) load modules, and (b) set up a view. There are pros and cons to these approaches, which I will try to list below. At this point in time, I don't think that Spack should prefer one over the other. Pros/Cons:
What does Therefore, the view is not an intrinsic part of the environment, and shouldn't be included as such.
Since views and module scripts are semantically equivalent, they need to work the same in Spack. Syntax for how they are accessed / created has historically been different because the features were developed separately by separate people, and nobody unified them. With the advent of environments, that needs to change.

But it does bring up some tricky issues... because although they should work the same from the user's point of view, they aren't generated the same underneath. Views are large and are created bit by bit, package by package; whereas module scripts are generated all at once based on a list of packages. Therefore, incremental changes are easier for views, whereas globally regenerating is easier for module scripts. We should keep this in mind and implement things thoughtfully, but not burden the user with the difference.

The other option would be to implement views as modules are currently done: you build your environment, then you run a command. In any case, views and module scripts need to work the same. PRs should help them converge, not diverge. Therefore, this PR will probably need to pay some attention to module scripts as well as views.
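The asymmetry described above can be made concrete with a small sketch: a view is naturally updated incrementally, one package at a time, while a module load script is naturally regenerated all at once from the package list. All names here are illustrative, not Spack's actual API:

```python
def update_view(view_links, package, files):
    """Incremental: link one package's files into an existing view.

    view_links maps a relative path in the view to the owning package
    (standing in for an actual symlink on disk).
    """
    for f in files:
        view_links[f] = package
    return view_links

def regenerate_module_script(packages):
    """All-at-once: rebuild the whole load script from the package list."""
    return "\n".join("module load %s" % p for p in packages)

view = {}
update_view(view, "zlib", ["lib/libz.so"])
update_view(view, "python", ["bin/python"])
script = regenerate_module_script(["zlib", "python"])
assert view == {"lib/libz.so": "zlib", "bin/python": "python"}
assert script == "module load zlib\nmodule load python"
```

A unified UI would hide this difference: adding a package either links its files into the view or triggers a full script regeneration, depending on the loader.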
The QUESTION: Is there any value in generating more than one view from the same environment? I think probably not.
The prioritization was already established in the previous environments PR and the module load script. Unless there's a serious problem with what we already have, prioritization needs to be the same when using views as when using modules. |
citibeth
left a comment
-
We need a generic term for "view or module script". Maybe "loader"? I'll go with that for now, but hopefully we'll think of something better.
-
It should be possible to specify, build and use environments without putting details of the preferred loader in the environment specification. (I suppose that if users WANT to put preferred loaders in their env spec, we shouldn't stop them).
-
Develop a syntax for this PR that is agnostic to the kind of loader you intend to use. I would suggest something allowing people to add as many loaders as they like to a single environment instance, something like:
spack env create
spack install X
spack env loader --add view # Add a view in the default location in the *environments/* directory
spack env loader --add module # Add a module / module load script in the *environments/* directory
spack env loader --add view /my/path # Adds a second view in /my/path
Now every time you add/remove from the environment, it will add/remove from two views and one module load script. Sysadmins might appreciate this ability to simultaneously generate views and scripts, based on the preferences of different users.
-
Views and module scripts need to be semantically equivalent. Every feature needs to work the same way for both of them. Unit tests should be developed to ensure this.
-
I believe that `spack env activate` doesn't just set an idea of a "current" environment; it also loads the environment, at least as it's already been built? Does it do this by loading env vars similar to how it would work with modules? I don't know... but this looks like an area ripe for re-do, if we have more formalized notions of views / module scripts coming up. If you have a view enabled, should it work by setting the env vars to the view? If you have multiple loaders enabled, which one should it use to activate? (Probably the first one added.)
-
A view for a particular environment should be well-defined... by what you get if you build the environment, then create the view. In this system, there are many paths by which you could create a view. For example, you could create an environment, add 3 packages, enable the view, remove one of the packages previously added, and then add two more. Both cases should / must end up with exactly the same result in the resulting view, anything else would be a bug. Unit tests need to be added to ensure that a variety of paths result in exactly the same view. If we can't find a way to reliably do that, we should go to the (less convenient) procedure of building the environment, then generating a view from it as a snapshot of that environment at that point in time.
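The path-independence property argued for above can be stated as a testable invariant: any two sequences of add/remove operations that describe the same final set of packages must produce identical views. A minimal sketch (helper name is hypothetical):

```python
def final_view(operations):
    """Replay (op, package) pairs and return the resulting package set."""
    view = set()
    for op, pkg in operations:
        if op == "add":
            view.add(pkg)
        elif op == "remove":
            view.discard(pkg)
    return view

# The example from the text: add 3 packages, remove one, add two more...
path_a = [("add", "a"), ("add", "b"), ("add", "c"),
          ("remove", "b"), ("add", "d"), ("add", "e")]
# ...versus building the same final environment in one pass.
path_b = [("add", "a"), ("add", "c"), ("add", "d"), ("add", "e")]
assert final_view(path_a) == final_view(path_b)
```

Unit tests along these lines, replaying many randomized operation orders and comparing the resulting views, would catch the class of bugs described here.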
That seems agreeable and also readily doable.
If this PR was merged as-is, it would add functionality for maintaining views in an automated manner. This isn't available yet for modules, so the type of divergence is less problematic (compared for example to your point about precedence). I think offering automatic management of a module file can be handled later.
This could be moved into a separate config file: I do see that how the environment is exposed to one user may not make sense to place in the
I think that one consistency guarantee is that if a user copies an environment and creates a view from it, and then regenerates the view in the original environment from scratch, that they should be consistent; this PR does do that. In this PR, environment views are automatically updated with |
There are two issues here: (1) the proper syntax for these commands, and (2) implementing the functionality behind the syntax.
I'd be happy with just specifying on the command line whether you want a view or module script, etc. That can be accommodated without removing the functionality from
I believe that the default should be no view or module script generation. Some people won't want views, and it seems counter-intuitive to have to say more to get Spack to do less.
I would look at the bug reports already listed, and consider adding unit tests that would have caught those bugs.
My thoughts on this issue:
|
Our experience tells us that having 100s of directories in $PATH makes things like tab-completion painfully slow (in fact that was the original motivation to implement this functionality).
I wasn't aware of that. Though I'd argue that it is only barely more deterministic than `spack load`.
There are use cases, i.e. when you use spack to provision a software environment for other people to work against. We do not necessarily want to force people to develop in our environment.
I don't see them as such opposites. Really, for us views are a way to reduce the size of the env variables that need to be set! In essence we provide a module for each view, and each view essentially supplements the structure of `/`. Out of curiosity: do you see an advantage of not using a view? Except inode usage, which could be fixed by using hard links (except for git, because it already uses O(100) hard links in its own prefix...)
I agree, but I don't see how this behaviour is compatible with the "copy the definition file and you get the same result" philosophy. If the order of additions and deletions of specs is important, then how do we define the resulting environment consistently? |
|
Our experience tells us that having 100s of directories in $PATH makes
things like tab-completion painfully slow (in fact that was the original
motivation to implement this functionality).
Thanks for the info! I noticed that loading 100 modules is slow; but I
don't think I noticed the tab completion issue, likely because I don't
usually use that shell feature. There might be a difference between shells
on this one; which shell are you using?
Spack environments defines a precedence when there are conflicts.
I wasn't aware of that. Though I'd argue, that it is only barely more
deterministic than spack load in the sense, that yes, given a close study
of the documentation I now may influence the order. But for a naive user,
there can still be multiple versions of a package (which can bite you,
especially with python packages). Again I'm not arguing, that this is
necessarily the only sane use case but it is *also* a sane use case and
should be "detectable"
Two thoughts:
1. It looks like some kind of warning needs to be made when overrides
happen. And the warning needs to differ when an explicit package (one
mentioned explicitly in the env.yaml file) vs. implicit package gets
overridden.
2. It has long been my belief that Spack works best when you ask it to
concretize an environment, check over the environment produced, and then
make changes to the various configurations / specs until you like what you
see. I get the sense that a lot of people don't do this. But it's
something I think we should encourage.
It is rare that somebody (in HPC land at least) needs to build code but
not run anything.
There are use cases, i.e. when you use spack to provision a software
environment for other people to work against. We do not necessarily want to
force people to develop in our environment.
In this case, the Spack person is creating an environment that will be used *by
its users* to build and run software; so I don't think it's a
counter-example to what I said.
Spack generally provides two way to enable a bunch of packages: (a) load
modules, and (b) set up a view.
I don't see them as such opposites. Really for us views are a way to
reduce the size of the env variables that need to be set! In essence we
provide a module for each view, and each view essentially supplements the
structure of / and in fact will be overlaid onto / in the container
provisioning.
That's a great point. It looks like Spack Views would be best if Spack
auto-generated a module to load the view. @scheibelp is this something
that could be done easily?
|
My personal experience was with bash, but at least zsh is similar. The problem isn't really the shell but the filesystem: the shell has to go through all PATH directories in order to find all possible completions (or you use caching and run into the next source of trouble). For us a slow NFS system made the problems more pronounced; I'd suspect that my MacBook's SSD would cope better with that.
That's essentially what I'm saying!
Expecting expertise from users is always a tricky thing. In my experience, if you have expectations, users will find a way to disappoint you :) And that's not a criticism of users. The whole point of using something like Spack is not to have to take care of this yourself. Of course in practice there are limits to how user-proof a tool can become. But it is not unreasonable to expect a tool (especially one written in Python) to work "as I intended" if it didn't complain.
It is somewhat. Because it is different software and it may very well be that the environment to build our software stack is vastly different from the environment a user of that software wants to build his/her project in. There is a difference between a development environment for a package (which spack should also absolutely support) and a development environment of a piece of software that depends on said package! |
|
Think about for example boost. I'd like to be able to go to spack and tell
it: I want to have an environment in which to develop boost. That means a)
I want to have all dependencies installed, b) I want the sources of boost
available and c) ideally I'd also want to be able to have spack invoke the
build system. But that is fundamentally different from using spack to
provide the boost library to users to develop their own software against
That's what Spack Setup does.
|
I made the changes, but want to reserve the right to request additional changes.
|
Maybe I can answer some questions and add some clarity to this discussion. On @citibeth's main point that views and modules should be equivalent and that people should be able to choose a loader -- I like the gist of this but the truth is they're not equivalent. Some reasons:
That said, modules are not going away -- they're used all over in HPC and we're planning to keep them -- we have to support them. They are what HPC users are used to, and they are how Spack will continue to expose packages on clusters for the foreseeable future. But, virtual environments (with views) are an interface that can work anywhere, be more consistent, and doesn't have to rely on the module system. They're a chance to do something better. I would like it if eventually you could pass around On specific technical points:
I think that covers most of the discussion -- I hope that is ok with people. @becker33 is working on some finishing touches for this PR. On build vs. run environments (this is future stuff):
Finally, note that @becker33 is working on "spack stacks", which will basically be like an environment (in terms of workflow) but aimed at facility deployment. In a stack, you'll be able to have per-environment module trees as well as combinatorial views (see #9679). The idea there is to make deploying an entire stack for a cluster as easy as making a slightly wordier |
I meant they both do the same thing in the end: take a bunch of things you've built, and load them all up together for use in a shell. My observation was that if two things accomplish the same goal, we should try to have the same UI to access both of them. (That's not to say the two things are the same in every way; if they were, we would only have one of them).
I am frankly mystified why we would want to (or be willing to) maintain two UIs if one could suffice. Even if some features are hard to implement properly for modules vs. views at this time, I think we should plan the UI with the assumption that some day we will implement them all; and in the meantime throw a NotImplementedException or something.
I agree that individual modules are a bad idea and Lmod does not magically fix things. The only way I use modules is I create an environment and then load all the modules that come with that environment. If I need to do something else, I unload everything, change my environment, and load up a new set of modules. I don't load modules piecemeal. And I don't use Lmod: it adds nothing to my use case, but is certainly harder to install, etc.

I believe our goal with the "module" flavor of environments should be to create a script / module / whatever you want to call it that, when you source/load it, sets up your environment variables properly to use the stuff that was built in the environment, without creating symlinks on the disk. And that script should be named after the environment; it should not have hashes in it. Spack should also be able to set environment variables as such without first creating a script to do so. The script is because in many settings, users don't want to learn Spack or type the

In summary, I see at least four ways that Spack can/should support loading things for use. I list them here from most heavyweight to least heavyweight:

A. Create a view with symlinks, along with a module/script that sets env vars properly to use that view.

Every time "module/script" is suggested, Spack could in theory be configurable to generate a module, or a simple bash script. I like bash scripts because they require no extra infrastructure. I like modules because they can be unloaded cleanly. Maybe (a) we can find a way to make bash scripts that can unload themselves cleanly, and (b) bash script generation can be set up as another form of module generation.
I think that's a great idea.** But remember that Spack build environments are different from Spack Environments, because Spack creates a different build environment for each package. If you're building a DAG with 30 packages in it, then Spack will, over the course of building that DAG, generate 30 build environments. It is rare that users want to explicitly use that environment, and it makes sense for Spack to generate these build environments in as lightweight a manner as possible; generating 30 sets of symlink trees would make no sense. Therefore, Spack uses option (C) above to generate them.

In some cases, however, users want to re-trace the steps of Spack, using the build environment provided by Spack. This option should be explicitly available --- and Spack should be able to bring forth a build environment using any of the 4 modes described above. If we had this capability, then my guess is Spack Setup would not be needed. Remember: Spack Setup creates a script that sets up a build environment and then runs CMake. This could be replaced, in a less CMake-specific manner, by option (C) above. I'm convincing myself that this is the direction Spack Setup should move in. Without Spack Setup, Spack builds every package in a DAG. With Spack Setup, it either builds or generates a setup script for each package in a DAG. Even if we re-do Spack Setup using the ideas above, syntax is still needed in the

There is a known (to me) problem here. Suppose I'm building Spack Environment E, in which A and B are marked as setup and B->A. When E gets built, A and B will not get built; instead, setup scripts for them are generated. Spack module generation (and presumably any other way of Spack generating a set of env vars) decides what env vars to set by looking at the directories left behind by the installed package. At the time E is installed, A and B won't yet be installed, and the modules (or whatever) generated for them will be missing important paths (i.e. they don't work).
This could also be a problem for the setup script generated for B, since A has not yet been installed. I don't know why I haven't run into that issue. Once you've built and installed A and B (manually), THEN Spack is able to generate correct modules (env var settings) for them. I've been hacking my way through this problem so far by regenerating all modules (across all of Spack) once I've built A and B. We will need a better way. This is going to be a problem we have to face for options (A) and (B) above --- basically any environment-loader generation that involves writing stuff to disk ahead of time and then using the environment without Spack.

** Another important difference between Spack Environments and Spack build environments is that Spack Environments can contain more than one DAG. Precedence / conflict resolution rules come into play that we don't have to worry about with a single build environment.
I really like the idea of a way to load Spack Environments (by whatever option above), and loading the compiler wrappers with them. Right now,
Wouldn't this make Spack itself stop working, since Spack sets environment variables when constructing build environments? Have you encountered this problem inside Spack itself? If not, why not? How would we work around it when Spack needs to build stuff?
I hope I've convinced you that symlink trees are not a uniformly better way to load a Spack Environment, as compared with setting env vars. If they were, then we would be re-engineering Spack to create symlink trees for every build environment it needs.
I don't see how this is affected by how you load your Spack Environment, or how modules prohibit that. Is this because modules involve hashes, which vary between platforms? Remember I'm suggesting we generate a module/script for the environment as a whole, which eliminates hashes. I think all 4 ways of loading an environment could be made reliably cross-platform in this case.
Yes that should be part of generating a view.
Currently, individual-package modules get generated, and Spack knows how to generate a module load script. I know that's not exactly the same as per-environment modules. But it's pretty close. And I think we should use it as a stand-in for per-environment modules for now, and then switch to real per-environment modules later.
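The stand-in described above could be sketched as follows: concatenate an environment's per-package module loads (whose names may contain hashes) into a single load script named only after the environment. The function name and script format are illustrative assumptions:

```python
def environment_load_script(env_name, package_modules):
    """Build one load script for an environment from its per-package modules.

    package_modules: per-package module names in load order; they may
    contain hashes, but the script itself carries only the environment name.
    """
    lines = ["# load script for environment %s" % env_name]
    lines += ["module load %s" % m for m in package_modules]
    return "\n".join(lines)

script = environment_load_script(
    "e1", ["zlib-1.2.11-abc123", "python-3.7.0-def456"])
assert script.splitlines()[0] == "# load script for environment e1"
```

Switching to real per-environment modules later would then only change how this script is produced, not how users invoke it.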
...that Spack uses them already internally. Other reasons include:
Can you elaborate? I don't understand. I think there are plenty of non-esoteric use cases already.
Remember that Spack Setup, as it exists today, does not provide the right compiler wrappers. It's still useful in spite of this obvious flaw, which I'd love to see fixed.
Will stacks require per-package modules, or per-environment modules? I'd love to see if there's a realistic way we can get rid of support for per-package modules. (That would probably involve a way to generate lightweight Spack Environments on the fly). |
|
@tgamblin @scheibelp This is passing tests now. |
…ed in the .spack subdirectory of the view; also by default, this view ignores conflicts for added specs
…command 'purge_empty_directories' to the filesystem module
… single public function to do both; rename filesystem.purge_empty_directories to filesystem.remove_empty_directories and update its docstring
…ly mention whether to enable a view
|
@tgamblin all comments are now addressed. For those where there was some back-and-forth conversation I left them open (but I consider everything resolved except #10017 (comment), which we agreed to handle later). |
|
As an aside on this #10017 (comment) I just added #11158 which should fit well for that purpose. |
I think we've addressed most of the points in the discussion.
Environments are now, by default, created with views. When activated, if an environment includes a view, this view will be added to `PATH`, `CPATH`, and other shell variables to expose the Spack environment in the user's shell. Example:

```
spack env create e1  # by default this will maintain a view in the directory Spack maintains for the env
spack env create e1 --with-view=/abs/path/to/anywhere
spack env create e1 --without-view
```

The `spack.yaml` manifest file now looks like this:

```
spack:
  specs:
  - python
  view: true  # or false, or a string
```

These commands can be used to control the view configuration for the active environment, without hand-editing the `spack.yaml` file:

```
spack env view enable
spack env view enable /abs/path/to/anywhere
spack env view disable
```

Views are automatically updated when specs are installed to an environment. A view only maintains one copy of any package. An environment may refer to a package multiple times, in particular if it appears as a dependency. This PR establishes a prioritization for which environment specs are added to views: a spec has higher priority if it was concretized first. This does not necessarily exactly match the order in which specs were added; for example, given `X->Z` and `Y->Z'`:

```
spack env activate e1
spack add X
spack install Y  # immediately concretizes and installs Y and Z'
spack install    # concretizes X and Z
```

In this case `Z'` will be favored over `Z`. Specs in the environment must be concrete and installed to be added to the view, so there is another minor ordering effect: by default the view maintained for the environment ignores file conflicts between packages. If packages are not installed in order, and there are file conflicts, then the version chosen depends on the order. Both ordering issues are avoided if `spack install`/`spack add` and `spack install <spec>` are not mixed.
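The prioritization described above (a spec wins a slot in the view if it was concretized first) can be sketched as a first-occurrence selection over specs in concretization order. The data shapes here are illustrative, not Spack's actual internals:

```python
def select_for_view(concretized_in_order):
    """Given (name, version) pairs in concretization order, keep only
    the first occurrence of each package name for the view."""
    chosen = {}
    for name, version in concretized_in_order:
        if name not in chosen:  # the earlier concretization takes priority
            chosen[name] = version
    return chosen

# Y and its dependency Z' were concretized first (spack install Y),
# then X and Z (plain spack install), so Z' is favored over Z.
order = [("Y", "1.0"), ("Z", "z-prime"), ("X", "1.0"), ("Z", "z")]
assert select_for_view(order)["Z"] == "z-prime"
```

This also illustrates why the result can differ from the order in which specs were added: `X` was added before `Y`, but `Y`'s dependency was concretized first.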
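The shell-variable handling described above can be illustrated with a small sketch. This is not Spack's actual activation code; the view root, directory layout, and variable list are assumptions for illustration.

```python
import os

def activation_env(view_root, env):
    """Sketch: prepend a view's directories to the shell variables an
    activated environment would modify (PATH, CPATH, and similar).

    `view_root` and the layout below are illustrative assumptions;
    Spack's real activation logic covers more variables.
    """
    updates = {
        "PATH": os.path.join(view_root, "bin"),
        "CPATH": os.path.join(view_root, "include"),
        "LIBRARY_PATH": os.path.join(view_root, "lib"),
    }
    new_env = dict(env)
    for var, view_dir in updates.items():
        old = new_env.get(var, "")
        # Prepend so the view takes precedence over existing entries.
        new_env[var] = view_dir + (os.pathsep + old if old else "")
    return new_env

env = activation_env("/spack/var/environments/e1/.spack-env/view",
                     {"PATH": "/usr/bin"})
print(env["PATH"])
```

Deactivating would be the inverse: stripping those view directories back out of each variable.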
|
@scheibelp any reason why the default is to ignore conflicts? I always believed the default was not to ignore conflicts. Also, I do remember conflict exceptions when creating environment views, so I'm pretty sure it's rather inconsistent then. |
This PR updates environments so that by default they are created with views. Currently my goal is to show how it works and get agreement on it. There are still tests etc. which are needed to complete it.

The `manifest.yaml` file now looks like:

```
spack:
  specs:
  - python
  view: true  # or false, or a string
```

Existing environments will not automatically maintain views. I propose adding the following commands to manipulate whether an env maintains a view (these commands aren't yet available):

```
spack env view --enable                        # by default create the view
spack env view --enable /abs/path/to/anywhere
spack env view --disable
spack env view --show                          # show where the view is maintained
```

(EDIT 4/8/19) The commands for managing a view for an environment have been added and have a slightly different syntax:

```
spack env view enable
spack env view enable /abs/path/to/anywhere
spack env view disable
```

Views are automatically updated when specs are installed to an environment. A view only maintains one copy of any package. An environment may refer to a package multiple times, in particular if it appears as a dependency. This PR establishes a prioritization for which environment specs are added to views: a spec has higher priority if it was concretized first. This does not necessarily exactly match the order in which specs were added; for example, given `X->Z` and `Y->Z'`:

```
spack env activate e1
spack add X
spack install Y  # immediately concretizes and installs Y and Z'
spack install    # concretizes X and Z
```

In this case `Z'` will be favored over `Z`.

Specs in the environment must be concrete and installed to be added to the view, so there is another minor ordering effect: by default the view maintained for the environment ignores file conflicts between packages. If packages are not installed in order, and there are file conflicts, then the version chosen depends on the order.

Both ordering issues are avoided if `spack install`/`spack add` and `spack install <spec>` are not mixed.

(UPDATE 4/8/19) When activated, if an environment includes a view, this view will be added to `PATH`, `CPATH`, and other shell variables to expose the Spack environment in the user's shell.
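The first-concretized-wins prioritization described above can be sketched with a few lines of Python. The tuple representation of specs is a simplification for illustration, not Spack's data model: here `Z` and `Z'` are two versions of the same package name.

```python
def view_contents(specs_in_concretization_order):
    """Sketch of the view prioritization: the view keeps one copy per
    package name, and the spec that was concretized first wins.

    Specs are modeled as (name, version) tuples for illustration.
    """
    view = {}
    for name, version in specs_in_concretization_order:
        # Later specs with the same name have lower priority and are skipped.
        view.setdefault(name, version)
    return view

# Y and its dependency Z' were concretized first (`spack install Y`),
# then X and Z (`spack install`), so Z' is favored over Z.
order = [("Y", "1.0"), ("Z", "2.1"),   # Z' from Y's concretization
         ("X", "1.0"), ("Z", "2.0")]   # Z  from X's concretization
print(view_contents(order))  # → {'Y': '1.0', 'Z': '2.1', 'X': '1.0'}
```

The second ordering effect (file conflicts between installed packages) is not modeled here; with conflicts ignored, which file survives additionally depends on installation order.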