Conversation
This:

```python
if sys.platform != "darwin":
    compiler_opts.extend([
        '--with-cpp=cpp',
        '--with-cxxcpp=cpp',
    ])
```

seems undesirable. Is there something we should add to PETSc `./configure` so you don't need this stuff?
|
|
Regarding (2): Yes, this would be nice. I don't know why PETSc prefers the system

Regarding (4): If there's a problem with installing a package on a new OS release, or using a newer version of Clang or GCC, or with a new version of OpenSSL, then Spack can provide a new package within hours. Software releases are usually not that fast. It's the same case as for providing Ubuntu, Debian, Homebrew, and MacPorts packages. However, Spack does have a scalability problem that your suggestion would address. The long-term solution is probably somewhere in the middle, maybe with PETSc providing a base package description, and some other mechanism being able to override it in emergencies.
@alalazo @davydden: you're credited with the boost arguments here -- does PETSc build OK if you don't include them? @BarrySmith says we don't need them (but then why does PETSc accept boost args?)

There are certainly use cases where having PETSc maintain a That said, Spack has external repositories (

This is helpful and probably what we should do when we add longer, optional tests. I believe the test currently in there is supposed to be a simple smoke test. Running the actual PETSc tests would be even better.
|
@BarrySmith: regarding (2), see #1835, where the related discussion with @eschnett is. Maybe this is a PETSc issue that could be fixed in future versions.
|
@BarrySmith: how do we handle cross-compiling builds of PETSc in Spack?
|
Cross compiling is done using the argument --with-batch to ./configure. With --with-batch, ./configure runs as usual except it does not attempt to RUN any compiled code; instead it saves all the run tests into a single C file. When ./configure is done it tells the user to run a generated executable (on a batch system you would submit the executable through the batch system; with straight cross compiling you would run the executable on the other system). The executable produces a new Python script that the user then runs to complete the configure process. Then make and make install in the usual way. Try it: you don't need a batch system to use --with-batch; you can run it with --with-batch on a laptop to get an idea of how it works. If you make Spack able to "submit jobs" then you could easily utilize the --with-batch option. Otherwise you will have to do what we do: interrupt Spack to have the user run the executable, and then restart Spack with the new information.
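The two-phase flow described above can be sketched as the sequence of commands each phase runs. This is only an illustration: the `<arch>` placeholders stand in for the actual PETSC_ARCH-derived file names, and the function itself is invented for this sketch.

```python
# Sketch of PETSc's two-phase --with-batch configure flow, as the ordered
# commands each phase runs. "<arch>" is a placeholder; the real generated
# files are named after the PETSC_ARCH in use.

def batch_configure_steps(prefix="/opt/petsc"):
    return [
        # Phase 1: configure compiles its probes but, instead of running
        # them, bundles them into one generated executable.
        ["./configure", "--with-batch", "--prefix=" + prefix],
        # Phase 2: run (or batch-submit) that executable on the target
        # system; it writes a script that finishes configuration.
        ["./conftest-<arch>"],
        ["./reconfigure-<arch>.py"],
        # Then build and install as usual.
        ["make"],
        ["make", "install"],
    ]
```

A tool driving this would have to pause between phase 1 and phase 2, which is exactly the "interrupt Spack" problem mentioned above.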
|
@eschnett re: (2) Could this issue be solved by copying patched versions of the system headers into the
|
The reason for (3) was to have quick tests of external solvers/preconditioners. This helped in the past to pick up a broken MUMPS build on macOS. If PETSc has somewhere tests with, say, a Laplace equation with superlu-dist, mumps, and hypre -- we can switch to running those as native make targets. Even more convenient would be to have those in the default "make tests".
|
I am currently trying a build, but I would expect so from a variant. @BarrySmith Looking at the log, it seems I added boost in f01d1c4. As I did for a lot of other software, I packaged it just by looking at the configure (or other relevant build information), which in this case says for boost:

EDIT: the build without boost was successful on Ubuntu 14.04 + [email protected] + [email protected]
|
@BarrySmith: is that stuff just so PETSc can download boost if it needs it for some dependency?
Spack packages should not be downloading anything during installation.
You can also fork the GitHub repo. But I don't think either case really provides a scalable solution. Maybe one could have a scheme that depends on Git to merge 100 independently curated repos.

Spack should not be building OpenSSL if at all possible, so no need for Spack updates here. I doubt the typical Spack user has the information required to keep it up to date.
Yes. But PETSc can act like a standalone installer (it has its own rather fancy build system that does this). So the question is really for @BarrySmith to clarify why the boost args are there, if PETSc doesn't depend on boost. Which brings up another question for @BarrySmith: if we omit the boost args, are there cases where PETSc will then try to download boost? Seems like we might need to add
|
Yes, PETSc has a variety of packages it can download for dependencies that are not needed by PETSc. In theory the `self.useddirectly = 0` flag (1 indicates used by PETSc directly, 0 indicates used by a package used by PETSc) marks these, but I see that it is missing from boost.py (and likely others). I will fix those. You don't need a --
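As an illustration of how a tool might consume that flag (the package names and dict layout here are made up for the sketch, not PETSc's actual BuildSystem structures):

```python
# Hypothetical mapping from dependency name to its `useddirectly` value:
# 1 = linked by PETSc itself, 0 = only needed by a downloadable dependency.
deps = {
    'hypre': 1,
    'superlu_dist': 1,
    'boost': 0,   # pulled in only on behalf of another package
}

# A tool like Spack would only need to model the direct dependencies.
direct_deps = sorted(name for name, used in deps.items() if used == 1)
```

With metadata like this exported by ./configure, indirect-only dependencies such as boost could be dropped from the Spack package automatically.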
|
@BarrySmith: So it sounds like we'll still need to keep the boost arguments for old versions of PETSc where the error persists, or the build will complain, right?
Packages that try to "help" the user by downloading their own dependencies

Back to the real world... as I said before, packages that download their

a) It creates two different versions of a package, breaking Spack's

b) It doesn't work running Spack on off-line supercomputers.

I would set --download-XXX=no for everything you can, just to be safe. No

Of the things that are looked for by default... are they required? If they

If the answer is (a), then I would make those things be optional
I don't see a reason not to do this either -- @BarrySmith: is there some way to just disable the external downloads entirely?
|
These are the only ones you need to turn off: --download-c2html=0 --download-hwloc=0. And if you don't want to link against SSL, add --with-ssl=0. You don't need to turn off all the --download options because the above two are the only ones on by default.
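In the Spack package, that advice might look something like the following sketch (the function name is illustrative, not the actual `package.py` code):

```python
# Illustrative sketch of applying the advice above: disable the only two
# --download options that default to on, and optionally opt out of SSL.

def petsc_configure_args(with_ssl=False):
    args = [
        '--download-c2html=0',   # the only --download flags on by default
        '--download-hwloc=0',
    ]
    if not with_ssl:
        args.append('--with-ssl=0')  # skip linking against OpenSSL
    return args
```

Everything else stays off unless explicitly requested, so no blanket loop over every --download-XXX flag is needed.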
|
I don't think you should be in the business of installing PETSc releases before the 3.6 series. This will simplify some things. It seems bad to have the petsc.py file be complicated just to handle ancient releases.
|
Ancient releases of PETSc are required for ancient versions of application

What does linking with SSL gain you? Can it be a Spack variant? (Although

On Wed, Oct 12, 2016 at 1:48 PM, Barry Smith [email protected]
|
|
@BarrySmith: Thanks. As we get Spack going, the intent is to have it provide some measure of reproducibility, so we'll probably want to do things like this one day. Not opposed to cutting off PETSc before 3.6 right now, though. Not sure what @davydden and some of the other dependent library maintainers need.
|
@citibeth: We currently don't actually have any mainline packages with an upper bound lower than 3.6: Do you know of any packages that require 3.5 or earlier?
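For context, a Spack constraint like `[email protected]:` (3.6 or any newer version, no upper bound) amounts to a simple lower-bound check on the version. A minimal sketch, with a made-up function name:

```python
def satisfies_lower_bound(version, bound="3.6"):
    """True if dotted `version` is >= `bound`, comparing components numerically."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(bound)
```

So a package declaring `depends_on('[email protected]:')` would accept 3.7.2 but reject 3.5.4.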
|
I am fine with

So at this point I don't think it will simplify much.
|
@tgamblin see comments at the end of PISM:

I have not yet coded that information into Spack but it's the reality

If supporting old versions is REALLY too hard, one option would be to

On Wed, Oct 12, 2016 at 1:54 PM, Todd Gamblin [email protected]
|
|
This issue interface would be much better if I could respond directly to specific comments above instead of having to add a new comment at the end and try to connect the context with a comment five comments up. Is there a way I can attach comments directly to particular other comments?

The reason I suggested cutting off at 3.6 was this seeming dependence on boost, which should only exist for older versions of PETSc. This suggestion was motivated by Todd's comment: "@BarrySmith: So it sounds like we'll still need to keep the boost arguments for old versions of PETSc where the error persists, or the build will complain, right?" If you do keep old versions then could you just list the boost dependencies for the PETSc versions that need it, which would not be 3.6 and 3.7?

--with-ssl doesn't buy you anything of great value, so I think you should stick to just turning it off in Spack.

In response to "The reason for (3) was to have quick tests of external solvers/preconditioners. This helped in the past to pick up broken MUMPS build on macOS. If PETSc has somewhere tests with, say, laplace equation with superlu-dist, mumps, hypre -- we can switch to running those as native make targets. Even more convenient would be to have those in default "make tests"." This is a good idea; I'll add tests for the included dependencies such as hypre, superlu_dist, and mumps to our smoke test, so you will only need to run our smoke test and you'll be assured we are checking the external packages as well.
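On the Spack side, picking which solver smoke tests to run could key off the package's enabled variants. A sketch with invented target names, not PETSc's actual test harness:

```python
# Hypothetical: map enabled external-solver variants to smoke-test targets.
SOLVERS = ('hypre', 'superlu_dist', 'mumps')

def smoke_test_targets(variants):
    """Return one test target per enabled external solver."""
    return ['test_' + name for name in SOLVERS if variants.get(name)]
```

For example, `smoke_test_targets({'hypre': True, 'mumps': False})` yields only the hypre target, so disabled solvers never get exercised.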
Not really :-( The best one could probably do is to use quotes (
Thanks @BarrySmith. Once it is there in the next release, I think we could cut those tests out of Spack.
|
@citibeth: thanks -- I don't think it's that hard to keep
That sounds good to me. We can put |
If you comment directly in the code (which was actually why I created the PR) you can have separate threads for each in-code comment. But that's about it. |
This is a dummy pull request created so that developers of xSDK packages can review their `package.py` files in Spack. This file is already integrated in Spack; we're just making a PR so that it can be commented on.

See the `package.py` file in the mainline `develop` branch to see blame for particular lines of code. This can be informative, because you can see the commits the code came from and why it's there.