This repository was archived by the owner on Sep 23, 2024. It is now read-only.

Preprint preview service: build fail, no error description #13

Open
mathieuboudreau opened this issue Jul 29, 2021 · 47 comments
@mathieuboudreau
Member

Following up on: neurolibre/docs.neurolibre.org#13

Kiril was able to update the T1 book to use version 0.10.x of Jupyter Book (repo, site).

I forked and converted the repo in a branch to match the format in the docs, and tried a preview submission (https://roboneuro.herokuapp.com).

I am still encountering this error, with no accompanying messages.

Screen Shot 2021-07-20 at 1 48 22 PM

Any clue @agahkarakuzu ?

@ltetrel

ltetrel commented Jul 29, 2021

Alright, so after some investigation:

  1. The binder build is successful (when triggered manually via https://binder.conp.cloud/); this includes the software environment build and session spawning. But the execution in the interactive environment does not seem successful (figures are empty), with the same behaviour on the public mybinder.org (so you should definitely check your code). You should also check this section https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html if you definitely need a Dockerfile (which is not the recommended way to set up a binder environment).
  2. The jupyter-book build is correctly triggered when I call the API manually, BUT it cannot execute the jupyter-book build command. This is because you forgot to add it as a dependency, as explained here: https://docs.neurolibre.org/en/latest/SUBMIT.html#runtime
    FYI: this is the command on the backend:
    docker run -v /DATA:/home/jovyan/data binder-registry.conp.cloud/binder-registry.conp.cloud/binder-qmrlab-2dt1-5fbook-5fupdate-145d0a:0a148bc97a8799fe4599e99f9168555df1aeb42f jupyter-book build --path-output /home/jovyan/data/book-artifacts/qMRLab/github.com/t1_book_update/0a148bc97a8799fe4599e99f9168555df1aeb42f content
  3. I don't know why the front-end does not correctly trigger the book build on our API; this is something @agahkarakuzu should be able to check (it could be because you are using a branch). When fixed, you will have access to the build logs, e.g. http://neurolibre-data.conp.cloud/book-artifacts/qMRLab/github.com/t1_book_update/0a148bc97a8799fe4599e99f9168555df1aeb42f/book-build.log

@mathieuboudreau
Member Author

mathieuboudreau commented Aug 30, 2021

The jupyter-book build is correctly triggered when I call the API manually, BUT it cannot execute the jupyter-book build command. This is because you forgot to add it as a dependency, as explained here: https://docs.neurolibre.org/en/latest/SUBMIT.html#runtime
FYI: this is the command on the backend: docker run -v /DATA:/home/jovyan/data binder-registry.conp.cloud/binder-registry.conp.cloud/binder-qmrlab-2dt1-5fbook-5fupdate-145d0a:0a148bc97a8799fe4599e99f9168555df1aeb42f jupyter-book build --path-output /home/jovyan/data/book-artifacts/qMRLab/github.com/t1_book_update/0a148bc97a8799fe4599e99f9168555df1aeb42f content

This point isn't clear to me. 1- I do have a content/ folder in my branch (see here: https://github.com/qMRLab/t1_book_update/tree/neurolibre/content). 2- I do have a binder/ folder in my branch (see here: https://github.com/qMRLab/t1_book_update/tree/neurolibre/binder); however, I use a Dockerfile instead of requirements.txt, and since Binder skips requirements.txt if a Dockerfile is present, I didn't think I needed to add it. And 3- I don't see that docker run command discussed anywhere in the NeuroLibre documentation, so how could I have figured that out by myself to test?

@mathieuboudreau
Member Author

Should I just add a requirements.txt file with only

jupyter-book
jupytext

as dependencies? Will this still work with my Dockerfile present in the folder? I'll try and see

@mathieuboudreau
Member Author

I just tried, and still got the same error immediately with no details:

Screen Shot 2021-08-30 at 12 08 02 PM

Is it possible it's just incompatible with the RoboNeuro test servers? Should I try and submit my T1 book as-is and see if it will work on the live servers?

@mathieuboudreau
Member Author

Do we have any live demos of a repository that RoboNeuro was tested on, whose settings I could duplicate for my own?

@agahkarakuzu
Member

@mathieuboudreau jupyter-book and jupytext won't be installed via requirements.txt, as the Dockerfile does not use that file. Can you add the minimal pip requirements of neurolibre to the pip list in the Dockerfile?

If it works, we can add additional instructions or even base docker images for neurolibre.
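Something like this, perhaps (a sketch only; the exact package set NeuroLibre needs, and whether to pin versions, are assumptions to adjust against what the Dockerfile already installs):

```dockerfile
# Sketch: append the NeuroLibre book-build tools to the image's existing pip installs
RUN pip install --no-cache-dir jupyter-book jupytext
```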

@mathieuboudreau
Member Author

Ahh gotcha; though, RoboNeuro fails for me within a fraction of a second, so I doubt it even has time to run the Dockerfile. I'll update it regardless, but I don't think that's the only problem I'm encountering.

@mathieuboudreau
Member Author

mathieuboudreau commented Aug 31, 2021

Done (repo), and as expected, got the same error:

Screen Shot 2021-08-31 at 1 45 23 PM

Is roboneuro maybe not compatible with Dockerfiles?

@agahkarakuzu
Member

agahkarakuzu commented Sep 1, 2021

OK, the issue was caused by a type casting bug, mostly because I am not a savvy ruby programmer :) The new deployment resolved the problem; you can now use the service for this repo/branch.

The image build was successful, but the book build failed; here's the log:

[FATAL tini (6)] exec jupyter-book failed: No such file or directory

@mathieuboudreau
Member Author

Great, thanks Agah!

1- Is there a way to test RoboNeuro using continuous integration? If so, it would be great to add this case as a test.

2- I did get further with your change, I encountered this screen now:

Screen Shot 2021-09-01 at 1 07 32 PM

However, clicking the link I get a new error:

Screen Shot 2021-09-01 at 1 07 36 PM

Here is the link written out: http://neurolibre-data.conp.cloud/book-artifacts/qMRLab/github.com/t1_book_update/b720a326569f289a828b391bf174bba969da0cd7/_build/html/

It appears to be trying to get the _build/html/ folder of that repo; however, _build is actually contained within a parent folder called content/ (see: https://github.com/qMRLab/t1_book_update/tree/neurolibre/content/_build/html), per what I understood from your guidelines on readthedocs (see: https://docs.neurolibre.org/en/latest/SUBMIT.html#preprint-repository-structure).

Any clue?

@agahkarakuzu
Member

@mathieuboudreau interesting; unless you fixed the book build error, it should be displaying that the book build fails. Did you have logs mailed to you indicating that the builds were successful?

So far I have not run into a directory problem; whenever the book build completed, the links worked.

@ltetrel

ltetrel commented Sep 7, 2021

The latest successful binder build was at 13h43 on Sept 2nd (and it exists both locally and on the registry):

[I 210902 13:43:06 build:321] Started build build-qmrlab-2dt1-5fbook-5fupdate-145d0a-1dc-1f
[I 210902 13:43:06 build:323] Watching build pod build-qmrlab-2dt1-5fbook-5fupdate-145d0a-1dc-1f
[I 210902 13:43:13 build:355] Watching logs of build-qmrlab-2dt1-5fbook-5fupdate-145d0a-1dc-1f
[I 210902 13:45:45 build:385] Finished streaming logs of build-qmrlab-2dt1-5fbook-5fupdate-145d0a-1dc-1f
[I 210902 13:45:46 builder:518] Launching pod for https://github.com/qMRLab/t1_book_update: 1 other pods running this repo (2 total)
[I 210902 13:45:46 launcher:166] Creating user qmrlab-t1_book_update-ogjaoz6o for image binder-registry.conp.cloud/binder-registry.conp.cloud/binder-qmrlab-2dt1-5fbook-5fupdate-145d0a:1dc82c7a86129e974859642e6349cb8267c2df62
[I 210902 13:45:47 launcher:213] Starting server for user qmrlab-t1_book_update-ogjaoz6o with image binder-registry.conp.cloud/binder-registry.conp.cloud/binder-qmrlab-2dt1-5fbook-5fupdate-145d0a:1dc82c7a86129e974859642e6349cb8267c2df62
[I 210902 13:46:12 builder:603] Launched https://github.com/qMRLab/t1_book_update in 26s
[I 210902 13:47:12 log:140] 200 GET /build/gh/qMRLab/t1_book_update.git/1dc82c7a86129e974859642e6349cb8267c2df62 ([email protected]) 246033.80ms

@ltetrel

ltetrel commented Sep 8, 2021

So I tried to build your environment yesterday and today. I tried to launch the build manually on https://binder.conp.cloud/
and it takes forever.
Unfortunately I have no clue why it is taking so long, but it seems that pip has trouble finding compatible versions of your packages:

INFO: pip is looking at multiple versions of linkify-it-py to determine which version is compatible with other requirements. This could take a while.

You will need to reduce and optimize the Dockerfile (make sure you are not re-installing software that already exists in jupyter/base-notebook:8ccdfc1da8d5, pin your versions, etc.).
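Pinning here means giving pip exact versions, so the resolver has a single candidate per package instead of backtracking through many releases. An illustrative fragment (the packages and versions shown are examples from this thread, not a recommendation):

```dockerfile
# Illustrative: exact pins keep pip's new resolver from backtracking
RUN pip install --no-cache-dir \
      "plotly==3.10.0" \
      "jupyter-book==0.10.0"
```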

On July 29th the build was successful, so something is wrong with your newest updates:

Alright so after some investigations:

1. The binder build is successful (when triggered manually via https://binder.conp.cloud/); this includes the software environment build and session spawning. But the execution in the interactive environment does not seem successful (figures are empty), with the same behaviour on the public mybinder.org (so you should definitely check your code). You should also check this section https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html if you definitely need a Dockerfile (which is not the recommended way to set up a binder environment).

@ltetrel

ltetrel commented Sep 8, 2021

Again, I want to re-emphasize that using Dockerfiles is not the preferred way to spawn a neurolibre instance, exactly because of this kind of issue.

@mathieuboudreau
Member Author

Hi @ltetrel,

Thanks for the heads up. While I understand, it's impossible to configure the environment for this notebook with only requirements/env/other such files. Also, as more and more people adopt Docker and Singularity, I think this will be a necessary service for NeuroLibre to provide.

@mathieuboudreau
Member Author

I'll try again, but I'm fairly certain that this Dockerfile built on MyBinder.

@mathieuboudreau
Member Author

I'm wondering if this recent feature (and/or bug) in pip might be the cause of the much longer build time now:

pypa/pip#10201

https://pip.pypa.io/en/stable/user_guide/#dependency-resolution-backtracking

As I'm experiencing something similar in the MyBinder build log:

Screen Shot 2021-09-08 at 10 48 05 AM

It's not stuck here, but it has been downloading different versions of the attrs pip package for something like 20 minutes now.

@mathieuboudreau
Member Author

(no idea why it's taking 5+ minutes for 40 kB packages, though)

@mathieuboudreau
Member Author

Here are some tips on how to deal with the new pip dependency resolver: https://stackoverflow.com/questions/65122957/resolving-new-pip-backtracking-runtime-issue
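For reference, the main escape hatches discussed in those threads look like the commands below (flag spellings are for pip >= 20.3; worth double-checking against the installed pip before relying on them):

```shell
# Two workarounds for runaway dependency-resolution backtracking:
#
#   pip install --use-deprecated=legacy-resolver -r requirements.txt
#   pip install --no-deps -r requirements.txt   # skip resolution entirely (risky)
#
# Confirm the installed pip still advertises the legacy-resolver escape hatch:
python3 -m pip install --help | grep -- --use-deprecated
```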

@ltetrel

ltetrel commented Sep 8, 2021

I'm wondering if this recent feature (and/or bug) in pip might be the cause of the much longer build time now:

pypa/pip#10201

https://pip.pypa.io/en/stable/user_guide/#dependency-resolution-backtracking

As I'm experiencing something similar in the MyBinder build log:

Screen Shot 2021-09-08 at 10 48 05 AM

It's not stuck here, but it has been downloading different versions of the attrs pip package for something like 20 minutes now.

Good catch! I think for this current submission, the best advice would be to reduce the complexity of the dependencies https://pip.pypa.io/en/stable/user_guide/#audit-your-top-level-requirements

@mathieuboudreau
Member Author

Yeah, I saw that; the pip dependencies that we explicitly specify in the Dockerfile are the minimum ones required for us, so I might try using the legacy resolver, since we're in a rush at the moment.

@ltetrel

ltetrel commented Sep 8, 2021

I will add a note about that issue in the docs, since it could happen to other people with complex submissions as well.

@mathieuboudreau
Member Author

So, by adding this line to the Dockerfile,

https://github.com/qMRLab/t1_book_update/blob/bc68bec6de695b85473e2a2e9bf2644668c3f45b/binder/Dockerfile#L45

the image built much quicker on MyBinder and executed correctly,

Screen Shot 2021-09-08 at 11 41 34 AM

However, when making the same change in the neurolibre branch (which my MyBinder branch was forked from),

https://github.com/qMRLab/t1_book_update/blob/ceee653c23f2f07012f9841e0ab086be210adff7/binder/Dockerfile#L45

RoboNeuro again failed immediately without building, and did not send me a log.

Screen Shot 2021-09-08 at 11 41 51 AM

@agahkarakuzu how did you resolve this the other day?

@agahkarakuzu
Member

agahkarakuzu commented Sep 8, 2021

@mathieuboudreau I gave it a try

qmrlab/t1_book_update neurolibre branch, is this the one?

If so, @ltetrel I receive 409 as response to that request.

@mathieuboudreau
Member Author

yup, I tried twice

@ltetrel

ltetrel commented Sep 8, 2021

@mathieuboudreau I gave it a try

qmrlab/t1_book_update neurolibre branch, is this the one?

If so, @ltetrel I receive 409 as response to that request.

Yes, there is a lock file that I just deleted manually; it was left over from the previous ongoing build (we still need to manage these better). Can you re-try now?

Also, I think it would be important to propagate all the errors that the API returns to the user, @agahkarakuzu, as well as the build log. From the user's perspective, receiving something empty is really frustrating 😿

@agahkarakuzu
Member

@ltetrel I agree! I will need to deal with that 409 case, it is a bit tricky, I will sit down and think about it :D

Now it is running without 409, will let you know.

@agahkarakuzu
Member

@ltetrel what should we say to the user in the 409 case, by the way? Something like You have run out of tests allowed for this user/repo, get in touch with us ? What would be the solution in this case?

@ltetrel

ltetrel commented Sep 8, 2021

@ltetrel what should we say to the user in the 409 case, by the way? Something like You have run out of tests allowed for this user/repo, get in touch with us ? What would be the solution in this case?

You are not allowed to run more than one submission per repository, please re-try when the previous request has finished. ?

@ltetrel

ltetrel commented Sep 8, 2021

And a message saying that if the submission is taking too long (what happened here), then they should get in touch with us by e-mail.

@agahkarakuzu
Member

It may have been terminated because of the front-end worker:

 SidekiqStatus::Container::StatusNotFound: 5eadb14e27a4c7ebfdda9346

Do you emit something specific when the build times out on your end?

@ltetrel

ltetrel commented Sep 8, 2021

No, I don't manage that; I don't have control over it, as this info is on the binderhub cluster.
The only thing I can do is stream back the binder build logs (which say at the end that the build timed out).

@ltetrel

ltetrel commented Sep 9, 2021

@mathieuboudreau Now that the image is correctly built on the server, I found the following:
The repository layout is not correct, since you are changing it in your Dockerfile (to ~/work/t1_book_update) and pulling the repo from github (https://github.com/qMRLab/t1_book_update/blob/ceee653c23f2f07012f9841e0ab086be210adff7/content/Dockerfile#L64).
As a consequence, the jupyter-book build on our backend fails:

Usage: jupyter-book build [OPTIONS] PATH_SOURCE
Try 'jupyter-book build -h' for help.

Error: Invalid value for 'PATH_SOURCE': Path 'content' does not exist.

I would advise you to use the COPY docker command to copy directly from the repo into the image, e.g.:

COPY . ~/work/t1_book_update/
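One caveat on the example above: Docker's COPY takes its destination literally and does not expand ~, so an absolute path (or a preceding WORKDIR) is safer. A minimal sketch, assuming the jovyan home layout used elsewhere in this thread:

```dockerfile
# Sketch: bake the repo contents into the image at build time,
# so the image layout always matches the commit being built
WORKDIR /home/jovyan/work/t1_book_update
COPY . .
```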

@ltetrel

ltetrel commented Sep 10, 2021

@mathieuboudreau Also be careful with this line: basically all your data will be saved into the image, which makes it really large (currently 3.4G).
Large docker images make the spawn time really long, impact your computing performance in the environment, and take an incredible amount of space on our servers (data size x number of versions you push).

You should use a data requirement for this type of behaviour: https://docs.neurolibre.org/en/latest/SUBMIT.html#data

@mathieuboudreau
Member Author

@mathieuboudreau Also be careful with this line: basically all your data will be saved into the image, which makes it really large (currently 3.4G).
Large docker images make the spawn time really long, impact your computing performance in the environment, and take an incredible amount of space on our servers (data size x number of versions you push).

You should use a data requirement for this type of behaviour: https://docs.neurolibre.org/en/latest/SUBMIT.html#data

Thanks for the tip, Loic. I don't think we're actually downloading any data using repo2docker in this instance; the build size is due to our dependencies, but I'll double-check. I think we just added that line once upon a time while testing out your package.

@mathieuboudreau
Member Author

@mathieuboudreau Now that the image is correctly built on the server, I found the following:
The repository layout is not correct, since you are changing it in your Dockerfile (to ~/work/t1_book_update) and pulling the repo from github (https://github.com/qMRLab/t1_book_update/blob/ceee653c23f2f07012f9841e0ab086be210adff7/content/Dockerfile#L64).
As a consequence, the jupyter-book build on our backend fails:

Usage: jupyter-book build [OPTIONS] PATH_SOURCE
Try 'jupyter-book build -h' for help.

Error: Invalid value for 'PATH_SOURCE': Path 'content' does not exist.

I would advise you to use the COPY docker command to copy directly from the repo into the image, e.g.:

COPY . ~/work/t1_book_update/

Hmmm, I understand the problem. However, I don't think I can just copy the repo either, because I need to run the commands below it in the same "session" (i.e. different RUNs don't "see" each other; as far as I understand it, if I copy the repo that way, a RUN below it won't be able to execute:

https://github.com/qMRLab/t1_book_update/blob/ceee653c23f2f07012f9841e0ab086be210adff7/content/Dockerfile#L66-L75

but maybe @agahkarakuzu can correct me).
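For what it's worth, filesystem changes made by COPY or an earlier RUN do persist into later RUN instructions; what is not shared between RUNs is shell state such as the current directory, which WORKDIR covers. A hedged sketch (the paths and the requirements file location are assumptions based on this thread, not the repo's actual layout):

```dockerfile
COPY . /home/jovyan/work/t1_book_update
WORKDIR /home/jovyan/work/t1_book_update
# This RUN sees the files copied above; only `cd` and shell variables
# from previous RUN instructions are lost between layers
RUN pip install -r binder/requirements.txt
```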

@mathieuboudreau
Member Author

Documentation will need to be added; when I read the NeuroLibre docs, I wasn't informed of this limitation re: Dockerfiles or of how to configure them correctly.

@ltetrel

ltetrel commented Sep 13, 2021

Documentation will need to be added; when I read the NeuroLibre docs, I wasn't informed of this limitation re: Dockerfiles or of how to configure them correctly.

Yep, I made a lot of changes these past few days. Can you check it now and see if it is clearer?

@mathieuboudreau
Member Author

@ltetrel do we have a demo repo that builds on RoboNeuro, one you tested and validated the service on, so that I can explore the directory structure inside the Docker image and see how it should look for the Jupyter Book to work correctly?

@mathieuboudreau
Member Author

mathieuboudreau commented Sep 13, 2021

I'd like to avoid too many RoboNeuro builds, and I'm afraid that blindly debugging the Dockerfile to find the correct settings will take several tries.

@ltetrel

ltetrel commented Sep 13, 2021

You can check this one: https://github.com/ltetrel/nha2020-nilearn
On the website, you can also check the python template: https://docs.neurolibre.org/en/latest/SUBMIT.html#quickstart-preprint-templates
Agah may have others, I think.

@mathieuboudreau
Member Author

@mathieuboudreau Now that the image is correctly built on the server, I found the following:
The repository layout is not correct, since you are changing it in your Dockerfile (to ~/work/t1_book_update) and pulling the repo from github (https://github.com/qMRLab/t1_book_update/blob/ceee653c23f2f07012f9841e0ab086be210adff7/content/Dockerfile#L64).
As a consequence, the jupyter-book build on our backend fails:

Usage: jupyter-book build [OPTIONS] PATH_SOURCE
Try 'jupyter-book build -h' for help.

Error: Invalid value for 'PATH_SOURCE': Path 'content' does not exist.

I would advise you to use the COPY docker command to copy directly from the repo into the image, e.g.:

COPY . ~/work/t1_book_update/

The principal fix for this issue was resolving a bug where I was checking out the repo after cloning it, but before entering its directory.

See: qMRLab/t1_book_update@3bb10c9

@mathieuboudreau
Member Author

I then encountered another issue,

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/jupyter_book/cli/main.py", line 244, in build
    parse_toc_yaml(toc)
  File "/opt/conda/lib/python3.6/site-packages/sphinx_external_toc/parsing.py", line 82, in parse_toc_yaml
    return parse_toc_data(data)
  File "/opt/conda/lib/python3.6/site-packages/sphinx_external_toc/parsing.py", line 88, in parse_toc_data
    raise MalformedError(f"toc is not a mapping: {type(data)}")
sphinx_external_toc.parsing.MalformedError: toc is not a mapping: <class 'list'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/jupyter-book", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.6/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/jupyter_book/cli/main.py", line 247, in build
    f"The Table of Contents file is malformed: {exc}\n"
  File "/opt/conda/lib/python3.6/site-packages/jupyter_book/utils.py", line 48, in _error
    raise kind(box)
RuntimeError: 
[91m===============================================================================[0m

The Table of Contents file is malformed: toc is not a mapping: <class 'list'>
You may need to migrate from the old format, using:

	jupyter-book toc migrate /home/jovyan/work/t1_book_update/content/_toc.yml -o /home/jovyan/work/t1_book_update/content/_toc.yml

[91m===============================================================================[0m

Running Jupyter-Book v0.11.3

which was resolved by pinning jupyter-book to exactly version 0.10.0 (the version the book was initially built for). With any version above that, it did not work, due to 1- a weird SoS issue during the build and/or 2- a mismatch in how the ToC needs to be formatted in newer versions.

@mathieuboudreau
Member Author

After a lot of back and forth (my repo's RoboNeuro files being locked after each build, no logs sent to me when errors occurred, and no UI messages in the browser), and with a lot of help from @ltetrel, I was finally able to successfully build a book using RoboNeuro!

Screen Shot 2021-09-15 at 12 33 07 PM

Before submitting officially, I'll have to do a bit of cleanup in the book, due to the transition needed from the old jupyter book format to the current one (the equations and figures aren't centered anymore), but that should be quick, I'd say.

Here's some other things of note:

When my book finished building, it didn't alert me in the browser. Instead, it showed:

Screen Shot 2021-09-15 at 12 35 02 PM

But if I went back and built again, it said it found the book and opened it.

Also, while RoboNeuro allows me to give a branch or git ref, the submission form doesn't:

Screen Shot 2021-09-15 at 12 36 11 PM

@ltetrel

ltetrel commented Sep 15, 2021

Congrats on the submission!

RoboNeuro files being locked after each build

I tried submitting your repo "manually" (calling the API via the command line instead of the website or RoboNeuro), and the lock file is correctly deleted. I suspect that what happened is this: when the user (you) closes their session (the submission page), the API call is somehow canceled. As a result, the submission build is canceled and the lock file stays on the server (so the API treats the submission as currently active).
I added a mechanism whereby the lock file is automatically deleted after 30 min, which should prevent users from being locked out in the future. But @agahkarakuzu, you should check how RoboNeuro manages users closing their sessions (if that is the issue, and not something else).
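The 30-minute expiry could be as simple as a periodic sweep over the lock directory; a sketch only (the directory layout and the *.lock naming are assumptions, not the server's actual implementation):

```shell
# Sketch: delete build lock files older than 30 minutes (GNU find)
cleanup_stale_locks() {
    lock_dir="$1"
    find "$lock_dir" -name '*.lock' -mmin +30 -delete
}
```

Run from cron (or from the API itself) so a crashed or abandoned build can no longer block the next submission indefinitely.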

@notatallshaw

notatallshaw commented Sep 16, 2021

You pinged pypa/pip#10201 so I took a quick look at the docker file: https://github.com/qMRLab/t1_book_update/blob/neurolibre/binder/Dockerfile#L45

And I built an independent requirements file based on it, with which I was able to reproduce the excessive backtracking issue:

octave_kernel
sos==0.17.7
sos-notebook==0.17.2
sos-python==0.9.12.1
sos-bash==0.12.3
sos-matlab==0.9.12.1
sos-ruby==0.9.15.0
sos-sas==0.9.12.3
sos-julia==0.9.12.1
sos-javascript==0.9.12.2
sos-r==0.9.12.2
scipy
plotly==3.10.0
flask
ipywidgets
nbconvert==5.4.0
jupyterlab==2.2.0 
jupyter-book==0.10.0
jupytext
repo2data

I tested it using the optimized version of backtracking I have proposed here: pypa/pip#10479. It quickly produced this error:

ERROR: Cannot install -r .\reqs.txt (line 18), -r .\reqs.txt (line 3), jupyter-book and nbconvert==5.4.0 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested nbconvert==5.4.0
    sos-notebook 0.17.2 depends on nbconvert>=5.1.1
    jupyter-book 0.10.0 depends on nbconvert<6
    myst-nb 0.11.1 depends on nbconvert~=5.6

In my testing, if I loosen the user nbconvert requirement from nbconvert==5.4.0 to nbconvert, I am able to install the requirements even on regular pip 21.2.4.
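In other words, the one-line change that made the requirement set above resolvable (an illustrative diff; worth re-testing against the actual Dockerfile):

```diff
-nbconvert==5.4.0
+nbconvert
```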
