Preprint preview service: build fail, no error description #13
Alright, so after some investigation:
This point isn't clear to me. 1- I do have a …
Should I just add a requirements.txt file with only jupyter-book and jupytext as dependencies? Will this still work with my Dockerfile present in the folder? I'll try and see.
Do we have any live demos of a repository that RoboNeuro was tested on, so that I can try to duplicate its settings for my own?
@mathieuboudreau jupyter-book and jupytext won't be installed via requirements.txt, as the Dockerfile is not using that file. Can you add the minimal pip requirements of NeuroLibre to the pip list in the Dockerfile? If that works, we can add additional instructions or even base Docker images for NeuroLibre.
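For illustration, a minimal sketch of what adding those requirements to the Dockerfile's pip step could look like (the package set is an assumption based on this thread, and pins are omitted):

```dockerfile
# Sketch only: install NeuroLibre's minimal build requirements into the
# image's pip environment (jupyter-book and jupytext, per this thread).
RUN pip install --no-cache-dir jupyter-book jupytext
```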
Ahh, gotcha; though RoboNeuro fails for me within a fraction of a second, so I doubt it even has time to run the Dockerfile. I'll update it regardless, but I don't think that's the only problem I'm encountering.
Done (repo), and as expected, I got the same error. Is RoboNeuro maybe not compatible with Dockerfiles?
OK, the issue was caused by a type-casting bug, mostly because I am not a savvy Ruby programmer :) The new deployment resolved the problem; now you can use the service for this repo/branch. The image build was successful, but the book build failed. Here's the log:
@mathieuboudreau interesting; unless you fixed the book build error, it should be displaying that the book build fails. Do you have logs mailed to you indicating that the builds were successful? So far I have not run into a directory problem; when the book build completes, the links work.
@mathieuboudreau here is the error log that the API should have returned to you:
The latest successful Binder build was at 13:43 on September 2nd (and it exists both locally and on the registry):
So I tried to build your environment yesterday and today, launching the build manually on https://binder.conp.cloud/
You will need to reduce and optimize the Dockerfile (make sure you are not re-installing software that already exists in jupyter/base-notebook:8ccdfc1da8d5, pin your versions, etc.). On July 29th, the build was successful, so something is wrong with your newest updates:
Again, I want to re-emphasize that using Dockerfiles is not the preferred way to spawn a NeuroLibre instance, precisely because of this kind of issue.
Hi @ltetrel, thanks for the heads-up. While I understand, it's impossible to configure the environment for this notebook with only requirements, environment, or other such files. Also, as more and more people are adopting Docker and Singularity, I think this will be a necessary service for NeuroLibre to provide.
I'm fairly certain that this Dockerfile built on MyBinder; I'll try again.
I'm wondering if this recent feature (and/or bug) in pip might be the cause of the much longer build time now, as I'm experiencing something similar in the MyBinder build log. It's not stuck here, but I've been downloading different versions of the …
(no idea why it's taking 5+ minutes for 40 kB packages, though)
Here are some tips on how to deal with the new pip dependency resolver: https://stackoverflow.com/questions/65122957/resolving-new-pip-backtracking-runtime-issue
Good catch! I think for this current submission, the best advice would be to reduce the complexity of the dependencies https://pip.pypa.io/en/stable/user_guide/#audit-your-top-level-requirements |
Yeah, I saw that; the pip dependencies we explicitly specify in the Dockerfile are the minimum ones required for us, so I might try using the legacy resolver since we're in a rush at the moment.
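For reference, a sketch of how the legacy resolver can be opted into during a Docker build (neither line is confirmed to be the one actually used here):

```dockerfile
# Sketch: pip >= 20.3 accepts the flag on each install invocation...
RUN pip install --use-deprecated=legacy-resolver -r requirements.txt

# ...or it can be set for all subsequent pip calls via pip's
# option-to-environment-variable mapping.
ENV PIP_USE_DEPRECATED=legacy-resolver
```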
I will add a note on that issue in the docs, since it could happen to other people with complex submissions as well.
So, by adding this line to the Dockerfile, the image built much more quickly on MyBinder and executed correctly. However, when making the same change in the neurolibre branch (which my MyBinder branch was forked from), RoboNeuro immediately failed again without building, and did not send me a log. @agahkarakuzu how did you resolve this the other day?
@mathieuboudreau I gave it a try
If so, @ltetrel, I receive …
Yup, I tried twice.
Yes, there is a lock file that I just deleted manually because of the previous ongoing build (I still need to manage these better). Can you retry now? Also, I think it would be important to propagate all the errors the API returns to the user, @agahkarakuzu, as well as the build log. From the user's perspective, receiving something empty is really frustrating 😿
@ltetrel I agree! I will need to deal with that 409 case; it is a bit tricky, so I will sit down and think about it :D Now it is running without a 409; I will let you know.
@ltetrel what should we say to the user in case …
And a message saying that if the submission is taking too long (as happened here), they should get in touch with us by e-mail.
It may have terminated because of the front-end worker:
Do you emit something specific when the build times out on your end?
No, I don't manage that; I don't have control over it. This info is on the BinderHub cluster.
@mathieuboudreau Now that the image is correctly built on the server, I found that:
I would advise you to use the …
@mathieuboudreau Also, be careful with this line: basically, all your data will be saved into the image, which makes it really large (currently 3.4 GB). You should use a data requirement for this type of behaviour: https://docs.neurolibre.org/en/latest/SUBMIT.html#data
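For context, a sketch of what such a data requirement could look like, assuming the repo2data `data_requirement.json` format described in the linked NeuroLibre docs (all field values here are hypothetical; check the docs for the exact schema):

```json
{
  "src": "https://example.org/mydata.zip",
  "dst": "./data",
  "projectName": "my-preprint-data"
}
```

The idea is that the data is then fetched and cached server-side instead of being baked into the Docker image.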
Thanks for the tip, Loic. I don't think we're actually downloading any data using repo2docker in this instance; the build size is due to our dependencies, but I'll double-check. I think we just added that line once upon a time while trying out your package.
Hmmm, I understand the problem. However, I don't think I can just copy the repo either, because I need to run the commands below it in the same "session" (i.e., different RUNs don't "see" each other; as far as I understand it, if I copy the repo that way, a RUN below it won't be able to execute: …, but maybe @agahkarakuzu can correct me).
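For what it's worth, files added by a COPY instruction become part of the image filesystem, so later RUN instructions can see them; what does not carry over between RUNs is shell state such as the current directory or exported variables. A sketch (paths and base image assumed from this thread):

```dockerfile
# Sketch: COPY-ed files persist into every subsequent layer.
FROM jupyter/base-notebook:8ccdfc1da8d5
COPY --chown=${NB_UID} . ${HOME}/repo

# This later RUN can read the copied files. Each RUN starts a fresh
# shell, so a `cd` or exported variable from a previous RUN would not
# carry over, but filesystem changes (like the COPY above) do.
RUN pip install -r ${HOME}/repo/requirements.txt
```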
Documentation will need to be added; when I read the NeuroLibre docs, I wasn't informed of this limitation regarding Dockerfiles or how to configure them correctly.
Yep, I made a lot of changes these past few days. Can you check now to see if it is clearer?
@ltetrel do we have a demo repo that builds on RoboNeuro, one that you tested and validated the service on, so that I can go in and explore the directory structure inside the Docker image and see what it should look like for the Jupyter Book to work correctly?
I'd like to avoid too many RoboNeuro builds, and I'm afraid that blindly debugging the Dockerfile to get the correct settings will take several tries.
You can check this one: https://github.com/ltetrel/nha2020-nilearn |
The principal solution to this issue was resolving a bug where I was checking out a repo after cloning it, but before opening the directory. |
I then encountered another issue, which was resolved by reverting …
After a lot of back and forth, due to my repo's RoboNeuro files being locked after each build, no logs being sent to me when errors occurred, and no UI messages in the browser, and with a lot of help from @ltetrel, I was finally able to successfully build a book using RoboNeuro! Before submitting officially, I'll have to do a little cleanup in the book because of the transition needed from the old Jupyter Book format to the current one (the equations and figures aren't centered anymore), but that should be quick, I'd say. Here are some other things of note: when my book finished building, it didn't alert me in the browser; instead, it showed: … But if I went back and built again, it said it found the book and opened it. Also, while RoboNeuro allows me to give a branch or git ref, the submission doesn't:
Congrats on the submission!
I tried "manually" submitting your repo (calling the API via the command line instead of the website or RoboNeuro), and the lock file is correctly deleted. I suspect that when the user (you) closes their session (the submission page), the API call is somehow canceled. As a result, the submission build is canceled and the lock file stays on the server (so the API treats the submission as currently active).
You pinged pypa/pip#10201, so I took a quick look at the Dockerfile: https://github.com/qMRLab/t1_book_update/blob/neurolibre/binder/Dockerfile#L45 And I built an independent requirements file based on it, with which I was able to reproduce the excessive backtracking issue:
I tested it using the optimized version of backtracking I proposed here: pypa/pip#10479. It quickly produced this error:
In my testing, if I loosen the user's nbconvert requirement from …
Following up on: neurolibre/docs.neurolibre.org#13
Kiril was able to update the T1 book to use version 0.10.x of Jupyter Book (repo, site).
I forked and converted the repo in a branch to match the format in the docs, and tried a preview submission (https://roboneuro.herokuapp.com).
I am still encountering this error, with no accompanying messages.
Any clue @agahkarakuzu ?