Enable Visualizations for Dev Container #3523
Conversation
Force-pushed from 169a8b6 to 71f5a59
```dockerfile
FROM dever AS visualizer

# install foxglove dependencies
RUN echo "deb https://dl.cloudsmith.io/public/caddy/stable/deb/debian any-version main" > /etc/apt/sources.list.d/caddy-stable.list
```
Maybe we should have a separate docker container in .devcontainer for this? I'd like to keep the root Dockerfile as straightforward as possible for use as the base of other people's systems.
Force-pushed from 35e9f91 to 1c44200
In continuing to use these PRs as a pseudo dev blog: initially, I had intended to incorporate Docker Compose into the dev container configuration, spinning out the visualization components as services, since that provides a rather nice separation of concerns and interfaces. However, there are a number of pros and cons to such an approach, which I'll document below.

The nice thing about using Compose with dev containers is that it would allow for a more modular development environment, letting the visualization components be self-contained and developed or used independently. Using separate containers would also allow the dependencies of each to be isolated at build and runtime, avoiding the need to add to the base image for the main development container. However, there are a number of caveats in using Docker Compose with dev containers, largely due to the additional complexity of Docker Compose itself and of incorporating it into the dev container workflow.
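For reference, the Compose-based layout I considered would look roughly like the sketch below. The service names, image tag, and file locations here are hypothetical, not what was actually committed:

```yaml
# .devcontainer/docker-compose.yml (hypothetical layout)
services:
  dev:
    image: ghcr.io/ros-planning/navigation2:main  # assumed base image tag
    volumes:
      - ..:/workspace:cached
    command: sleep infinity  # keep the dev service alive for VS Code to attach
  visualizer:
    build:
      context: ..
      target: visualizer      # assumed stage carrying visualization deps
    network_mode: service:dev # share the dev service's network namespace
```

The devcontainer.json would then point at the `dev` service via the standard Compose properties:

```jsonc
// .devcontainer/devcontainer.json (hypothetical)
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "dev",
  "workspaceFolder": "/workspace"
}
```

Sharing the network namespace via `network_mode: service:dev` is one way to let ROS discovery work across the two containers without extra configuration.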
As for separate Dockerfiles: while we could split these stages out into separate Dockerfiles, this would complicate our current docker build process. As a compromise, I've rearranged the stages to keep the linearity of the nav2 build and test process, by moving the visualization stages to the end of the Dockerfile. Yet I've also added a final stage, exported by default, to maintain the same docker build behavior as before.

Pointing the devcontainer.json at a stage within this single Dockerfile allows users to build from the inline cache provided by our CI images, while still allowing them to bust that cache when they need to locally modify earlier stages or layers in the Dockerfile build. If a user wants to avoid installing the additional dependencies for visualization, they can simply update the devcontainer.json to point to a different stage. These are things we could probably iterate on in the future as the tooling improves, but for now I'll keep it as simple as I can. |
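The rearrangement described above can be sketched as follows. The stage names here are illustrative, not necessarily those used in the actual nav2 Dockerfile:

```dockerfile
# Sketch of the rearranged stage order (stage names are assumptions)
FROM ros:rolling AS builder
# ... build nav2 ...

FROM builder AS tester
# ... run tests ...

# Visualization stages moved to the end, off the critical build path
FROM tester AS visualizer
# ... install gzweb / foxglove dependencies ...

# Final default-exported stage: a plain `docker build .` resolves this
# stage and its ancestors only, so the visualizer layers are never
# built unless explicitly targeted
FROM tester AS exporter
```

Opting into the visualization dependencies is then just a matter of setting the build target in devcontainer.json:

```jsonc
// devcontainer.json opts into the extra layers via the build target
{
  "build": { "dockerfile": "Dockerfile", "target": "visualizer" }
}
```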
I'm not actually suggesting multi-stage Dockerfiles (unless you felt it was best), but rather separating the root Dockerfile for basic local use from a more complex one meant for Codespaces. So: completely different Dockerfiles for different purposes, with different "stuff" in them. |
This pull request is in conflict. Could you fix it @ruffsl?
- to install demo dependencies
- located by the aws SDF model files - aws-robotics/aws-robomaker-small-warehouse-world#24
- that fixes issues with deploy.sh - osrf/gzweb#248
- as migrating the python3 scripts still hasn't resolved the issue
- by keeping sequential builder and tester stages adjacent, while keeping the tester stage the default exported target
- using the ROS VS Code extension - ms-iot/vscode-ros#588
- for bridge and studio
- using dependsOn
- to avoid nagging the user to select one, as currently none really support our use case
- to ensure nav2 message types are defined, by inlining all args into the command and sourcing the workspace setup
- to avoid host/platform specific X11 quirks exposed by VS Code's automatic X11 forwarding. This is needed to provide Gazebo a virtual frame buffer, as it still needs one after all these years. It also avoids the need to modify launch files to call xvfb-run - microsoft/vscode-remote-release#8031 - gazebosim/gazebo-classic#1602
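The virtual frame buffer mentioned in the last point can be provided at the container level rather than per launch file. A rough sketch, assuming `Xvfb` is installed in the image and display `:99` is free:

```shell
# Start a virtual X server in the background (display number is an assumption)
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99

# Gazebo's rendering (e.g. camera sensors) now has a display to bind to,
# independent of VS Code's automatic X11 forwarding
gzserver --verbose &
```

Something like this could be run from a devcontainer `postStartCommand` or an entrypoint script, so individual launch files never need to be wrapped in `xvfb-run`.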
Understandable, but I also don't think adding in a lot of bloat like gzweb and foxglove is a good trade-off to apply to every user, considering foxglove is close but does not yet have everything needed to replace rviz as an option. Exploding docker image sizes is a real concern if folks are using that Dockerfile for generating their own images.
or cross talk between containers by default; this also avoids interfering with VS Code's X11 forwarding
@ruffsl, your PR has failed to build. Please check CI outputs and resolve issues.
Yet it's not a trade-off they have to make. In fact, they'd have to opt into it by explicitly targeting that stage: a6d9531

Without targeting that stage, none of its layers will be built or included in the exported image. It may help to check the Docker documentation to get a better feel for how Dockerfile directives map to Docker image layers:
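To make the opt-in concrete, here is a minimal illustration using standard docker CLI flags (the image names and the `visualizer` stage name are hypothetical):

```shell
# A plain build resolves only the default final stage; with BuildKit,
# stages outside its dependency chain (e.g. `visualizer`) are skipped
docker build -t nav2:latest .

# Opting in: explicitly target the visualization stage instead
docker build --target visualizer -t nav2:viz .

# Compare the layer stacks of the two resulting images
docker history nav2:latest
docker history nav2:viz
```

The `docker history` output makes it easy to confirm that the untargeted build carries none of the visualization layers, and so pays none of their image-size cost.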
Enable the use of visualization tools such as: