
Enable Visualizations for Dev Container #3523

Merged
ruffsl merged 38 commits into ros-navigation:main from ruffsl:devcontainer-visualization on Apr 26, 2023

Conversation

@ruffsl force-pushed the devcontainer-visualization branch from 169a8b6 to 71f5a59 on April 3, 2023 14:11
@ruffsl added the devcontainer label Apr 3, 2023
Comment thread on Dockerfile
FROM dever AS visualizer

# install foxglove dependencies
RUN echo "deb https://dl.cloudsmith.io/public/caddy/stable/deb/debian any-version main" > /etc/apt/sources.list.d/caddy-stable.list
Member

Maybe we should have a separate docker container in .devcontainer for this? I'd like to keep the root Dockerfile as straightforward as possible for use as the base of other people's systems.

@ruffsl
Member Author

ruffsl commented Apr 7, 2023

Continuing to use these PRs as a pseudo dev blog, tagged via devcontainer, to document my progress, I'll write down my thoughts so far:

Initially I had intended to incorporate Docker Compose into the dev container configuration, spinning out the visualization components as services, as there is a rather nice separation of concerns and interfaces. However, there are a number of pros and cons to such an approach, which I'll document below.

The nice thing about using compose with dev containers is that it would allow for a more modular development environment, letting the visualization components be self-contained and developed or used independently. Using separate containers would also allow the dependencies of each to be isolated at build and runtime, avoiding the need to add to the base image for the main development container.

However, there are a number of caveats to using Docker Compose with dev containers, largely due to the additional complexity of Compose itself and of incorporating it into the dev container workflow:

  • Install Setup
  • Lifecycle Management
    • Currently the visualization components are rather stateful, as when gzserver is restarted, the bridges for gzweb or foxglove do not gracefully reconnect. Thus these bridges need to be frequently restarted as well.
    • This is simple enough to do with tasks in VSCode, when such processes are co-located in the same container, but becomes more involved when using docker-compose CLI or docker socket APIs.
  • Integration Complexity
    • To even manage the lifecycle of containers from docker compose, the docker socket must also be granted to the dev container. This is a rather significant resource to grant, and could lead to unintended outcomes for users. E.g. accidentally restarting the main dev container when attempting to only restart peripheral visualization services.
  • Abstract Indirection
    • Using compose can expose a number of footguns, where the abstraction layers across devcontainer.json, docker-compose.yaml, and docker buildx generate a lot of indirection, resulting in quirky behaviors, unintuitive outcomes, and cryptic errors.
    • E.g. docker-compose by default prepends the project’s name to all resources it initializes, including containers, volumes, networks, etc. Thus, if a user attempts to specify a volume to mount, named ccache, compose will create/mount a volume named <COMPOSE_PROJECT_NAME>_ccache, or navigation2_devcontainer_ccache in my case (see the compose sketch after this list).
    • While this helpfully avoids collision across volumes and other project resources, it may not be immediately obvious to the user why an existing volume with the exact name specified was not mounted instead, or why names of things change based on the folder the git repo is cloned into.
  • Consistency
    • While splitting out the visualization into separate containers allows for the use of separate images, it would be harder to keep the image layers from various stages consistent and shared to reduce disk usage, or ensure compatibility across various bridge versions and network transports.
    • Sharing image layers helps reduce overall disk usage, which is handy for local development with limited free disk space, but is also economical for users of Codespaces, as the GB/month usage cost is based on the total disk usage of all images.
  • Message Interfaces
    • The network based bridge used by Foxglove needs access to the message interface definitions to properly connect and forward network traffic. Nav2 includes a number of custom message types that foxglove_bridge must resolve, which can be done by sourcing the overlay.
    • While this could be done by building just the message packages into the bridge container image used by such a compose service, and while message packages change relatively infrequently, keeping them consistent and in sync with the rest of the project's development would be tedious and error prone.
  • Simulation Assets
    • The gazebo web client needs access to all the respective models used in simulation to properly visualize the entire world scene and robots. These models are currently bundled and installed with existing ROS packages, as opposed to being pulled from some remote model database, like models.gazebosim.org or app.gazebosim.org.
    • While these assets could be copied and relocated into a mounted volume shared by both the dev container and gzweb container, again keeping this consistent over development cycles and package updates would not be as simple as a monolithic approach.
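
To make the volume-naming footgun above concrete, here is a minimal compose sketch; the ccache volume name, mount path, and service are hypothetical, not taken from this PR's actual configuration.

# docker-compose.yaml (hypothetical sketch)
# Compose prefixes named resources with the project name, so the volume below
# is created as <COMPOSE_PROJECT_NAME>_ccache, e.g. navigation2_devcontainer_ccache.
services:
  devcontainer:
    image: navigation2:rolling
    volumes:
      - ccache:/root/.cache/ccache
volumes:
  ccache: {}
  # To reuse an existing volume under its exact name instead, pin it explicitly:
  # ccache:
  #   external: true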

As for separate Dockerfiles: while we could split out these stages into separate Dockerfiles, this would complicate our current docker build process, given that such an image could only build FROM stages exported to an image tag. This would require multiple invocations of the docker build command, and would not have the same clear build cache behavior as the current approach, nor would that be as straightforward to implement using the devcontainer config.
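
For illustration, roughly what the split-Dockerfile approach would require; the file paths, tags, and FROM_IMAGE build argument below are hypothetical:

# First export the base stage to a tag the second build can reference...
docker build -t navigation2:tester --target=tester .
# ...then build the visualization image FROM that tag via a separate Dockerfile.
docker build -t navigation2:visualizer -f .devcontainer/Dockerfile \
  --build-arg FROM_IMAGE=navigation2:tester .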

As a compromise, I've rearranged the stages to keep the linearity of the nav2 build and test process by moving the visualization stages to the end of the Dockerfile. Yet I've also added a final stage, exported by default, to maintain the same docker build behavior as before. Pointing the devcontainer.json to a stage within this single Dockerfile allows users to build from the inline cache provided by our CI images, while still allowing them to bust that cache when they need to locally modify earlier stages or layers in the Dockerfile build. If a user wants to avoid installing the additional dependencies for visualization, they can simply update the devcontainer.json to point to a different stage.
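
As a rough sketch of what this compromise looks like from the devcontainer.json side (the cache image value and name below are illustrative, not the PR's exact configuration):

// .devcontainer/devcontainer.json (illustrative sketch)
{
  "name": "Nav2",
  "build": {
    "dockerfile": "../Dockerfile",
    // point at the visualizer stage to opt into gzweb/foxglove,
    // or at an earlier stage to skip those extra dependencies
    "target": "visualizer",
    // hypothetical CI image used as an inline build cache
    "cacheFrom": "ghcr.io/ros-planning/navigation2:main"
  }
}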

These are things we could probably iterate over in the future, as tooling improves, but for now I'll just keep it as simple as I can.

@SteveMacenski
Member

SteveMacenski commented Apr 11, 2023

As for separate Dockerfiles: while we could split out these stages into separate Dockerfile

I'm not actually suggesting splitting the stages into multiple Dockerfiles (unless you felt it was best), but rather separating the root Dockerfile for basic local use from a more complex one meant for Codespaces. You have a .devcontainer directory meant for that sort of asset; I think it would be logical to have the Dockerfile for that contained in there. There's nothing preventing us from having a couple of Dockerfiles for different purposes with slight differences.

So, completely different Dockerfiles for different purposes, with different "stuff" in them.

@mergify
Contributor

mergify Bot commented Apr 11, 2023

This pull request is in conflict. Could you fix it @ruffsl?

ruffsl added 22 commits April 11, 2023 21:50
to install demo dependencies
that fixes issues with deploy.sh
- osrf/gzweb#248
as migrating the python3 scripts still hasn't resolved the issue
by keeping sequential builder and tester stages adjacent
while keeping tester stage the default exported target
for bridge and studio
to avoid nagging the user to select one
as currently none really support our use case
to ensure nav2 message types are defined
by inlining all args into command
and sourcing workspace setup
to avoid host/platform specific x11 quirks
exposed by vscode automatic x11 forwarding

This is needed to provide Gazebo a virtual frame buffer,
as it still needs one after all these years.
This also avoids the need to modify launch files to call xvfb-run.

- microsoft/vscode-remote-release#8031
- gazebosim/gazebo-classic#1602
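
For context, a rough sketch of the kind of virtual framebuffer setup these commits describe; the display number and entrypoint placement are assumptions, not the PR's exact implementation:

#!/bin/bash
# hypothetical container entrypoint snippet: start a virtual framebuffer so
# gzserver has a display without wrapping every launch invocation in xvfb-run
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
exec "$@"
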
@SteveMacenski
Member

Understandable, but I also don't think adding in a lot of bloat like gzweb and foxglove is a good trade-off to apply to every user, considering foxglove is close, but does not yet have everything needed to replace rviz as an option. Exploding Docker image sizes is a real concern if folks are using that Dockerfile for generating their own images.

or cross talk between containers by default
also avoids interfering with vscode's X11 forwarding
@mergify
Contributor

mergify Bot commented Apr 18, 2023

@ruffsl, your PR has failed to build. Please check CI outputs and resolve issues.
You may need to rebase or pull in main due to API changes (or your contribution genuinely fails).

@ruffsl
Member Author

ruffsl commented Apr 18, 2023

adding in a lot of bloat like gzweb and foxglove is a good trade-off

Yet, it's not a trade-off they have to make. In fact, they'd have to opt into it by explicitly targeting that stage: a6d9531

docker build -t navigation2:rolling --target=visualizer .

Without targeting that stage, neither the dever nor the visualizer stage will be exported to the image, let alone be built.

It may help to check the Docker documentation to get a better feel for how Dockerfile directives map to Docker image layers.
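
For reference, a minimal sketch of the stage ordering being described; only the dever and visualizer names appear in this PR, the rest are placeholders. With BuildKit, stages that the requested target does not depend on are never built.

# illustrative stage layout, not the PR's exact Dockerfile
FROM ros:rolling AS builder
# ... build and test the nav2 workspace ...

FROM builder AS dever
# ... interactive development tooling ...

FROM dever AS visualizer
# ... gzweb and foxglove bridge dependencies, opt-in via --target=visualizer ...

FROM builder AS exporter
# final default stage, so a plain `docker build .` behaves as before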

@ruffsl
Member Author

ruffsl commented Apr 18, 2023