
VALAN: Vision and Language Agent Navigation

VALAN, short for Vision and Language Agent Navigation, is a lightweight and scalable software framework for deep reinforcement learning based on the SEED RL architecture. The framework facilitates the development and evaluation of embodied agents for solving grounded language understanding tasks, such as Vision-and-Language Navigation and Vision-and-Dialog Navigation, in photo-realistic environments such as Matterport3D and StreetLearn. These tasks require agents to interpret natural language instructions or dialog and navigate photo-realistic environments to reach prescribed navigation goals. We have added a minimal set of abstractions on top of SEED RL, allowing us to generalize the architecture to a variety of other RL problems.

This package contains the implementations of the following problems:

  • VLN task on R2R dataset in Matterport3D environment (paper)
  • NDH task on CVDN dataset in Matterport3D environment (paper)
  • SDR and VLN tasks on Touchdown dataset in StreetLearn environment (paper)

See Mehta et al. for details about our implementation for Touchdown and the data supporting it.

For a detailed description of the architecture, please read Lansing et al. Please cite the paper if you use code from this repository in your work.

BibTeX

@article{lansing2019valan,
    title={VALAN: Vision and Language Agent Navigation},
    author={Larry Lansing and Vihan Jain and Harsh Mehta and Haoshuo Huang and Eugene Ie},
    year={2019},
    eprint={1912.03241},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Prerequisites

Before getting started, you need two main packages to run VALAN: Docker and Git.
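As a quick sanity check, something along these lines verifies both tools are available and fetches the repository (the clone URL follows the standard GitHub pattern for google-research/valan; adjust as needed for your setup):

    # Verify that Docker and Git are installed.
    docker --version
    git --version

    # Clone the repository and enter it.
    git clone https://github.com/google-research/valan.git
    cd valan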

Usage

Running on local machine

  • Quick Start

    For a quick start, run the following toy example, which works out of the box. It launches a training job inside a Docker container and uses tmux to manage the learner together with 3 train actors, 3 eval actors, and an eval aggregator, each of which runs asynchronously in its own window. The toy dataset contains only 3 R2R scans (houses) and a dozen paths, and is copied to /tmp/valan/testdata/. Make sure you are inside the valan/ directory before starting, then run the following:

    bash launch_locally_with_docker.sh R2R R2R_3scans R2R_3scans 3 3
    

    To stop an individual worker, type CTRL+C in the worker's tmux window. To stop training, switch to the learner's window by typing CTRL+b then 0, then type CTRL+C to kill the learner. To terminate and quit the Docker container, type CTRL+b then d, which detaches the container, kills all tasks inside, and removes the container.

  • Custom Training

    You can run the job with your own R2R dataset as follows:

    bash launch_locally_with_docker.sh PROBLEM TRAIN_DATA EVAL_DATA NUM_ACTORS NUM_EVAL_ACTORS
    

    Note that the full R2R dataset is quite large, with 56 scans and 14k paths. Although the training job can run with only a few actors on a single machine, doing so can be very slow. It is therefore recommended to run the full R2R dataset, or other data of similar size, on a distributed platform, e.g., GCP. An illustrative local invocation with placeholder split names is sketched below.
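    As a minimal sketch, assuming you have prepared train and eval splits named R2R_train and R2R_val_seen (placeholder names; substitute whatever your data preparation produces), a run with 8 train actors and 2 eval actors would look like:

    bash launch_locally_with_docker.sh R2R R2R_train R2R_val_seen 8 2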

Running in a distributed environment (e.g., GCP)

We provide the tooling and a concrete example for running VALAN on GCP with distributed learning. Note that training on AI Platform requires signing up for a GCP account and will incur charges for the compute resources used. See: https://cloud.google.com/ai-platform
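Submission typically goes through gcloud. As a rough sketch only, assuming a custom container image has been built and pushed (the job name, image URI, region, and config file below are hypothetical placeholders; prefer the scripts and configs shipped with this repository):

    # Hypothetical example: submit a custom-container training job to AI Platform.
    # The job name, image URI, and config file are placeholders.
    gcloud ai-platform jobs submit training valan_r2r_job \
        --region us-central1 \
        --master-image-uri gcr.io/YOUR_PROJECT/valan:latest \
        --config distributed_config.yaml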

Disclaimer

This is not an official Google product.
