Sphere2Vec: A General-Purpose Location Representation Learning over a Spherical Surface for Large-Scale Geospatial Predictions
Code for reproducing the results in our Sphere2Vec paper, published in the ISPRS Journal of Photogrammetry and Remote Sensing.
Please visit my Homepage for more information.
- Python 3.7+
- PyTorch 1.7.1+
- Other required packages are listed in `main/requirements.txt`.
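For example, the dependencies can be installed with pip from the repository root (a minimal sketch, assuming a Python 3.7+ environment is already active):

```bash
# Install the required packages into the active environment.
pip install -r main/requirements.txt
```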
The main code is located in the `main/` folder.
`run_sphere2vec.sh` is used to train and evaluate any location encoder described in the paper.
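A minimal invocation sketch (assuming the datasets are already set up as described below; the script's internal paths and settings may need to be edited first):

```bash
# Run the training/evaluation script from inside the main/ folder.
cd main
bash run_sphere2vec.sh
```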
- The species fine-grained recognition dataset can be downloaded from this website.
- All training datasets should be downloaded to the `./geo_prior_data/` folder.
- Please structure all datasets as shown in `./main/path.py` (see the sketch below).
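A hedged setup sketch; the exact subfolder layout is defined in `./main/path.py`, so verify against that file rather than this illustration:

```bash
# Create the data root expected by the code, then place the downloaded
# datasets under it; the subfolder layout must match ./main/path.py.
mkdir -p ./geo_prior_data
# (unpack the downloaded dataset archives into ./geo_prior_data/ here)
```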
The codebase uses location encoder model names that differ from those used in the paper. The table below lists the correspondence between them.
| Model Names in the Paper | Model Names in the Code |
|---|---|
| xyz | xyz |
| wrap | geo_net |
| wrap + ffn | geo_net_fft |
| rbf | rbf |
| rff | rff |
| Space2Vec-grid | gridcell |
| Space2Vec-theory | theory |
| NeRF | nerf |
| Sphere2Vec-sphereC | sphere |
| Sphere2Vec-sphereC+ | spheregrid |
| Sphere2Vec-sphereM | spheremixscale |
| Sphere2Vec-sphereM+ | spheregridmixscale |
| Sphere2Vec-dfs | dft |
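If you script over several encoders (e.g., to sweep all Sphere2Vec variants), the table can be kept as a lookup in your launcher; a minimal bash sketch, where the array name and loop are ours and not part of the codebase:

```bash
# Paper-name -> code-name lookup, taken from the table above.
declare -A ENCODER_NAME_MAP=(
  ["Sphere2Vec-sphereC"]="sphere"
  ["Sphere2Vec-sphereC+"]="spheregrid"
  ["Sphere2Vec-sphereM"]="spheremixscale"
  ["Sphere2Vec-sphereM+"]="spheregridmixscale"
  ["Sphere2Vec-dfs"]="dft"
)
for paper_name in "${!ENCODER_NAME_MAP[@]}"; do
  echo "Paper: ${paper_name} -> code: ${ENCODER_NAME_MAP[$paper_name]}"
done
```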
Comparison of the predicted spatial distributions of example species in the iNat2018 dataset from different location encoders
Comparison of the predicted spatial distributions of example land use types in the fMoW dataset from different location encoders
If you find our work useful in your research, please consider citing our ISPRS Journal of Photogrammetry and Remote Sensing 2023 paper.
@article{mai2023sphere2vec,
title={Sphere2Vec: A General-Purpose Location Representation Learning over a Spherical Surface for Large-Scale Geospatial Predictions},
author={Mai, Gengchen and Xuan, Yao and Zuo, Wenyun and He, Yutong and Song, Jiaming and Ermon, Stefano and Janowicz, Krzysztof and Lao, Ni},
journal={ISPRS Journal of Photogrammetry and Remote Sensing},
year={2023},
volume={202},
pages={439--462},
publisher={Elsevier}
}
If you use the grid location encoder, please also cite our ICLR 2020 paper and our IJGIS 2022 paper:
@inproceedings{mai2020space2vec,
title={Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells},
author={Mai, Gengchen and Janowicz, Krzysztof and Yan, Bo and Zhu, Rui and Cai, Ling and Lao, Ni},
booktitle={International Conference on Learning Representations},
year={2020},
organization={openreview}
}
@article{mai2022review,
title={A review of location encoding for GeoAI: methods and applications},
author={Mai, Gengchen and Janowicz, Krzysztof and Hu, Yingjie and Gao, Song and Yan, Bo and Zhu, Rui and Cai, Ling and Lao, Ni},
journal={International Journal of Geographical Information Science},
volume={36},
number={4},
pages={639--673},
year={2022},
publisher={Taylor \& Francis}
}
If you use the unsupervised learning function, please also cite our ICML 2023 paper. Please refer to our CSP website for more detailed information.
@inproceedings{mai2023csp,
title={CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations},
author={Mai, Gengchen and Lao, Ni and He, Yutong and Song, Jiaming and Ermon, Stefano},
booktitle={International Conference on Machine Learning},
year={2023},
organization={PMLR}
}


