Explanation Concentration (XC) measures the concentration of attributions within a predicted object.
Before going through the instructions on running the XC experiments, make sure you are familiar with OpenPCDet's original repo first. At least learn how to train and test a model, know where the resulting model and evaluation results are stored, and know where the model config files are.
You may skip this section if you just want to use the provided csv files to reproduce the evaluation metrics presented in the paper.
- If you haven't done so, create the following path in your local repo: `output/kitti_models/pointpillar_xai/`.
- Go to `tools/attr_experiment.py`.
- Following the instructions in the comments, set the `method`, `attr_shown`, and `aggre_method` variables. Depending on how you set these variables, the XC values will come from Integrated Gradients (IG) or backprop, use positive or negative attributions, and be aggregated by summing or counting.
- Specify the `model_cfg_file` variable; this is where you stored the config file for the model.
- Specify the `model_ckpt` variable; this is where you stored the model checkpoint file you downloaded.
- If you do not wish to compute XC for all predicted boxes in the dataset, set `check_all` to `False` and set `num_batches` to the number of frames you want to run the script on. Otherwise, leave them at their default values.
- Run `cd tools`, then `python attr_experiments.py`.
- You should then see a new `csv` file created in `output/kitti_models/pointpillar_xai/default`. It contains the frame id (`batch`), TP/FP label (`tp/fp`), object class label (`pred_label`), top class score (`pred_score`), and the XC value (`xc`) for each predicted box; a quick way to inspect this file is sketched below.
- The process for obtaining values from a model trained on Waymo is the same, except that the `csv` file will be placed in `output/waymo_models/<model_config_name>/default`.
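To sanity-check the output, you can load the csv and look at the XC distribution. A minimal pandas sketch, assuming the column names listed above; the file name is hypothetical:

```python
import pandas as pd

# Hypothetical file name; use whatever csv attr_experiment.py actually produced.
df = pd.read_csv("output/kitti_models/pointpillar_xai/default/xc_results.csv")

# Columns documented above: batch, tp/fp, pred_label, pred_score, xc
print(df.groupby("pred_label")["xc"].describe())  # XC distribution per class
print(df.groupby("tp/fp")["xc"].mean())           # mean XC for TP vs. FP boxes
```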
- Go to `tools/get_pts_dist.py`.
- Specify the `model_ckpt` variable; this is where you stored the model checkpoint file you downloaded.
- Specify the `model_cfg_file` variable; this is where you stored the config file for the model.
- Run `cd tools`, then `python get_pts_dist.py`.
- You should then see a new `csv` file created in `output/kitti_models/pointpillar_xai/default`. It contains the frame id (`batch`), TP/FP label (`tp/fp`), object class label (`pred_label`), top class score (`pred_score`), the distance to the sensor (`dist`), and the number of points (`pts`) for each predicted box; a sketch of how `dist` is presumably defined follows below.
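For reference, the `dist` value is presumably the distance of the predicted box center from the sensor. A minimal sketch of that computation, using the unified box format `(x, y, z, dx, dy, dz, heading)` described later in this README and assuming the LiDAR sensor sits at the origin; `get_pts_dist.py` may use a different convention:

```python
import numpy as np

# A predicted box in OpenPCDet's unified format: (x, y, z, dx, dy, dz, heading).
box = np.array([12.3, -4.1, -0.8, 3.9, 1.6, 1.5, 0.3])

# One plausible definition of "distance to sensor": Euclidean distance of the
# box center from the origin (an assumption, not the script's verified code).
dist = np.linalg.norm(box[:3])
dist_bev = np.linalg.norm(box[:2])  # bird's-eye-view alternative
print(dist, dist_bev)
```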
- If you haven't done so, create a folder named `XAI_results` within the `tools` folder.
- In the `XAI_results` folder, create a subdirectory and put a previously generated `csv` file there (the one containing the top class score, class label, XC, number of points, etc. for the prediction boxes).
- Go to `tools/split_tp_fp.py`.
- Specify the `source_dir` (directory where you stored the csv file) and `file_name` (csv file name) variables.
- Redefine the `field_names` and `data_dict` variables to match the field names in the csv file.
- Run `python split_tp_fp.py`.
- You should now have the TP and FP data split into two csv files; the same split is sketched below.
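The split itself is straightforward. A minimal pandas sketch of the same idea, with a hypothetical file path and assuming the `tp/fp` column stores "TP"/"FP" strings; check your csv header, as the actual script's conventions may differ:

```python
import pandas as pd

src = "XAI_results/my_run/predictions.csv"  # hypothetical path and file name
df = pd.read_csv(src)

# Assumes the tp/fp column holds "TP"/"FP" labels; adjust if yours uses 1/0.
tp = df[df["tp/fp"] == "TP"]
fp = df[df["tp/fp"] == "FP"]
tp.to_csv(src.replace(".csv", "_tp.csv"), index=False)
fp.to_csv(src.replace(".csv", "_fp.csv"), index=False)
```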
The csv files containing the predicted label, top class score, 4 XC scores from backprop, number of points, and distance to sensor for each predicted box generated by the PointPillars models trained on KITTI are available here:
- In each link, the data for TP and FP samples are separated.
- If you haven't done so, create a folder named `XAI_results` within the `tools` folder.
- In the `XAI_results` folder, create a subdirectory and put the files from one of the above links in it.
The XC scores for IG and all XC scores generated from Waymo are scattered across multiple files; we will make them available once we have organized them better.
- Go to `tools/xc_hist.py`.
- Change `XC_term` to one of the 4 XC scores; make sure it matches the name in the XC file header.
- Run `python xc_hist.py --XC_path where\you\saved\theCSVfiles\containing\tp_and_fp\data --pick_class <class_id>`.
- The class id can be 0 (car), 1 (pedestrian), or 2 (cyclist). For Waymo, 0 represents the vehicle class.
- The histogram for the desired class will then be generated in a new folder in `XAI_results`; a quick matplotlib version is sketched below.
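If you want a quick look without running the script, here is a minimal sketch of the same histogram with matplotlib, assuming hypothetical TP/FP file names and that `pred_label` stores the numeric class id:

```python
import pandas as pd
import matplotlib.pyplot as plt

tp = pd.read_csv("XAI_results/my_run/predictions_tp.csv")  # hypothetical names
fp = pd.read_csv("XAI_results/my_run/predictions_fp.csv")

xc_term = "xc"  # must match a column name in the XC file header
class_id = 0    # 0-car for KITTI, 0-vehicle for Waymo

for name, df in (("TP", tp), ("FP", fp)):
    plt.hist(df[df["pred_label"] == class_id][xc_term],
             bins=50, alpha=0.5, density=True, label=name)
plt.xlabel(xc_term)
plt.legend()
plt.savefig("xc_hist_sketch.png")
```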
- If you haven't done so, create a folder named `XAI_results` within the `tools` folder.
- In the `XAI_results` folder, create a subdirectory.
- Go to `tools/xc_eval_simple.py`.
- Specify `dist_n_pts`, `XC_only`, and `skip_xc` as needed, following the instructions in the comments.
- Specify `dataset_name` as "KITTI" or "WAYMO".
- Run `python xc_eval_simple.py --XC_path where\you\saved\theCSVfiles\containing\tp_and_fp\data`.
- You should see a new folder created within the `XAI_results` directory; it will contain the AUROC, AUPR, and AUPR_op of the specific features evaluated on each object class (a sketch of this computation follows below).
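Conceptually, this treats TP-vs-FP separation as a binary classification problem scored by a single feature; the AUROC/AUPR of that feature then measure how well it separates the two groups. A minimal scikit-learn sketch using XC as the feature, with hypothetical file names:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

tp = pd.read_csv("XAI_results/my_run/predictions_tp.csv")  # hypothetical names
fp = pd.read_csv("XAI_results/my_run/predictions_fp.csv")

labels = [1] * len(tp) + [0] * len(fp)        # TP as the positive class
scores = pd.concat([tp, fp])["xc"].to_numpy()  # the feature being evaluated

print("AUROC:", roc_auc_score(labels, scores))
print("AUPR :", average_precision_score(labels, scores))
# AUPR_op presumably swaps the positive class (FP as positive); check
# xc_eval_simple.py for the exact definition.
```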
- Go to `binary_cls.py`.
- Specify the `interested_class`.
- Go to the `MLP` class and modify the number of input features if necessary.
- Change `n_features` to match the number of features you plan to use.
- For the `new_df` variable, specify the names of the features you plan to use.
- Run `python binary_cls.py --XQ_path where\you\saved\theCSVfiles\containing\tp_and_fp\data`.
- You should see the results (accuracy, AUROC, AUPR, AUPR_op) printed in the terminal. A sketch of such an MLP follows below.
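For orientation, here is a minimal sketch of what such an `MLP` binary classifier can look like in PyTorch; the layer sizes are illustrative, not the actual architecture in `binary_cls.py`:

```python
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),  # raw logit; pair with nn.BCEWithLogitsLoss
        )

    def forward(self, x):
        return self.net(x)

# n_features must match the columns you keep in new_df,
# e.g., 4 XC scores + pred_score + dist + pts = 7.
model = MLP(n_features=7)
```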
- This portion of my work is not included in the NeurIPS workshop paper, but it is described in detail in my thesis. I will link my thesis here once it has been fully reviewed and revised.
- Below are instructions for training with attribution-based losses.
- First, navigate to the `tools` folder.
- Example command: `python train_attr.py --cfg_file cfgs/kitti_models/pointpillar_xai.yaml --attr_loss xc --attr_method Saliency --aggre_method sum --attr_sign positive --box_selection bottom --xc_goal lower --batch_size 4 --epochs 80 --ckpt_save_interval 5 --extra_tag oct29_lower_xc_bottom_epoch_80_lambda_pt2`
- `--cfg_file`: the model configuration file.
- `--attr_loss`: the measure from which the attribution-based loss is computed; can be `xc` or `pap` (pixel attribution prior).
- `--attr_method`: the XAI method used to generate the attributions; can be `Saliency` (backprop) or `IntegratedGradients`.
- `--aggre_method`: how the attributions are aggregated, either by summing (`sum`) or by counting (`cnt`). See the paper for more details.
- `--attr_sign`: `positive` or `negative`; indicates whether XC or PAP is computed from the positive or negative attributions.
- `--xc_goal`: when computing an XC-based loss, this term indicates whether the loss function will reduce (`lower`) or increase (`higher`) the XC scores of the selected predictions.
- For the remaining arguments, please refer to the comments in `tools/train_attr.py`.
- If you want to adjust the lambda values for the attribution-based loss terms, go to `tools/train_utils/train_utils_new_loss.py`; in the `train_one_epoch` function, modify the values of `lambda_xc` and `lambda_pap` as you see fit (see the sketch below).
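A minimal sketch of how the lambda weights plausibly enter the total objective; the actual combination lives in `train_one_epoch`, and names other than `lambda_xc`/`lambda_pap` are illustrative:

```python
import torch

def combine_losses(det_loss: torch.Tensor,
                   xc_loss: torch.Tensor,
                   pap_loss: torch.Tensor,
                   lambda_xc: float = 0.2,
                   lambda_pap: float = 0.0) -> torch.Tensor:
    # Larger lambdas push training harder toward the attribution objective,
    # at the cost of the plain detection loss.
    return det_loss + lambda_xc * xc_loss + lambda_pap * pap_loss
```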
- Also, in your local `Captum` repo, go to `captum/captum/_utils/gradient.py` and, in the `compute_gradients` function, change `grads = torch.autograd.grad(torch.unbind(outputs), inputs)` to `grads = torch.autograd.grad(torch.unbind(outputs), inputs, create_graph=True)`. This enables the computation of second-order gradients, which is crucial for computing the gradients of XC/PAP (derived from attributions, which are themselves gradient values) with respect to the model weights. A toy example follows below.
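A toy example of why `create_graph=True` is needed: the attribution is itself a gradient, and a loss built from it can only be backpropagated to the weights if that first gradient stays in the autograd graph. All tensors below are stand-ins, not the repo's actual code:

```python
import torch

w = torch.randn(3, requires_grad=True)  # stands in for model weights
x = torch.randn(3, requires_grad=True)  # stands in for the input point cloud
out = ((w * x).sum()) ** 2              # stands in for a detection score

# Attribution = gradient of the output w.r.t. the input (a saliency map).
# create_graph=True keeps this gradient inside the autograd graph ...
(attr,) = torch.autograd.grad(out, x, create_graph=True)

# ... so an attribution-based loss can be differentiated w.r.t. the weights.
attr_loss = attr.abs().sum()
attr_loss.backward()
print(w.grad)  # without create_graph=True, the backward() call above fails
```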
The following is the README.md from OpenPCDet:
OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.
It is also the official code release of [PointRCNN], [Part-A^2 net] and [PV-RCNN].
[2020-08-10] NEW: Bug fixed: the provided NuScenes models have been updated to fix the loading bugs. Please re-download them if you need to use the pretrained NuScenes models.
[2020-07-30] NEW: OpenPCDet v0.3.0 is released with the following features:
- The point-based and anchor-free models (`PointRCNN`, `PartA2-Free`) are now supported.
- The NuScenes dataset is supported with strong baseline results (`SECOND-MultiHead (CBGS)` and `PointPillar-MultiHead`).
- Higher efficiency than the last version; supports `PyTorch 1.1~1.5` and `spconv 1.0~1.2` simultaneously.
[2020-07-17] Added simple visualization code and a quick demo to test with custom data.
[2020-06-24] OpenPCDet v0.2.0 is released with pretty new structures to support more models and datasets.
[2020-03-16] OpenPCDet v0.1.0 is released.
Note that we have upgraded PCDet from v0.1 to v0.2 with new structures to support various datasets and models.
OpenPCDet is a general PyTorch-based codebase for 3D object detection from point clouds.
It currently supports multiple state-of-the-art 3D object detection methods with highly refactored code for both one-stage and two-stage 3D detection frameworks.
Based on the OpenPCDet toolbox, we won all three tracks (3D Detection, 3D Tracking, and Domain Adaptation) of the Waymo Open Dataset challenge among all LiDAR-only methods, and the Waymo-related models will be released to OpenPCDet soon.
We are actively updating this repo, and more datasets and models will be supported soon. Contributions are also welcome.
- Data-model separation with a unified point cloud coordinate system for easily extending to custom datasets.
- Unified 3D box definition: `(x, y, z, dx, dy, dz, heading)`.
- Flexible and clear model structure to easily support various 3D detection models:
- Support various models within one framework:
- Support both one-stage and two-stage 3D object detection frameworks
- Support distributed training & testing with multiple GPUs and multiple machines
- Support multiple heads on different scales to detect different classes
- Support stacked version set abstraction to encode varying numbers of points in different scenes
- Support Adaptive Training Sample Selection (ATSS) for target assignment
- Support RoI-aware point cloud pooling & RoI-grid point cloud pooling
- Support GPU version 3D IoU calculation and rotated NMS
Selected supported methods are shown in the table below. The results are the 3D detection performance at moderate difficulty on the val set of the KITTI dataset.
- All models are trained with 8 GTX 1080Ti GPUs and are available for download.
- The training time is measured with 8 TITAN XP GPUs and PyTorch 1.5.
| | training time | Car | Pedestrian | Cyclist | download |
|---|---|---|---|---|---|
| PointPillar | ~1.2 hours | 77.28 | 52.29 | 62.68 | model-18M |
| SECOND | ~1.7 hours | 78.62 | 52.98 | 67.15 | model-20M |
| PointRCNN | ~3 hours | 78.70 | 54.41 | 72.11 | model-16M |
| PointRCNN-IoU | ~3 hours | 78.75 | 58.32 | 71.34 | model-16M |
| Part-A^2-Free | ~3.8 hours | 78.72 | 65.99 | 74.29 | model-226M |
| Part-A^2-Anchor | ~4.3 hours | 79.40 | 60.05 | 69.90 | model-244M |
| PV-RCNN | ~5 hours | 83.61 | 57.90 | 70.47 | model-50M |
All models are trained with 8 GTX 1080Ti GPUs and are available for download.
| | mATE | mASE | mAOE | mAVE | mAAE | mAP | NDS | download |
|---|---|---|---|---|---|---|---|---|
| PointPillar-MultiHead | 33.87 | 26.00 | 32.07 | 28.74 | 20.15 | 44.63 | 58.23 | model-23M |
| SECOND-MultiHead (CBGS) | 31.15 | 25.51 | 26.64 | 26.26 | 20.46 | 50.59 | 62.29 | model-35M |
More datasets are on the way.
Please refer to INSTALL.md for the installation of OpenPCDet.
Please refer to DEMO.md for a quick demo to test with a pretrained model and visualize the predicted results on your custom data or the original KITTI data.
Please refer to GETTING_STARTED.md to learn more usage about this project.
OpenPCDet is released under the Apache 2.0 license.
OpenPCDet is an open source project for LiDAR-based 3D scene perception that supports multiple
LiDAR-based perception models, as shown above. Some parts of PCDet are adapted from the official code releases of the above supported methods.
We would like to thank the authors for their proposed methods and official implementations.
We hope that this repo could serve as a strong and flexible codebase to benefit the research community by speeding up the process of reimplementing previous works and/or developing new methods.
If you find this project useful in your research, please consider citing:
@inproceedings{shi2020pv,
title={Pv-rcnn: Point-voxel feature set abstraction for 3d object detection},
author={Shi, Shaoshuai and Guo, Chaoxu and Jiang, Li and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10529--10538},
year={2020}
}
@article{shi2020points,
title={From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network},
author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2020},
publisher={IEEE}
}
@inproceedings{shi2019pointrcnn,
title={PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud},
author={Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={770--779},
year={2019}
}
This project is currently maintained by Shaoshuai Shi (@sshaoshuai) and Chaoxu Guo (@Gus-Guo).



