Explanation Concentration (XC)

Explanation Concentration (XC) measures the concentration of attributions within a predicted object.
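
As a rough, hypothetical sketch of the idea (not the exact implementation in this repo), XC can be read as the fraction of attribution mass that falls inside the predicted box; the attribution map, box mask, and sum/cnt options below are assumptions for illustration only.

import numpy as np

def xc_sketch(attr_map, box_mask, aggre_method="sum"):
    """Toy illustration of Explanation Concentration (XC).
    attr_map: array of attribution values of one sign (e.g. positive attributions).
    box_mask: boolean array of the same shape, True inside the predicted box.
    aggre_method: "sum" aggregates attribution magnitudes, "cnt" counts attributed cells."""
    attr = np.abs(np.asarray(attr_map, dtype=float))
    if aggre_method == "cnt":
        attr = (attr > 0).astype(float)
    total = attr.sum()
    return float(attr[box_mask].sum() / total) if total > 0 else 0.0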

Before going through the instructions on running the XC experiments, make sure you are familiar with OpenPCDet's original repo first. At least learn how to train and test a model, know where the resulting model and evaluation results are stored, and know where the model config files are.

PointPillars models used for experiments

Trained on KITTI

kitti_model1

kitti_model2

kitti_model3

Trained on Waymo

waymo_model

Obtaining box-wise features from a model

You may skip this section if you just want to use the provided csv files to reproduce the evaluation metrics presented in the paper.

Obtaining XC values and top class score on a validation set

  • If you haven't done so, create the following path in your local repo: output/kitti_models/pointpillar_xai/.
  • Go to tools/attr_experiment.py.
  • Following the instructions in the comments, set the method, attr_shown, and aggre_method variables.
  • Depending on how you set these variables, the XC values will come from Integrated Gradients (IG) or backprop, from positive or negative attributions, aggregated by summing or counting.
  • Specify the model_cfg_file variable; this is the path where you stored the config file for the model.
  • Specify the model_ckpt variable; this is the path where you stored the model checkpoint file you downloaded.
  • If you do not wish to compute XC for all predicted boxes in the dataset, set check_all to False and set num_batches to the number of frames you want to run the script on. Otherwise, leave them at their default values.
  • cd tools
  • python attr_experiment.py
  • You should then see a new csv file created in output/kitti_models/pointpillar_xai/default.
  • It contains the frame id (batch), tp/fp label (tp/fp), object class label (pred_label), top class score (pred_score), and the XC value (xc) for each predicted box.
  • The process for obtaining values from a model trained on Waymo is the same, except that the csv file will be placed in output/waymo_models/<model_config_name>/default.
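
For a quick sanity check, the resulting file can be inspected with pandas; the file name below is hypothetical (use whichever csv appears under output/kitti_models/pointpillar_xai/default), and the column names follow the description above.

import pandas as pd

# Hypothetical file name; substitute the csv actually written by the script
df = pd.read_csv("output/kitti_models/pointpillar_xai/default/xc_values.csv")
print(df[["batch", "tp/fp", "pred_label", "pred_score", "xc"]].head())
print(df.groupby("tp/fp")["xc"].describe())  # compare XC between TP and FP boxes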

Obtaining number of points and distance to sensor on a validation set

  • Go to tools/get_pts_dist.py.
  • Specify the model_ckpt variable; this is the path where you stored the model checkpoint file you downloaded.
  • Specify the model_cfg_file variable; this is the path where you stored the config file for the model.
  • cd tools
  • python get_pts_dist.py
  • You should then see a new csv file created in output/kitti_models/pointpillar_xai/default.
  • It contains the frame id (batch), tp/fp label (tp/fp), object class label (pred_label), top class score (pred_score), the distance to sensor (dist), and the number of points (pts).
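
For intuition only, a minimal sketch of how these two quantities could be computed for one predicted box is shown below; it assumes the distance is the Euclidean distance from the sensor origin to the box center and ignores the heading when counting points, so it is an approximation rather than the repo's implementation.

import numpy as np

def dist_and_pts(points, box):
    """points: (N, 3) LiDAR points; box: (x, y, z, dx, dy, dz, heading).
    Returns an approximate (distance to sensor, number of points in box) pair;
    the heading is ignored, so the point count is only a rough estimate."""
    box = np.asarray(box, dtype=float)
    center, dims = box[:3], box[3:6]
    dist = float(np.linalg.norm(center))  # sensor assumed at the origin
    inside = np.all(np.abs(np.asarray(points) - center) <= dims / 2.0, axis=1)
    return dist, int(inside.sum())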

Separating the TP and FP instances

  • If you haven't done so, create a folder named XAI_results within the tools folder.
  • In the XAI_results folder, create a subdirectory and put a previously generated csv file there (the one containing the top class score, class label, xc, number of points, etc. for the predicted boxes).
  • Go to tools/split_tp_fp.py.
  • Specify the source_dir (directory where you stored the csv file) and file_name (csv file name) variables.
  • Redefine the field_names and data_dict variables to match the field names in the csv file.
  • python split_tp_fp.py
  • You should now have the TP and FP data split into two csv files.
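
The same split can be sketched in a few lines of pandas; the input/output file names here are hypothetical, and the tp/fp column is assumed to mark true positives with the string "tp" as described above.

import pandas as pd

df = pd.read_csv("XAI_results/my_run/all_boxes.csv")   # hypothetical input name
is_tp = df["tp/fp"].astype(str).str.lower().eq("tp")   # assumes TP boxes are labeled "tp"
df[is_tp].to_csv("XAI_results/my_run/tp_boxes.csv", index=False)
df[~is_tp].to_csv("XAI_results/my_run/fp_boxes.csv", index=False)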

CSV files with box-wise features

The csv files containing the predicted label, top class score, 4 XC scores from backprop, number of points, and distance to sensor for each predicted box generated by the PointPillars models trained on KITTI are available here:

kitti_model1_csv

kitti_model2_csv

kitti_model3_csv

  • At each link, you will see that the data for TP and FP samples are separated.
  • If you haven't done so, create a folder named XAI_results within the tools folder.
  • In the XAI_results folder, create a subdirectory and put the files from one of the above links in it.

The XC scores for IG and all XC scores generated from Waymo are scattered across multiple files; we will make them available once they are better organized.

Generating histograms and CDF plots for TP and FP XC values

  • Go to tools/xc_hist.py.
  • Change XC_term to one of the 4 XC scores; make sure it matches the name in the XC file header.
  • python xc_hist.py --XC_path where\you\saved\theCSVfiles\containing\tp_and_fp\data --pick_class <class_id>
  • The class id can be 0 (car), 1 (pedestrian), or 2 (cyclist). For Waymo, 0 represents the vehicle class.
  • The histogram for the desired class will then be generated in a new folder in XAI_results.
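
For reference, the kind of histogram and CDF plot produced here can be sketched with matplotlib as follows; the file names and the xc column are assumptions carried over from the earlier steps.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

tp = pd.read_csv("XAI_results/my_run/tp_boxes.csv")["xc"].to_numpy()  # hypothetical file names
fp = pd.read_csv("XAI_results/my_run/fp_boxes.csv")["xc"].to_numpy()

fig, (ax_hist, ax_cdf) = plt.subplots(1, 2, figsize=(10, 4))
ax_hist.hist(tp, bins=50, alpha=0.5, label="TP")
ax_hist.hist(fp, bins=50, alpha=0.5, label="FP")
ax_hist.set_xlabel("XC"); ax_hist.set_ylabel("count"); ax_hist.legend()

for vals, name in [(tp, "TP"), (fp, "FP")]:
    xs = np.sort(vals)
    ax_cdf.plot(xs, np.arange(1, len(xs) + 1) / len(xs), label=name)  # empirical CDF
ax_cdf.set_xlabel("XC"); ax_cdf.set_ylabel("CDF"); ax_cdf.legend()
fig.savefig("xc_hist_and_cdf.png")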

Evaluate box-wise features for TP vs. FP classification

  • If you haven't done so, create a folder named XAI_results within the tools folder.
  • In the XAI_results folder, create a subdirectory.
  • Go to tools/xc_eval_simple.py.
  • Specify dist_n_pts, XC_only, and skip_xc as needed, following the instructions in the comments.
  • Specify dataset_name as "KITTI" or "WAYMO".
  • python xc_eval_simple.py --XC_path where\you\saved\theCSVfiles\containing\tp_and_fp\data
  • You should see a new folder created within the XAI_results directory; it will contain the AUROC, AUPR, and AUPR_op of the specified features evaluated on each object class.
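
The reported metrics can be reproduced for a single feature with scikit-learn; the sketch below treats TP boxes as the positive class for AUROC/AUPR and, as one plausible reading of AUPR_op, FP boxes as the positive class for the third metric, with file and column names assumed as before.

import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

tp = pd.read_csv("XAI_results/my_run/tp_boxes.csv")  # hypothetical file names
fp = pd.read_csv("XAI_results/my_run/fp_boxes.csv")

scores = np.concatenate([tp["xc"].to_numpy(), fp["xc"].to_numpy()])  # one feature, here xc
labels = np.concatenate([np.ones(len(tp)), np.zeros(len(fp))])       # 1 = TP, 0 = FP

print("AUROC  ", roc_auc_score(labels, scores))
print("AUPR   ", average_precision_score(labels, scores))            # TP as positive class
print("AUPR_op", average_precision_score(1 - labels, -scores))       # FP as positive class (assumed meaning)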

Combining top class score with XC scores

  • Go to binary_cls.py.
  • Specify the interested_class.
  • Go to the MLP class and modify the number of input features if necessary.
  • Change n_features to match the number of features you plan to use.
  • For the new_df variable, specify the names of the features you plan to use.
  • python binary_cls.py --XQ_path where\you\saved\theCSVfiles\containing\tp_and_fp\data
  • You should see results (accuracy, AUROC, AUPR, AUPR_op) printed in the terminal.
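
As a rough sketch of the kind of classifier binary_cls.py trains (the class layout, hidden size, and feature choice below are assumptions, not the repo's exact code):

import torch
import torch.nn as nn

n_features = 2  # e.g. pred_score plus one XC score; match this to the features you actually use

class MLP(nn.Module):
    """Tiny binary classifier: box-wise features in, TP-vs-FP logit out."""
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

model = MLP(n_features)
x = torch.tensor([[0.91, 0.72]])  # [pred_score, xc] for one box (made-up values)
y = torch.tensor([[1.0]])         # 1 = TP, 0 = FP
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()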

Training a model with attribution-based loss terms

  • This portion of my work is not included in the NeurIPS workshop paper, but it is described in detail in my thesis. I will link my thesis here once it has been fully reviewed and revised.
  • Below are instructions for training with attribution-based losses.
  • First, navigate to the tools folder.
  • Example command: python train_attr.py --cfg_file cfgs/kitti_models/pointpillar_xai.yaml --attr_loss xc --attr_method Saliency --aggre_method sum --attr_sign positive --box_selection bottom --xc_goal lower --batch_size 4 --epochs 80 --ckpt_save_interval 5 --extra_tag oct29_lower_xc_bottom_epoch_80_lambda_pt2
  • --cfg_file: Model configuration file.
  • --attr_loss: The measure from which we compute the attribution-based loss; can be xc or pap (pixel attribution prior).
  • --attr_method: The specific XAI method we use to generate the attributions, can be Saliency (backprop) or IntegratedGradients.
  • --aggre_method: How we aggregate the attribution, either by summing (sum) or by counting (cnt). See the paper for more details.
  • --attr_sign: positive or negative; indicates whether we are computing XC or PAP from the positive or negative attributions.
  • --xc_goal: If we are computing an XC-based loss, this term indicates whether the loss function will reduce (lower) or increase (higher) the XC scores of the selected predictions.
  • For the remaining arguments, please refer to the comments in tools\train_attr.py.
  • If you want to adjust the lambda values for the attribution-based loss terms, go to tools/train_utils/train_utils_new_loss.py; in the train_one_epoch function, modify the values of lambda_xc and lambda_pap as you see fit.
  • Also, in your local Captum repo, go to captum/captum/_utils/gradient.py and, in the compute_gradients function, change grads = torch.autograd.grad(torch.unbind(outputs), inputs) to grads = torch.autograd.grad(torch.unbind(outputs), inputs, create_graph=True). This enables second-order gradients, which are required to compute gradients of XC/PAP (themselves derived from attributions, i.e., gradient values) with respect to the model weights.
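
For clarity, the Captum edit amounts to a single-line change inside compute_gradients; everything around the changed line is omitted here.

# captum/captum/_utils/gradient.py, inside compute_gradients(...)
# before:
#   grads = torch.autograd.grad(torch.unbind(outputs), inputs)
# after (keeps the graph so gradients of XC/PAP w.r.t. model weights can be taken):
grads = torch.autograd.grad(torch.unbind(outputs), inputs, create_graph=True)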

The following is the README.md from OpenPCDet:

OpenPCDet

OpenPCDet is a clear, simple, self-contained open source project for LiDAR-based 3D object detection.

It is also the official code release of [PointRCNN], [Part-A^2 net] and [PV-RCNN].

Overview

Changelog

[2020-08-10] NEW: Bug fixed: the provided NuScenes models have been updated to fix loading bugs. Please redownload them if you need to use the pretrained NuScenes models.

[2020-07-30] NEW: OpenPCDet v0.3.0 is released with several new features.

[2020-07-17] Add simple visualization codes and a quick demo to test with custom data.

[2020-06-24] OpenPCDet v0.2.0 is released with a new structure to support more models and datasets.

[2020-03-16] OpenPCDet v0.1.0 is released.

Introduction

What does OpenPCDet toolbox do?

Note that we have upgraded PCDet from v0.1 to v0.2 with a new structure to support various datasets and models.

OpenPCDet is a general PyTorch-based codebase for 3D object detection from point clouds. It currently supports multiple state-of-the-art 3D detection methods with highly refactored code for both one-stage and two-stage 3D detection frameworks.

Based on the OpenPCDet toolbox, we won the Waymo Open Dataset Challenge in three tracks (3D Detection, 3D Tracking, and Domain Adaptation) among all LiDAR-only methods, and the Waymo-related models will be released to OpenPCDet soon.

We are actively updating this repo, and more datasets and models will be supported soon. Contributions are also welcome.

OpenPCDet design pattern

  • Data-Model separation with a unified point cloud coordinate system for easily extending to custom datasets.

  • Unified 3D box definition: (x, y, z, dx, dy, dz, heading) (see the sketch after this list).

  • Flexible and clear model structure to easily support various 3D detection models.

  • Support for various models within one framework.
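
As a small illustration of the unified box definition mentioned above (a sketch, not code from the repo):

import numpy as np

# One box in the unified format: (x, y, z) center, (dx, dy, dz) sizes, heading = yaw angle in radians
box = np.array([12.3, -4.5, -0.8, 3.9, 1.6, 1.5, 0.3])
boxes = box[np.newaxis, :]  # detections are typically stacked into an (N, 7) array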

Currently Supported Features

  • Support both one-stage and two-stage 3D object detection frameworks
  • Support distributed training & testing with multiple GPUs and multiple machines
  • Support multiple heads on different scales to detect different classes
  • Support stacked version set abstraction to encode various number of points in different scenes
  • Support Adaptive Training Sample Selection (ATSS) for target assignment
  • Support RoI-aware point cloud pooling & RoI-grid point cloud pooling
  • Support GPU version 3D IoU calculation and rotated NMS

Model Zoo

KITTI 3D Object Detection Baselines

Selected supported methods are shown in the table below. The results are the 3D detection performance at moderate difficulty on the val set of the KITTI dataset.

  • All models are trained with 8 GTX 1080Ti GPUs and are available for download.
  • The training time is measured with 8 TITAN XP GPUs and PyTorch 1.5.
Method | Training time | Car | Pedestrian | Cyclist | Download
PointPillar | ~1.2 hours | 77.28 | 52.29 | 62.68 | model-18M
SECOND | ~1.7 hours | 78.62 | 52.98 | 67.15 | model-20M
PointRCNN | ~3 hours | 78.70 | 54.41 | 72.11 | model-16M
PointRCNN-IoU | ~3 hours | 78.75 | 58.32 | 71.34 | model-16M
Part-A^2-Free | ~3.8 hours | 78.72 | 65.99 | 74.29 | model-226M
Part-A^2-Anchor | ~4.3 hours | 79.40 | 60.05 | 69.90 | model-244M
PV-RCNN | ~5 hours | 83.61 | 57.90 | 70.47 | model-50M

NuScenes 3D Object Detection Baselines

All models are trained with 8 GTX 1080Ti GPUs and are available for download.

Method | mATE | mASE | mAOE | mAVE | mAAE | mAP | NDS | Download
PointPillar-MultiHead | 33.87 | 26.00 | 32.07 | 28.74 | 20.15 | 44.63 | 58.23 | model-23M
SECOND-MultiHead (CBGS) | 31.15 | 25.51 | 26.64 | 26.26 | 20.46 | 50.59 | 62.29 | model-35M

Other datasets

More datasets are on the way.

Installation

Please refer to INSTALL.md for the installation of OpenPCDet.

Quick Demo

Please refer to DEMO.md for a quick demo to test with a pretrained model and visualize the predicted results on your custom data or the original KITTI data.

Getting Started

Please refer to GETTING_STARTED.md to learn more usage about this project.

License

OpenPCDet is released under the Apache 2.0 license.

Acknowledgement

OpenPCDet is an open source project for LiDAR-based 3D scene perception that supports multiple LiDAR-based perception models, as shown above. Some parts of PCDet are learned from the officially released code of the supported methods listed above. We would like to thank the authors for their proposed methods and official implementations.

We hope that this repo could serve as a strong and flexible codebase to benefit the research community by speeding up the process of reimplementing previous works and/or developing new methods.

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{shi2020pv,
  title={Pv-rcnn: Point-voxel feature set abstraction for 3d object detection},
  author={Shi, Shaoshuai and Guo, Chaoxu and Jiang, Li and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10529--10538},
  year={2020}
}


@article{shi2020points,
  title={From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network},
  author={Shi, Shaoshuai and Wang, Zhe and Shi, Jianping and Wang, Xiaogang and Li, Hongsheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  publisher={IEEE}
}

@inproceedings{shi2019pointrcnn,
  title={PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud},
  author={Shi, Shaoshuai and Wang, Xiaogang and Li, Hongsheng},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={770--779},
  year={2019}
}

Contact

This project is currently maintained by Shaoshuai Shi (@sshaoshuai) and Chaoxu Guo (@Gus-Guo).
