
Prateek Tandon
Papers by Prateek Tandon
Gamma-ray spectrometry data is typically presumed to be generated by a Poisson process, yet Gaussian-based estimators are commonly used to approximate the truly Poisson-distributed data. This approximation generally suffices, but performance can degrade when the photon counts reaching a sensor are low for any signal component. Low-count signal and/or noise components arise in a variety of real-world scenarios: photon counts from the source may be low because the source is very weak or observable only from large standoff distances, and the photon count rate from both source and background may be low if small sensors (with limited surface area) are used or if measurement time is limited.
Our study experiments with augmenting established anomaly-detection and matched-filter signal component estimators with Poisson-based models. We apply estimators such as Poisson Principal Component Analysis (Poisson PCA) and the Zero-Inflated Poisson (ZIP) model to the source detection problem and benchmark them against popular estimators in the literature. Finally, we apply Bayesian Aggregation to the Poisson-based estimators to aggregate evidence across multiple spatially correlated sensor observations. Our results indicate that these techniques can aid threat detection when photon counts are low in the signal and/or background noise components.
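The Poisson PCA and ZIP estimators themselves are not reproduced here, but the core reason Poisson modeling helps at low counts can be illustrated with a simple Poisson log-likelihood-ratio score. This is a minimal illustrative stand-in (not one of the study's estimators), with made-up background rates and source template:

```python
import math

def poisson_loglik(counts, rates):
    # Log-likelihood of observed per-channel counts under Poisson rates.
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for k, lam in zip(counts, rates))

def llr_score(counts, bkg, sig):
    # Poisson log-likelihood ratio: background-plus-source vs background-only.
    h1 = poisson_loglik(counts, [b + s for b, s in zip(bkg, sig)])
    h0 = poisson_loglik(counts, bkg)
    return h1 - h0

bkg = [2.0, 1.5, 0.5, 0.2]   # assumed expected background counts per channel
sig = [0.0, 0.5, 1.0, 0.4]   # hypothetical source template (expected counts)
obs_src = [2, 2, 2, 1]       # a measurement with a faint source present
obs_bkg = [2, 1, 0, 0]       # a background-only measurement
# the faint-source measurement scores higher than the background-only one
print(llr_score(obs_src, bkg, sig), llr_score(obs_bkg, bkg, sig))
```

Note that the Poisson likelihood handles zero-count channels naturally, whereas a Gaussian approximation's variance estimate becomes unreliable at counts this low.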
Our Bayesian Aggregation (BA) framework uses simulation and non-parametric statistical techniques to learn distributions of expected Signal-to-Noise Ratio (SNR) scores for empirical background radiation measurements, either with or without simulated injection of threatening point sources. Information from multiple measurements is combined via Bayesian update rules that aggregate evidence by modeling the measurements as conditionally independent given the underlying hypothesis about the source. Using an optimized algebraic formulation and efficient data structures, BA is far more scalable than naive posterior distribution calculation.
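The conditional-independence update described above can be sketched as follows. This is a minimal illustration, not the BA framework's implementation: Gaussian densities stand in for the learned non-parametric SNR-score distributions, and the prior, means, and scores are made up:

```python
import math

def gauss(mu, sigma):
    # Gaussian density, a stand-in for a learned SNR-score distribution.
    return lambda x: (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                      / (sigma * math.sqrt(2 * math.pi)))

def aggregate_posterior(prior_source, scores, lik_source, lik_bkg):
    # Measurements are conditionally independent given the hypothesis,
    # so per-measurement log-likelihoods simply accumulate; normalize
    # in log space for numerical stability.
    log_src = math.log(prior_source)
    log_bkg = math.log(1.0 - prior_source)
    for s in scores:
        log_src += math.log(lik_source(s))
        log_bkg += math.log(lik_bkg(s))
    m = max(log_src, log_bkg)
    p_src, p_bkg = math.exp(log_src - m), math.exp(log_bkg - m)
    return p_src / (p_src + p_bkg)

lik_source = gauss(2.0, 1.0)   # SNR scores with an injected source
lik_bkg = gauss(0.0, 1.0)      # SNR scores under background only
post = aggregate_posterior(0.01, [1.8, 2.3, 1.2], lik_source, lik_bkg)
```

Because only running log-likelihood sums are kept per hypothesis, each new measurement costs a constant-time update rather than a recomputation over the full measurement history.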
Our framework supports modeling of parameters such as source intensity and source type, enabling detection and characterization of faint sources of various types. We compare different methods of incorporating this information into the BA hypothesis space and benchmark our methods against existing techniques for aggregating multiple observations to detect sources.
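Extending the hypothesis space beyond source-present/source-absent can be sketched by placing a prior over a discrete set of hypotheses, such as background-only plus several source-intensity bins. The hypothesis names, priors, and score distributions below are hypothetical, chosen only to illustrate the mechanics:

```python
import math

def gauss(mu, sigma=1.0):
    # Gaussian density, a stand-in for a learned SNR-score distribution.
    return lambda x: (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
                      / (sigma * math.sqrt(2 * math.pi)))

def multi_hypothesis_posterior(priors, likelihoods, scores):
    # Posterior over a discrete hypothesis space, assuming measurements
    # are conditionally independent given the hypothesis.
    log_post = {h: math.log(p) for h, p in priors.items()}
    for s in scores:
        for h in log_post:
            log_post[h] += math.log(likelihoods[h](s))
    m = max(log_post.values())          # log-sum-exp for stable normalization
    w = {h: math.exp(v - m) for h, v in log_post.items()}
    z = sum(w.values())
    return {h: v / z for h, v in w.items()}

priors = {"background": 0.98, "weak": 0.01, "strong": 0.01}
likelihoods = {"background": gauss(0.0), "weak": gauss(1.0), "strong": gauss(3.0)}
scores = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.1, 0.9]
post = multi_hypothesis_posterior(priors, likelihoods, scores)
```

With enough consistent measurements, the evidence overcomes the small prior on the source hypotheses and the "weak" intensity bin dominates the posterior, characterizing the source as well as detecting it.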