With more event datasets being released online, safeguarding them against unauthorized usage has become a serious concern for data owners. Unlearnable Examples have been proposed to prevent the unauthorized exploitation of image datasets. However, it remains unclear how to create unlearnable asynchronous event streams to prevent the misuse of event data. In this work, we propose the first method for generating unlearnable event streams, preventing unauthorized training on event datasets. A new form of asynchronous event error-minimizing noise is proposed to perturb event streams, tricking the unauthorized model into learning the embedded noise instead of realistic features. To be compatible with the sparse nature of events, a projection strategy is presented to sparsify the noise, yielding our unlearnable event streams (UEvs). Extensive experiments demonstrate that our method effectively protects event data from unauthorized exploitation while preserving its utility for legitimate use. We hope our UEvs contribute to the advancement of secure and trustworthy event dataset sharing.
Framework of our UEvs. UEvs converts an event stream into a C-channel event stack (t = C × ∆t) and then employs a surrogate model f′ to generate the event error-minimizing noise (E^2MN). Subsequently, the E^2MN is projected into {−0.5, 0, 0.5} and integrated with the target event stack to produce the unlearnable event stack. The final unlearnable event stream is reconstructed from the unlearnable event stack and the original event stream via the proposed retrieval strategy.

Coming soon!
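As a rough illustration of the pipeline above, the sketch below shows one possible way to bin an asynchronous event stream into a C-channel event stack and to project continuous error-minimizing noise onto {−0.5, 0, 0.5}. The (x, y, t, p) event layout, the function names, and the thresholding rule are assumptions made for illustration only; they are not the official implementation.

```python
import numpy as np
import torch

def events_to_stack(events, H, W, C, dt):
    """Bin an asynchronous event stream into a C-channel event stack.

    `events` is assumed to be an (N, 4) array of (x, y, t, p) rows with
    polarity p in {-1, +1}; each channel accumulates signed polarities over
    a window of length dt, so the stack spans t = C * dt in total.
    """
    stack = np.zeros((C, H, W), dtype=np.float32)
    t0 = events[:, 2].min()
    for x, y, t, p in events:
        c = min(int((t - t0) // dt), C - 1)   # channel index for this timestamp
        stack[c, int(y), int(x)] += p         # accumulate signed polarity
    return torch.from_numpy(stack)

def project_noise(delta, cutoff=0.25):
    """Project continuous noise onto {-0.5, 0, 0.5}.

    A simple thresholding rule (0.5 * sign where |delta| exceeds `cutoff`,
    0 elsewhere) is used here as a stand-in for the paper's projection strategy.
    """
    return torch.where(delta.abs() > cutoff,
                       0.5 * delta.sign(),
                       torch.zeros_like(delta))

# Hypothetical usage: add the projected noise to the clean event stack.
# stack = events_to_stack(evts, H=260, W=346, C=8, dt=5e3)
# unlearnable_stack = stack + project_noise(delta)
```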
If you use our UEvs in your work, please cite:
@inproceedings{wang2025uevs,
  title={Asynchronous Event Error-Minimizing Noise for Safeguarding Event Dataset},
  author={Wang, Ruofei and Duan, Peiqi and Shi, Boxin and Wan, Renjie},
  booktitle={Proc. ICCV},
  year={2025}
}
