Special Session 46: Theory, Numerical Methods, and Applications of Partial Differential Equations

Robust Image Denoising through Out-of-Distribution Typical Set Sampling
Yao Li
Harbin Institute of Technology
People's Rep of China
Co-Author(s):    Jie Ning, Jiebao Sun, Shengzhu Shi, Zhichang Guo, and Yao Li
Abstract:
Deep learning-based image denoising models demonstrate remarkable performance, but the lack of robustness analysis for these models remains a significant concern. A major issue is that these models are susceptible to adversarial attacks, where small, carefully crafted perturbations to the input can cause them to fail. Surprisingly, perturbations crafted for one model transfer easily across a wide range of models, including CNNs, Transformers, unfolding models, and plug-and-play models, causing those models to fail as well. Such high adversarial transferability is not observed in classification models. We analyze the possible underlying reasons for this high transferability through a series of hypotheses and validation experiments. By characterizing the manifolds of Gaussian noise and adversarial perturbations using the concepts of the typical set and the asymptotic equipartition property, we prove that adversarial samples deviate slightly from the typical set of the original input distribution, causing the models to fail. Based on these insights, we propose a novel adversarial defense method: the Out-of-Distribution Typical Set Sampling Training strategy (TS). TS not only significantly enhances the model's robustness but also marginally improves denoising performance compared to the original model.
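
A minimal sketch of the typical-set notion the abstract invokes (standard information theory, not the authors' construction; it assumes i.i.d. zero-mean Gaussian noise with variance $\sigma^2$, and the symbols $n$, $h$, and $\delta$ are introduced here only for illustration): for $\varepsilon = (\varepsilon_1,\dots,\varepsilon_n)$ with $\varepsilon_i \sim \mathcal{N}(0,\sigma^2)$, the asymptotic equipartition property gives
\[
  -\tfrac{1}{n}\log p(\varepsilon) \;\longrightarrow\; h \;=\; \tfrac{1}{2}\log\!\bigl(2\pi e\,\sigma^2\bigr),
\]
so the typical set
\[
  A_\delta^{(n)} \;=\; \Bigl\{\, \varepsilon : \bigl|-\tfrac{1}{n}\log p(\varepsilon) - h\bigr| \le \delta \,\Bigr\}
  \;=\; \Bigl\{\, \varepsilon : \bigl|\tfrac{1}{n}\|\varepsilon\|_2^2 - \sigma^2\bigr| \le 2\sigma^2\delta \,\Bigr\}
\]
is a thin spherical shell of radius roughly $\sigma\sqrt{n}$ that carries probability close to 1. Gaussian noise therefore concentrates on this shell, while a perturbation whose energy or per-pixel statistics deviate even slightly falls outside it; this is the sense in which adversarial samples can deviate slightly from the typical set of the input distribution even though they remain visually small.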