Special Session 78: Special Session on Mathematics of Data Science and Applications

Stochastic Gradient Methods: Bias, Stability and Generalization
Yunwen Lei
The University of Hong Kong
People's Rep of China
Co-Author(s):    Shuang Zeng
Abstract:
Recent developments in stochastic optimization often employ biased gradient estimators to improve robustness, communication efficiency, or computational speed. Representative biased stochastic gradient methods (BSGMs) include zeroth-order stochastic gradient descent (SGD), Clipped-SGD, and SGD with delayed gradients. The practical success of BSGMs has motivated a large body of convergence analysis to explain their impressive training behaviour. By comparison, there is far less work on their generalization analysis, which is a central topic in modern machine learning. In this paper, we present the first framework to study the stability and generalization of BSGMs for convex and smooth problems. We apply our general result to develop the first stability bound for zeroth-order SGD with reasonable step size sequences, and the first stability bound for Clipped-SGD. While our stability analysis is developed for general BSGMs, the resulting stability bounds for both zeroth-order SGD and Clipped-SGD match those of SGD under appropriate smoothing/clipping parameters. Furthermore, our stability analysis incorporates the training errors into the stability bounds and therefore can benefit from low-noise conditions.
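
For readers unfamiliar with the estimators named above, the following minimal Python sketch (not taken from the paper) illustrates two of them, a two-point zeroth-order gradient estimate and a clipped stochastic gradient, plugged into a plain SGD loop on a synthetic least-squares problem. The problem data, step size eta, smoothing parameter mu, and clipping threshold tau are illustrative placeholder choices, not values from the work described in the abstract.

```python
# Illustrative sketch only: standard forms of two biased gradient estimators
# (two-point zeroth-order estimate and gradient clipping) inside plain SGD.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: f_i(w) = 0.5 * (x_i^T w - y_i)^2
n, d = 200, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def stochastic_grad(w, i):
    """Unbiased per-example gradient of 0.5 * (x_i^T w - y_i)^2."""
    return (X[i] @ w - y[i]) * X[i]

def zeroth_order_grad(w, i, mu=1e-3):
    """Two-point zeroth-order estimate: biased, uses only function values."""
    u = rng.normal(size=d)
    f = lambda v: 0.5 * (X[i] @ v - y[i]) ** 2
    return (f(w + mu * u) - f(w - mu * u)) / (2 * mu) * u

def clipped_grad(w, i, tau=1.0):
    """Clipped-SGD estimate: rescale the gradient to norm at most tau."""
    g = stochastic_grad(w, i)
    norm = np.linalg.norm(g)
    return g if norm <= tau else (tau / norm) * g

def run_sgd(grad_fn, steps=2000, eta=0.05):
    """Plain SGD loop driven by the supplied (possibly biased) estimator."""
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        w -= eta * grad_fn(w, i)
    return w

for name, fn in [("zeroth-order", zeroth_order_grad), ("clipped", clipped_grad)]:
    w_hat = run_sgd(fn)
    print(f"{name:>12} SGD: ||w - w*|| = {np.linalg.norm(w_hat - w_star):.3f}")
```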