A stochastic use of the Kurdyka-Łojasiewicz property: Investigation of optimization algorithms' behaviour in a non-convex differentiable framework
Abstract
Asymptotic analysis of generic stochastic algorithms often relies on descent conditions. In a convex setting, technical shortcuts are available to establish asymptotic convergence guarantees for the associated scheme. In a non-convex setting, however, obtaining similar guarantees is usually more involved and relies on the Kurdyka-Łojasiewicz (KŁ) property. While this tool has become popular in deterministic optimization, it is much less widespread in the stochastic context, and the few works that exploit it rely essentially on trajectory-by-trajectory approaches. In this paper, we propose a new framework for using the KŁ property in a non-convex stochastic setting, based on conditioning theory. We show that this framework allows deeper asymptotic investigation of stochastic schemes satisfying generic descent conditions. We further show that our methodology can be used to prove convergence of generic stochastic gradient descent (SGD) schemes, and that it unifies the conditions investigated in several earlier works.
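To fix ideas, the sketch below states the KŁ inequality in the form commonly used for differentiable functions, together with one typical conditional descent inequality of the kind the abstract alludes to. Both are standard formulations from the literature, given here purely for illustration; they are not necessarily the paper's exact hypotheses.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Standard KL inequality for a differentiable $f$ at a critical point
% $\bar{x}$ (a common formulation from the deterministic literature;
% the paper's exact assumptions may differ):
For $f \colon \mathbb{R}^{n} \to \mathbb{R}$ differentiable, $f$ has the
KL property at $\bar{x}$ if there exist $\eta > 0$, a neighbourhood $U$
of $\bar{x}$, and a concave $\varphi \in \mathcal{C}^{0}\bigl([0,\eta)\bigr)$
with $\varphi(0) = 0$ and $\varphi' > 0$ on $(0,\eta)$ such that
\[
  \varphi'\bigl(f(x) - f(\bar{x})\bigr)\,\bigl\|\nabla f(x)\bigr\|
  \;\ge\; 1
  \qquad \text{whenever } x \in U
  \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \eta .
\]

% An illustrative stochastic descent condition for iterates
% $(x_k)_{k \ge 0}$ adapted to a filtration $(\mathcal{F}_k)_{k \ge 0}$
% (one typical form; not necessarily the paper's exact condition):
\[
  \mathbb{E}\bigl[f(x_{k+1}) \,\big|\, \mathcal{F}_{k}\bigr]
  \;\le\; f(x_{k}) - c_{k}\,\bigl\|\nabla f(x_{k})\bigr\|^{2},
  \qquad c_{k} > 0 .
\]

\end{document}
```

The conditional expectation in the second display is the kind of object a framework based on conditioning theory manipulates directly, in contrast with trajectory-by-trajectory arguments that reason on individual sample paths.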