Convergence Analysis on a Data-driven Inexact Proximal-indefinite Stochastic ADMM

In this paper, we propose an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) for solving a class of separable convex optimization problems whose objective function consists of two parts: an average of many smooth convex functions and a convex but possibly nonsmooth function. The smooth subproblem is tackled by an inexact accelerated stochastic gradient method based on an adaptive expansion step, which avoids the situation where the sample size is so large that evaluating the objective function or its full gradient becomes prohibitively expensive. The nonsmooth subproblem is solved inexactly under a relative error criterion, which covers the case where the proximal operator is unavailable in closed form. In contrast to most deterministic and stochastic ADMM variants, our dual variable is updated twice per iteration, which permits a larger and more flexible stepsize region. Through a variational analysis, we characterize the generated iterates by a variational inequality and establish a sublinear convergence rate for IPS-ADMM in terms of the objective function gap and the constraint violation. Experiments on the 3D CT reconstruction problem in medical imaging and the graph-guided fused lasso problem in machine learning show that IPS-ADMM is very promising.
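For concreteness, the problem class and the twice-updated dual step described above can be sketched as follows. This is a minimal generic template in the style of symmetric ADMM, not the paper's exact scheme: the penalty parameter $\beta$, the dual stepsizes $s$ and $\tau$, and the omission of the indefinite proximal regularization terms are illustrative assumptions.

\begin{align*}
% Assumed problem template: average of n smooth convex losses f_i plus a
% convex, possibly nonsmooth g, coupled through a linear constraint.
\min_{x,\,y}\ & \frac{1}{n}\sum_{i=1}^{n} f_i(x) \;+\; g(y)
  \qquad \text{s.t.}\quad Ax + By = b,\\[4pt]
% One generic iteration with the multiplier updated twice (symmetric ADMM
% style); "\approx" marks that both subproblems are solved only inexactly.
x^{k+1} \approx\ & \operatorname*{arg\,min}_{x}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x)
  - \langle \lambda^{k},\, Ax\rangle + \frac{\beta}{2}\,\|Ax + By^{k} - b\|^{2},\\
\lambda^{k+1/2} =\ & \lambda^{k} - s\,\beta\,\big(Ax^{k+1} + By^{k} - b\big),\\
y^{k+1} \approx\ & \operatorname*{arg\,min}_{y}\ g(y)
  - \langle \lambda^{k+1/2},\, By\rangle + \frac{\beta}{2}\,\|Ax^{k+1} + By - b\|^{2},\\
\lambda^{k+1} =\ & \lambda^{k+1/2} - \tau\,\beta\,\big(Ax^{k+1} + By^{k+1} - b\big).
\end{align*}

Under this template, the first subproblem would be the one handled by the inexact accelerated stochastic gradient method, and the second the one solved inexactly under the relative error criterion when the proximal operator of $g$ has no closed form.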