Confidence Region for Distributed Stochastic Optimization Problem via Stochastic Gradient Tracking Method

Because stochastic approximation (SA) based algorithms are easy to implement and have low memory requirements, they are popular for distributed stochastic optimization problems. Many works have focused on the consistency of the objective values and the iterates returned by SA-based algorithms. It is of fundamental interest to quantify the uncertainty associated with SA solutions via confidence regions, at a prescribed significance level, for the true solution. In this paper, we discuss a framework for constructing asymptotic confidence regions for the optimal solution of a distributed stochastic optimization problem, with a focus on the distributed stochastic gradient tracking method. To this end, we first establish the asymptotic normality of the Polyak-Ruppert averaged distributed stochastic gradient tracking method. We then estimate the corresponding covariance matrix through online estimators. Finally, we provide a practical procedure for building asymptotic confidence regions for the optimal solution. Numerical tests are also conducted to show the efficiency of the proposed methods.
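
To make the ingredients concrete, the following is a minimal sketch, not the paper's algorithm. It runs distributed stochastic gradient tracking on a hypothetical toy problem (each agent holds a scalar quadratic, the network is a ring with a doubly stochastic mixing matrix), applies Polyak-Ruppert averaging to the network-average iterate, and forms a normal-based confidence interval. The batch-means variance estimate here is a simple stand-in for the paper's online covariance estimator; all problem data, step sizes, and batch counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n agents, agent i holds f_i(x) = 0.5 * (x - b_i)^2,
# so the global optimum of the average objective is mean(b).
n = 5
b = rng.normal(size=n)
x_star = b.mean()

# Ring-topology doubly stochastic mixing matrix (illustrative choice).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def noisy_grad(x):
    # Stochastic gradient of f_i at each agent's local iterate.
    return (x - b) + 0.1 * rng.normal(size=n)

alpha = 0.05              # constant step size, chosen for simplicity
T = 20_000
x = np.zeros(n)
g_old = noisy_grad(x)
y = g_old.copy()          # gradient trackers start at the local gradients
avg = np.zeros(T)         # network-average iterate at each step

for t in range(T):
    x = W @ x - alpha * y            # consensus step plus descent
    g_new = noisy_grad(x)
    y = W @ y + g_new - g_old        # gradient tracking update
    g_old = g_new
    avg[t] = x.mean()

# Polyak-Ruppert average over the tail of the trajectory
# (first half discarded as burn-in).
tail = avg[T // 2:]
point = tail.mean()

# Batch-means variance estimate: a simple stand-in for the paper's
# online covariance estimator.
M = 20
batch_means = tail.reshape(M, -1).mean(axis=1)
se = batch_means.std(ddof=1) / np.sqrt(M)
lo, hi = point - 1.96 * se, point + 1.96 * se
print(f"optimum {x_star:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

The averaged iterate settles near the true optimum `mean(b)`, and the interval `[lo, hi]` illustrates the kind of uncertainty statement the paper constructs rigorously, there via asymptotic normality and online covariance estimation rather than batch means.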
