Since stochastic approximation (SA) based algorithms are easy to implement and require little memory, they are very popular in distributed stochastic optimization. Many works have focused on the consistency of the objective values and iterates returned by SA based algorithms. A question of fundamental interest is how to quantify the uncertainty associated with SA solutions via confidence regions, at a prescribed level of significance, for the true solution. In this paper, we discuss a framework for constructing asymptotic confidence regions for the optimal solution of a distributed stochastic optimization problem, with a focus on the distributed stochastic gradient tracking method. To this end, we first present a central limit theorem for the Polyak-Ruppert averaged distributed stochastic gradient tracking method. We then estimate the corresponding covariance matrix through online estimators. Finally, we provide a practical procedure for building asymptotic confidence regions for the optimal solution. Numerical tests are conducted to demonstrate the efficiency of the proposed methods.
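To make the pipeline concrete, the following is a minimal sketch (not the paper's actual algorithm or covariance estimator) of distributed stochastic gradient tracking with Polyak-Ruppert averaging on a toy quadratic problem, followed by a crude normal-based confidence interval built from a batch-means variance plug-in. The network, step size, objective, and batch-means estimator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 agents on a ring, agent i minimizing
# f_i(x) = 0.5 * (x - b_i)^2 with noisy gradients; the network
# optimum is x* = mean(b_i). All names below are illustrative.
n_agents, dim, alpha, T = 4, 1, 0.05, 20000
b = rng.normal(size=(n_agents, dim))          # per-agent targets
x_star = b.mean(axis=0)

# Doubly stochastic mixing matrix for a ring (Metropolis-style weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def grad(x):
    # Stochastic gradient of f_i evaluated at each agent's state.
    return (x - b) + 0.1 * rng.normal(size=x.shape)

# Distributed stochastic gradient tracking (DSGT) recursion:
#   x_{k+1} = W (x_k - alpha * y_k)
#   y_{k+1} = W y_k + g_{k+1} - g_k,   y_0 = g_0
x = np.zeros((n_agents, dim))
g = grad(x)
y = g.copy()
avg = np.zeros(dim)               # Polyak-Ruppert running average
batch, batch_means, batch_size = [], [], 500
for k in range(1, T + 1):
    x = W @ (x - alpha * y)
    g_new = grad(x)
    y = W @ y + g_new - g
    g = g_new
    net_mean = x.mean(axis=0)     # network-average iterate
    avg += (net_mean - avg) / k   # online Polyak-Ruppert average
    batch.append(net_mean)
    if len(batch) == batch_size:  # batch-means variance plug-in
        batch_means.append(np.mean(batch, axis=0))
        batch = []

# Normal-based 95% confidence interval from the batch-means variance
# (a simple stand-in for the paper's online covariance estimators).
bm = np.array(batch_means)
se = bm.std(axis=0, ddof=1) / np.sqrt(len(bm))
lo, hi = avg - 1.96 * se, avg + 1.96 * se
print("estimate:", avg, "optimum:", x_star, "CI:", lo, hi)
```

On this quadratic example the averaged iterate lands very close to the optimum; the batch-means interval is only a heuristic here, whereas the paper's contribution is a CLT and online covariance estimator that justify such intervals asymptotically.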
Confidence Region for Distributed Stochastic Optimization Problem in Stochastic Gradient Tracking Method