A Double Smoothing Technique for Constrained Convex Optimization Problems and Applications to Optimal Control

In this paper, we propose an efficient approach for solving a class of convex optimization problems in Hilbert spaces. Our feasible region is a (possibly infinite-dimensional) simple convex set, i.e., we assume that projections onto this set can be computed efficiently. The problem we consider is the minimization of a convex function over this region under the additional constraint $Au \in T$, where $A$ is a linear operator and $T$ is a (finite-dimensional) convex set whose dimension is small compared to that of the feasible region. In our approach, we dualize the linear constraints, solve the resulting dual problem with a purely dual gradient method, and show how to reconstruct an approximate primal solution. In order to accelerate our scheme, we introduce a novel double smoothing technique that involves regularization of the dual problem, allowing the use of a fast gradient method. As a result, we obtain a method with complexity $O\!\left(\frac{1}{\epsilon} \ln \frac{1}{\epsilon}\right)$ gradient iterations, where $\epsilon$ is the desired accuracy for the primal-dual solution. Our approach covers, in particular, optimal control problems with trajectories governed by a system of linear differential equations, where the additional constraints can, for example, force the trajectory to visit given convex sets at certain moments in time.
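As a rough illustration of the double smoothing idea, the sketch below applies it to a toy problem: minimizing a linear objective over a box subject to a few linear equality constraints. All names, data, and parameter choices (the instance $A$, $b$, $c$, the smoothing parameters `mu` and `kappa`, the iteration budget) are illustrative assumptions, not taken from the paper; the accelerated step is a standard constant-momentum fast gradient method for smooth strongly concave functions rather than the exact scheme analyzed by the authors.

```python
import numpy as np

# Hypothetical toy instance (A, b, c, box feasible set are illustrative):
# minimize  c^T u  over the box U = [0, 1]^n  subject to  A u = b,
# with m << n, mimicking "few linear constraints over a simple feasible set".
rng = np.random.default_rng(0)
n, m = 50, 3
A = rng.standard_normal((m, n))
b = A @ rng.uniform(0, 1, n)            # built from a feasible point
c = rng.standard_normal(n)

eps = 1e-2                              # target accuracy (illustrative choice)
mu = eps                                # first smoothing: strongly convex primal term
kappa = eps                             # second smoothing: dual regularization
L = np.linalg.norm(A, 2) ** 2 / mu + kappa   # Lipschitz bound for the smoothed dual gradient

def primal_of(lam):
    """Minimizer of c^T u + lam^T (A u - b) + (mu/2)||u||^2 over the box [0,1]^n."""
    return np.clip(-(c + A.T @ lam) / mu, 0.0, 1.0)

def grad_smoothed_dual(lam):
    """Gradient of the doubly smoothed dual  g_mu(lam) - (kappa/2)||lam||^2."""
    return A @ primal_of(lam) - b - kappa * lam

# Fast gradient method on the smooth, strongly concave smoothed dual.
q = kappa / L
beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))
lam = y = np.zeros(m)
for _ in range(20000):
    lam_new = y + grad_smoothed_dual(y) / L      # ascent step on the dual
    y = lam_new + beta * (lam_new - lam)          # Nesterov momentum
    lam = lam_new
    u = primal_of(lam)                            # approximate primal reconstruction
    if np.linalg.norm(A @ u - b) < eps:           # stop early if already nearly feasible
        break

print("infeasibility ||Au - b||:", np.linalg.norm(A @ u - b))
print("objective c^T u:", c @ u)
```

The two regularizations play different roles: the primal term `mu` makes the dual function differentiable with a Lipschitz gradient, while the dual term `kappa` makes it strongly concave, which is what lets an accelerated gradient method drive the dual gradient norm, and hence the primal infeasibility, down at a linear rate.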

Citation

CORE Discussion Paper 2010/34 (updated version), Center for Operations Research and Econometrics (CORE), Université catholique de Louvain (UCL), Belgium. Modified version accepted for publication in the SIAM Journal on Optimization under the title "Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization".
