Multi-target Linear-quadratic control problem: semi-infinite interval

We consider the multi-target linear-quadratic control problem on a semi-infinite interval. We show that the problem can be reduced to a simple convex optimization problem on the simplex. Citation: To appear in Mathematical Problems in Engineering, 2012.

On the O(1/t) convergence rate of alternating direction method

The classical alternating direction method (ADM) has recently found many new applications, and its empirical efficiency has been well illustrated in various fields. However, the estimate of ADM’s convergence rate has remained a theoretical challenge for decades. In this note, we provide a uniform proof to show the O(1/t) convergence rate for both the … Read more
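For orientation, a standard statement of the ADM iteration for a two-block problem $\min\{f(x)+g(y) : Ax + By = b\}$ is the following (generic notation, not taken from the note):

$$
\begin{aligned}
x^{k+1} &\in \arg\min_x\; f(x) + \langle \lambda^k, Ax + By^k - b\rangle + \tfrac{\beta}{2}\|Ax + By^k - b\|^2,\\
y^{k+1} &\in \arg\min_y\; g(y) + \langle \lambda^k, Ax^{k+1} + By - b\rangle + \tfrac{\beta}{2}\|Ax^{k+1} + By - b\|^2,\\
\lambda^{k+1} &= \lambda^k + \beta\,(Ax^{k+1} + By^{k+1} - b),
\end{aligned}
$$

with penalty parameter $\beta > 0$; the O(1/t) rate describes how quickly an optimality measure of the iterates decays with the iteration counter $t$.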

A relaxed customized proximal point algorithm for separable convex programming

The alternating direction method (ADM) is a classical approach to solving a linearly constrained separable convex programming problem (the primal problem), and it is well known that ADM is essentially the application of a concrete form of the proximal point algorithm (PPA) (more precisely, the Douglas-Rachford splitting method) to the corresponding dual problem. This paper shows that an … Read more
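As a reminder (our notation, not the paper's), the proximal point algorithm generates, for a maximal monotone operator $T$ and parameters $\lambda_k > 0$,

$$ z^{k+1} = (I + \lambda_k T)^{-1}(z^k), $$

and the connection quoted above is that ADM can be obtained by applying a concrete form of this resolvent iteration to the dual of the linearly constrained problem.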

Generalized Forward-Backward Splitting

This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$’s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth … Read more
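As an illustration of the problem class $F + \sum_{i=1}^n G_i$, here is a minimal numerical sketch of a generalized forward-backward iteration in its commonly stated form, with a least-squares $F$, an $\ell_1$ term, and a nonnegativity constraint; the step sizes, weights, and problem data are illustrative assumptions, not the paper's.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_nonneg(v):
    """Proximity operator of the indicator of the nonnegative orthant."""
    return np.maximum(v, 0.0)

def gfb_sketch(A, b, reg=0.1, iters=500):
    """Generalized forward-backward sketch for
       min_x 0.5*||Ax-b||^2 + reg*||x||_1 + indicator(x >= 0)."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of grad F
    gamma = 1.0 / L                              # step size in (0, 2/L)
    lam = 1.0                                    # relaxation parameter
    w = np.array([0.5, 0.5])                     # weights, sum to 1
    prox = [lambda v, g: soft_threshold(v, g * reg),   # prox of G_1 = reg*||.||_1
            lambda v, g: project_nonneg(v)]            # prox of G_2 (scale-invariant)
    z = [np.zeros(n), np.zeros(n)]
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # forward (gradient) step on F
        for i in range(2):                       # backward (prox) step for each G_i
            z[i] = z[i] + lam * (prox[i](2 * x - z[i] - gamma * grad, gamma / w[i]) - x)
        x = w[0] * z[0] + w[1] * z[1]            # weighted average of auxiliary points
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    print(gfb_sketch(A, b)[:10])
```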

Accelerated and Inexact forward-backward algorithms

We propose a convergence analysis of accelerated forward-backward splitting methods for minimizing composite functions, when the proximity operator is not available in closed form, and is thus computed up to a certain precision. We prove that the $1/k^2$ convergence rate for the function values can be achieved if the admissible errors are of a certain … Read more
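For context, a minimal sketch of an accelerated forward-backward (FISTA-type) iteration with an exactly computed proximity operator is below; the paper's analysis concerns the case where the prox is computed only approximately, which this sketch does not model, and the test problem and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_fb_lasso(A, b, reg=0.1, iters=300):
    """Accelerated forward-backward sketch for min 0.5*||Ax-b||^2 + reg*||x||_1.
    Soft-thresholding (the prox of the l1 term) is computed exactly here."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, reg / L)    # forward-backward step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov-type extrapolation
        x, t = x_new, t_new
    return x
```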

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions … Read more
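A minimal illustrative sketch of a randomized coordinate proximal-gradient step of this kind, for the lasso with blocks of size one, might look as follows (our code and parameters, not the authors'):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rcd_lasso(A, b, reg=0.1, iters=20000, seed=0):
    """Randomized coordinate descent sketch for min 0.5*||Ax-b||^2 + reg*||x||_1.
    Each step updates one randomly chosen coordinate with a prox-gradient step
    using that coordinate's own Lipschitz constant."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Li = (A ** 2).sum(axis=0)          # coordinate-wise Lipschitz constants ||a_i||^2
    x = np.zeros(n)
    r = A @ x - b                      # maintained residual Ax - b
    for _ in range(iters):
        i = rng.integers(n)            # pick a coordinate uniformly at random
        g = A[:, i] @ r                # partial gradient along coordinate i
        x_new = soft_threshold(x[i] - g / Li[i], reg / Li[i])
        r += A[:, i] * (x_new - x[i])  # update the residual incrementally
        x[i] = x_new
    return x
```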

Implementing the simplex method as a cutting-plane method

We show that the simplex method can be interpreted as a cutting-plane method, provided that a special pricing rule is used. This approach is motivated by the recent success of the cutting-plane method in the solution of special stochastic programming problems. We compare the classic Dantzig pricing rule and the rule that derives from the … Read more

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB … Read more
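The dual-gradient interpretation mentioned above can be sketched as follows for basis-pursuit-type problems: the (unaccelerated) linearized Bregman iteration is gradient ascent on a smooth dual, and the ALB method accelerates exactly this gradient scheme. The code below is an illustrative sketch with assumed step size and regularization parameter, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, mu=5.0, iters=5000):
    """Linearized Bregman sketch, written as gradient ascent on the dual of
       min mu*||x||_1 + 0.5*||x||^2  s.t.  Ax = b.
    The ALB method of the paper adds Nesterov-type momentum to these dual
    updates; that acceleration is omitted here."""
    m, n = A.shape
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size bounded by 1/||A||^2
    y = np.zeros(m)                         # dual variable
    for _ in range(iters):
        x = soft_threshold(A.T @ y, mu)     # primal iterate recovered from the dual
        y = y + tau * (b - A @ x)           # dual gradient ascent step
    return soft_threshold(A.T @ y, mu)
```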

Convex relaxations of chance constrained optimization problems

In this paper we develop convex relaxations of chance constrained optimization problems in order to obtain lower bounds on the optimal value. Unlike existing statistical lower bounding techniques, our approach is designed to provide deterministic lower bounds. We show that a version of the proposed scheme leads to a tractable convex relaxation when the chance … Read more
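For orientation (generic notation, not taken from the paper), a chance-constrained program has the form

$$ \min_{x \in X}\; f(x) \quad \text{s.t.} \quad \mathbb{P}\{G(x,\xi) \le 0\} \ge 1 - \alpha, $$

where $\xi$ is a random vector and $\alpha \in (0,1)$ is the allowed violation probability; such probabilistic constraints are in general nonconvex, which is what motivates convex relaxations whose optimal values bound the true optimal value from below.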

Implementation of a block-decomposition algorithm for solving large-scale conic semidefinite programming problems

In this paper, we consider block-decomposition first-order methods for solving large-scale conic semidefinite programming problems. Several ingredients are introduced to speed up the method in its pure form, such as: an aggressive choice of stepsize for performing the extragradient step; use of scaled inner products in the primal and dual spaces; dynamic update of the scaled … Read more
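In generic form (our notation), the conic semidefinite programs targeted by such solvers can be written as

$$ \min_{X}\; \langle C, X\rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \in \mathcal{K}, $$

where $\mathcal{A}$ is a linear map and $\mathcal{K}$ is a closed convex cone, for example a product of semidefinite, second-order, and nonnegative-orthant cones.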