Regularized nonlinear acceleration

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the …
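As a concrete illustration of the scheme described above, here is a minimal Python/NumPy sketch of one regularized extrapolation step; the function name and the regularization parameter lam are our illustrative choices, not the paper's:

import numpy as np

def rna_extrapolate(iterates, lam=1e-8):
    # iterates: list of vectors x_0, ..., x_k produced by any optimizer
    X = np.asarray(iterates, dtype=float)    # shape (k+1, n)
    R = X[1:] - X[:-1]                       # residuals r_i = x_{i+1} - x_i
    RR = R @ R.T                             # small k-by-k Gram matrix
    RR = RR / np.linalg.norm(RR, 2)          # rescale before regularizing
    k = RR.shape[0]
    z = np.linalg.solve(RR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                          # weights summing to one
    return c @ X[:-1]                        # nonlinear average of the iterates

The weights come from one small linear system, so the extrapolation can run alongside the base method and be refreshed as new iterates arrive.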

On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms

The need for scalable numerical solutions has motivated the development of asynchronous parallel algorithms, where a set of nodes run in parallel with little or no synchronization, thus computing with delayed information. This paper studies the convergence of the asynchronous parallel algorithm ARock under potentially unbounded delays. ARock is a general asynchronous algorithm that has …
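To make the asynchronous setting concrete, here is a hedged toy sketch (Python threads) of an ARock-style coordinate update for the fixed point of T(x) = x - gamma*(A@x - b); the problem instance, step sizes, and lock-free single-entry writes are our illustrative choices:

import numpy as np
import threading

n = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A @ A.T / n + np.eye(n)                  # symmetric positive definite
b = rng.standard_normal(n)
gamma = 1.0 / np.linalg.norm(A, 2)           # makes T(x) = x - gamma*(A@x - b) a contraction
x = np.zeros(n)                              # shared iterate, read and written without locks

def worker(seed, updates=4000, eta=0.5):
    rng = np.random.default_rng(seed)
    for _ in range(updates):
        x_hat = x.copy()                     # possibly stale (delayed) snapshot
        i = rng.integers(n)                  # update one coordinate at a time
        s_i = gamma * (A[i] @ x_hat - b[i])  # i-th entry of x_hat - T(x_hat)
        x[i] -= eta * s_i                    # single-entry write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("residual:", np.linalg.norm(A @ x - b))

Each worker computes with whatever delayed copy of x it last read; the paper's question is what happens when such delays are not bounded a priori.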

On the identification of optimal partition for semidefinite optimization

The concept of the optimal partition was originally introduced for linear optimization and linear complementarity problems and subsequently extended to semidefinite optimization. For linear optimization and sufficient linear complementarity problems, from a central solution sufficiently close to the optimal set, the optimal partition and a maximally complementary optimal solution can be identified in strongly polynomial …
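A hedged sketch of the eigenvalue-thresholding idea behind identifying the partition from a near-central pair; the interface and the tolerance tol are our assumptions, and the paper's actual identification rule may differ:

import numpy as np

def identify_partition(X, S, tol=1e-6):
    # X, S: symmetric PSD matrices from a primal-dual pair close to the central path
    # (hypothetical interface for illustration)
    wX, QX = np.linalg.eigh(X)
    wS, QS = np.linalg.eigh(S)
    B = QX[:, wX > tol]      # eigenvectors carrying the primal rank
    N = QS[:, wS > tol]      # eigenvectors carrying the dual rank
    return B, N              # the subspace T of the partition complements B and N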

On max-k-sums

The max-$k$-sum of a set of real scalars is the maximum sum of a subset of size $k$, or alternatively the sum of the $k$ largest elements. We study two extensions: First, we show how to obtain smooth approximations to functions that are pointwise max-$k$-sums of smooth functions. Second, we discuss how the max-$k$-sum can …
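The basic object is easy to compute exactly; a short NumPy version, together with a standard linear-programming characterization (the LP view is a well-known fact, not a claim from the abstract):

import numpy as np

def max_k_sum(x, k):
    # sum of the k largest entries of x; equivalently the optimal value of
    # the LP  max { w @ x : 0 <= w <= 1, sum(w) = k }
    return np.partition(np.asarray(x, dtype=float), -k)[-k:].sum()

print(max_k_sum([3.0, -1.0, 7.0, 2.0], 2))   # prints 10.0, i.e. 7 + 3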

Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data

We consider the problem of minimizing a smooth convex objective function subject to the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take …
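The truncated description suggests an iteration of the following general shape; this is a hedged reconstruction with our own symbols ($\alpha_k$, $\lambda_k$, $\beta_k$), not necessarily the paper's exact scheme:
\[
x_{k+1} \;=\; x_k + \alpha_k\,(x_k - x_{k-1}) \;-\; \lambda_k\bigl(\nabla f(x_k) + \beta_k\,\nabla g(x_k)\bigr),
\]
where $f$ is the objective, $g$ is the penalized function whose set of minima defines the constraint set, $\lambda_k$ are step sizes, $\beta_k \to \infty$ are penalty parameters, and the term $\alpha_k(x_k - x_{k-1})$ carries the inertial effect.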

A first-order primal-dual algorithm with linesearch

The paper proposes a linesearch for the primal-dual method. Each iteration of the linesearch requires updating only the dual (or primal) variable. For many problems, in particular for regularized least squares, the linesearch does not require any additional matrix-vector multiplications. We prove convergence of the proposed method under the standard assumptions. We also show …
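A hedged sketch of a primal-dual (PDHG) iteration with a backtracking linesearch in the spirit of the abstract, for a saddle-point problem min_x max_y <Kx, y> + g(x) - f*(y); the prox callbacks and all parameter names are our assumptions. Note that each backtracking trial recomputes only the dual variable:

import numpy as np

def pdhg_linesearch(K, prox_g, prox_fstar, x0, y0,
                    tau=1.0, beta=1.0, delta=0.99, mu=0.7, iters=300):
    x, y = np.asarray(x0, float).copy(), np.asarray(y0, float).copy()
    Kty = K.T @ y
    theta = 1.0
    for _ in range(iters):
        x_new = prox_g(x - tau * Kty, tau)           # primal prox step
        tau_new = tau * np.sqrt(1.0 + theta)         # optimistic step growth
        while True:                                  # linesearch loop
            theta = tau_new / tau
            sigma = beta * tau_new
            x_bar = x_new + theta * (x_new - x)      # extrapolated primal point
            y_new = prox_fstar(y + sigma * (K @ x_bar), sigma)
            Kty_new = K.T @ y_new                    # reused by the next primal step
            ok = (np.sqrt(beta) * tau_new * np.linalg.norm(Kty_new - Kty)
                  <= delta * np.linalg.norm(y_new - y))
            if ok:
                break                                # trial step accepted
            tau_new *= mu                            # shrink and retry
        x, y, tau, Kty = x_new, y_new, tau_new, Kty_new
    return x, y

For regularized least squares, K @ x_bar can be formed from cached products of K with x_new and x, which is why no extra matrix-vector multiplications are needed per backtracking trial.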

Elementary polytopes with high lift-and-project ranks for strong positive semidefinite operators

We consider operators acting on convex subsets of the unit hypercube. These operators are used in constructing convex relaxations of combinatorial optimization problems presented as a 0,1 integer programming problem or a 0,1 polynomial optimization problem. Our focus is mostly on operators that, when expressed as a lift-and-project operator, involve the use of semidefiniteness constraints …
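As a reminder of the flavor of such operators (our choice of a representative example, not necessarily the ones studied in the paper), the Lovász-Schrijver $N_+$ operator applied to a polytope $P \subseteq [0,1]^n$ can be written in homogenized form as
\[
N_+(P) \;=\; \Bigl\{\, x \in [0,1]^n \;:\; \exists\, Y \in \mathcal{S}^{n+1}_{+},\;
Y e_0 = \operatorname{diag}(Y) = \begin{pmatrix}1\\ x\end{pmatrix},\;
Y e_i \in K,\; Y(e_0 - e_i) \in K \ \ \text{for all } i \,\Bigr\},
\]
where $K = \operatorname{cone}\{(1, x) : x \in P\}$. The rank of $P$ with respect to the operator is the least number of applications needed to reach the integer hull.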

New analysis of linear convergence of gradient-type methods via unifying error bound conditions

Linear convergence of gradient-type methods for non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, the restricted strong convexity property, the quadratic growth property, and the Kurdyka-Łojasiewicz property. In this paper, we first define a group of error bound conditions …
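For reference, common forms of the four conditions named above, stated for a smooth convex $f$ with optimal value $f^\star$ and solution set $X^\star$ (constants and exact formulations vary across the literature):
\begin{align*}
\text{Error bound (EB):}\quad & \|\nabla f(x)\| \;\ge\; \kappa \,\mathrm{dist}(x, X^\star),\\
\text{Quadratic growth (QG):}\quad & f(x) - f^\star \;\ge\; \tfrac{\mu}{2}\,\mathrm{dist}(x, X^\star)^2,\\
\text{Polyak-Łojasiewicz (PL):}\quad & \tfrac{1}{2}\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^\star\bigr),\\
\text{Restricted strong convexity (RSC):}\quad & \langle \nabla f(x),\, x - \bar{x} \rangle \;\ge\; \nu\,\|x - \bar{x}\|^2, \quad \bar{x} = \mathrm{proj}_{X^\star}(x).
\end{align*}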

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition

In 1963, Polyak proposed a simple condition that is sufficient to show a global linear convergence rate for gradient descent. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) …
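The linear rate follows in two lines for an $L$-smooth $f$ minimized by gradient descent with step $1/L$: the descent lemma gives the first inequality below, and the PL condition $\tfrac12\|\nabla f(x)\|^2 \ge \mu\,(f(x)-f^\star)$ gives the second:
\[
f(x_{k+1}) \;\le\; f(x_k) - \tfrac{1}{2L}\|\nabla f(x_k)\|^2 \;\le\; f(x_k) - \tfrac{\mu}{L}\bigl(f(x_k) - f^\star\bigr),
\]
hence $f(x_{k+1}) - f^\star \le \bigl(1 - \tfrac{\mu}{L}\bigr)\bigl(f(x_k) - f^\star\bigr)$, a global linear rate with no convexity assumption beyond PL.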

Improving an ADMM-like Splitting Method via Positive-Indefinite Proximal Regularization for Three-Block Separable Convex Minimization

The augmented Lagrangian method (ALM) is fundamental for solving convex minimization models with linear constraints. When the objective function is separable such that it can be represented as the sum of more than one function without coupled variables, various splitting versions of the ALM have been well studied in the literature such as the alternating …
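For concreteness, a hedged sketch of the three-block setting the title refers to (notation ours): for $\min\, \theta_1(x_1) + \theta_2(x_2) + \theta_3(x_3)$ subject to $A_1 x_1 + A_2 x_2 + A_3 x_3 = b$, the augmented Lagrangian is
\[
\mathcal{L}_\beta(x_1, x_2, x_3, \lambda) \;=\; \sum_{i=1}^{3} \theta_i(x_i) \;-\; \lambda^{\top}\Bigl(\sum_{i=1}^{3} A_i x_i - b\Bigr) \;+\; \frac{\beta}{2}\,\Bigl\|\sum_{i=1}^{3} A_i x_i - b\Bigr\|^2,
\]
and a splitting version updates $x_1$, $x_2$, $x_3$ sequentially by minimizing $\mathcal{L}_\beta$ with the other blocks fixed, followed by $\lambda \leftarrow \lambda - \beta\bigl(\sum_i A_i x_i - b\bigr)$; the paper's variant augments subproblems with a proximal term whose weighting matrix need not be positive semidefinite.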