A universal and structured way to derive dual optimization problem formulations

The dual problem of a convex optimization problem can be obtained in a relatively simple and structured way by using a well-known result in convex analysis, namely Fenchel’s duality theorem. This alternative way of forming a strong dual problem is the subject of this paper. We recall some standard results from convex analysis and then …
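
As a hedged illustration of the theorem the abstract refers to (notation ours, not necessarily the paper's): for proper convex functions $f$, $g$ and a linear map $A$, Fenchel–Rockafellar duality states
\[
\inf_{x} \bigl\{ f(x) + g(Ax) \bigr\} \;=\; \sup_{y} \bigl\{ -f^{*}(-A^{\top} y) - g^{*}(y) \bigr\},
\]
where $f^{*}(u) = \sup_{x} \{ \langle u, x \rangle - f(x) \}$ is the convex conjugate, and strong duality holds under a constraint qualification such as $0 \in \operatorname{int}(\operatorname{dom} g - A \operatorname{dom} f)$.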

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

A joint chance constrained optimization problem involves multiple uncertain constraints, i.e., constraints with stochastic parameters, that are jointly required to be satisfied with probability exceeding a prespecified threshold. In a distributionally robust joint chance constrained optimization problem (DRCCP), the joint chance constraint is required to hold for all probability distributions of the stochastic parameters from …
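
In illustrative notation of our choosing (the paper's setup may differ in its details), a DRCCP with $m$ uncertain constraints can be written as
\[
\min_{x \in X} \; c^{\top} x \quad \text{s.t.} \quad \inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\bigl( a_i(\xi)^{\top} x \le b_i(\xi),\; i = 1,\dots,m \bigr) \;\ge\; 1 - \epsilon,
\]
where $\mathcal{P}$ is the ambiguity set of candidate distributions of the stochastic parameter $\xi$ and $\epsilon \in (0,1)$ is the prespecified risk tolerance.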

On the convergence of a regularized Jacobi algorithm for convex optimization

In this paper we consider the regularized version of the Jacobi algorithm, a block coordinate descent method for convex optimization with differentiable objective function and block-separable constraints that has been recently proposed in the literature. Under certain regularity assumptions on the objective function, this algorithm has been shown to satisfy the so-called sufficient decrease condition, …
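
A minimal sketch of one regularized Jacobi step, assuming single-coordinate blocks and a box-constrained convex quadratic; the objective, the choice of $\rho$, and the closed-form block solve are our illustrative assumptions, not the paper's setting.

import numpy as np

def regularized_jacobi_step(Q, b, x, rho, lb, ub):
    """One parallel (Jacobi) pass for min 0.5*x'Qx - b'x over the box [lb, ub].

    Each coordinate i solves, with the others frozen at the current iterate,
        min_{xi in [lb, ub]}  0.5*Q[i,i]*xi^2 + xi * sum_{j != i} Q[i,j]*x[j]
                              - b[i]*xi + (rho/2) * (xi - x[i])^2,
    which has the closed-form clipped solution used below; all coordinates
    can therefore be updated simultaneously.
    """
    off_diag = Q @ x - np.diag(Q) * x            # sum_{j != i} Q[i, j] * x[j]
    x_new = (b - off_diag + rho * x) / (np.diag(Q) + rho)
    return np.clip(x_new, lb, ub)                # project each block onto the box

# Illustrative run; rho is taken large enough that the separable model
# majorizes the quadratic, which guarantees monotone descent here.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M.T @ M + np.eye(5)                          # positive definite
b = rng.standard_normal(5)
rho = np.linalg.norm(Q, 2)
x = np.zeros(5)
for _ in range(300):
    x = regularized_jacobi_step(Q, b, x, rho, lb=-1.0, ub=1.0)
print(x)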

Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the sum of two smooth convex functions with different Lipschitz constants. We show …
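
As a hedged sketch of the problem class (notation ours): the method targets
\[
\min_{x \in X} \; \psi(x) := f(x) + h(x),
\]
where $f$ and $h$ are convex with $L_f$- and $L_h$-Lipschitz continuous gradients. When evaluating one of the two gradients is much more expensive, a sliding scheme aims to bound the number of evaluations of that gradient in terms of its own Lipschitz constant alone, instead of paying a rate driven by $L_f + L_h$ for both gradients at every iteration as a black-box method would.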

Positive-Indefinite Proximal Augmented Lagrangian Method and its Application to Full Jacobian Splitting for Multi-block Separable Convex Minimization Problems

The augmented Lagrangian method (ALM) is fundamental for solving convex programming problems with linear constraints. The proximal version of ALM, which regularizes ALM’s subproblem over the primal variable at each iteration by an additional positive-definite quadratic proximal term, has been well studied in the literature. In this paper, we show that it is not necessary …
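
For the linearly constrained problem $\min f(x)$ s.t. $Ax = b$, a proximal ALM iteration of the kind discussed can be sketched as (notation ours):
\[
x^{k+1} = \arg\min_{x} \Bigl\{ \mathcal{L}_{\beta}(x, \lambda^{k}) + \tfrac{1}{2} \| x - x^{k} \|_{D}^{2} \Bigr\}, \qquad \lambda^{k+1} = \lambda^{k} - \beta \,(A x^{k+1} - b),
\]
with augmented Lagrangian $\mathcal{L}_{\beta}(x, \lambda) = f(x) - \lambda^{\top}(Ax - b) + \tfrac{\beta}{2}\|Ax - b\|^{2}$. The classical analysis requires the proximal matrix $D$ to be positive definite; the abstract's point is that this requirement can be relaxed.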

Regularized nonlinear acceleration

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the …
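
A minimal sketch of this kind of extrapolation, under our reading: the weights solve a small regularized least-squares problem with a normalization constraint. The regularization $\lambda$ and the closed-form solution below are standard for this construction but may differ in detail from the paper.

import numpy as np

def rna_extrapolate(X, lam=1e-8):
    """Regularized nonlinear acceleration from iterates X = [x_0 ... x_k] (columns).

    The weights c minimize ||R c||^2 + lam * ||c||^2 subject to sum(c) = 1,
    where column i of R is the residual x_{i+1} - x_i; the constrained
    minimizer is c = z / sum(z) with (R'R + lam * I) z = ones.
    """
    R = np.diff(X, axis=1)                       # one residual per column
    K = R.T @ R + lam * np.eye(R.shape[1])
    z = np.linalg.solve(K, np.ones(R.shape[1]))
    c = z / z.sum()
    return X[:, :-1] @ c                         # nonlinear average of the iterates

# Illustrative use: accelerate plain gradient descent on a convex quadratic.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20))
A = A.T @ A / 20 + 0.1 * np.eye(20)              # positive definite Hessian
b = rng.standard_normal(20)
x = np.zeros(20)
iterates = [x]
for _ in range(10):
    x = x - 0.1 * (A @ x - b)                    # gradient step on 0.5 x'Ax - b'x
    iterates.append(x)
x_acc = rna_extrapolate(np.column_stack(iterates))
print(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x_acc - b))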

On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms

The need for scalable numerical solutions has motivated the development of asynchronous parallel algorithms, where a set of nodes run in parallel with little or no synchronization, thus computing with delayed information. This paper studies the convergence of the asynchronous parallel algorithm ARock under potentially unbounded delays. ARock is a general asynchronous algorithm that has …
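
A serial simulation sketch of the kind of delayed update studied: one coordinate of $x$ is updated using a stale copy of the whole vector, with a geometrically distributed (hence unbounded-support) delay. The operator T, the step size, and the delay distribution are our illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
A = A.T @ A / 30 + 0.5 * np.eye(30)              # positive definite
b = rng.standard_normal(30)
L = np.linalg.norm(A, 2)                         # Lipschitz constant of the gradient

def T(x):
    """Nonexpansive fixed-point map whose fixed point solves A x = b."""
    return x - (A @ x - b) / L

x = np.zeros(30)
history = [x.copy()]                             # past iterates, to draw stale copies
eta = 0.5                                        # relaxation (step) parameter
for _ in range(5000):
    d = rng.geometric(0.2) - 1                   # random delay, unbounded support
    x_stale = history[max(0, len(history) - 1 - d)]
    i = rng.integers(30)                         # coordinate chosen uniformly
    x[i] -= eta * (x_stale - T(x_stale))[i]      # update one coordinate with stale info
    history.append(x.copy())
print(np.linalg.norm(A @ x - b))                 # residual after the asynchronous run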

On the identification of optimal partition for semidefinite optimization

The concept of the optimal partition was originally introduced for linear optimization and linear complementarity problems and subsequently extended to semidefinite optimization. For linear optimization and sufficient linear complementarity problems, from a central solution sufficiently close to the optimal set, the optimal partition and a maximally complementary optimal solution can be identified in strongly polynomial …
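
For context, a hedged recollection of the linear-optimization case (our summary, not the paper's text): for $\min c^{\top} x$ s.t. $Ax = b$, $x \ge 0$, the optimal partition is
\[
B = \{\, i : x_i > 0 \ \text{for some optimal } x \,\}, \qquad N = \{\, i : s_i > 0 \ \text{for some optimal dual slack } s \,\},
\]
and the Goldman–Tucker theorem gives $B \cap N = \emptyset$ and $B \cup N = \{1, \dots, n\}$. In semidefinite optimization the index sets are replaced, roughly, by the ranges of maximally complementary primal and dual optimal solutions, plus a possible third subspace where strict complementarity fails.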

On max-k-sums

The max-$k$-sum of a set of real scalars is the maximum sum of a subset of size $k$, or alternatively the sum of the $k$ largest elements. We study two extensions: First, we show how to obtain smooth approximations to functions that are pointwise max-$k$-sums of smooth functions. Second, we discuss how the max-$k$-sum can …
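
A small, self-contained illustration of the object being studied. The variational identity in the second function is a well-known reformulation (it also suggests one natural route to smoothing, by replacing $\max(\cdot, 0)$ with a smooth surrogate), offered here as context rather than as the paper's method.

import numpy as np

def max_k_sum(x, k):
    """Sum of the k largest entries of x (the max-k-sum)."""
    return np.sort(x)[-k:].sum()

def max_k_sum_variational(x, k):
    """Equivalent value via min_t { k*t + sum_i max(x_i - t, 0) };
    the minimum is attained at t equal to the k-th largest entry."""
    t = np.sort(x)[-k]
    return k * t + np.maximum(x - t, 0.0).sum()

x = np.array([4.0, -1.0, 7.0, 2.0, 5.0])
print(max_k_sum(x, 3), max_k_sum_variational(x, 3))   # both print 16.0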

Gradient-type penalty method with inertial effects for solving constrained convex optimization problems with smooth data

We consider the problem of minimizing a smooth convex objective function over the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take …
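
In notation of our choosing: to minimize $f$ over $\operatorname{argmin} g$, a penalized gradient step with inertia might take the form
\[
x^{k+1} = x^{k} + \alpha_k \,(x^{k} - x^{k-1}) - \lambda_k \bigl( \nabla f(x^{k}) + \beta_k \nabla g(x^{k}) \bigr),
\]
where $\beta_k \to \infty$ is the penalty parameter, $\lambda_k$ the step size, and $\alpha_k (x^{k} - x^{k-1})$ the inertial term. This is offered as a sketch of the shape such schemes take, not as the paper's exact iteration.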