Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent-type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the sum of two smooth convex functions with different Lipschitz constants. We show … Read more
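
Since the teaser stops before the mechanism, here is a minimal sketch of the gradient-sliding idea it alludes to, not the authors' exact AGS scheme: each outer accelerated step evaluates the expensive gradient once and then takes several cheap inner steps on the other component. All problem data, step sizes, and iteration counts below are hypothetical.

```python
import numpy as np

# Illustrative gradient-sliding sketch (not the paper's exact AGS):
# minimize f(x) + h(x), where evaluating grad_f is assumed expensive.
# Each outer step linearizes f once, then runs several cheap inner
# gradient steps on  <grad_f, u> + h(u) + (beta/2)||u - x||^2.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)); A = A @ A.T / n + np.eye(n)  # "expensive" quadratic f
B = rng.standard_normal((n, n)); B = B @ B.T / n              # "cheap" quadratic h
b = rng.standard_normal(n)

grad_f = lambda x: A @ x - b          # expensive oracle (counted below)
grad_h = lambda x: B @ x              # cheap oracle
L_h = np.linalg.eigvalsh(B).max()
beta = np.linalg.eigvalsh(A).max()

x = y = np.zeros(n)
f_evals = 0
for k in range(1, 101):
    alpha = 2.0 / (k + 1)
    z = (1 - alpha) * y + alpha * x           # extrapolation point
    g = grad_f(z); f_evals += 1               # single expensive gradient per outer step
    u = x
    for _ in range(10):                       # inner "sliding" loop uses only grad_h
        u = u - (1.0 / (L_h + beta)) * (g + grad_h(u) + beta * (u - x))
    x = u
    y = (1 - alpha) * y + alpha * x
print("expensive gradient evaluations:", f_evals)
```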

Improving an ADMM-like Splitting Method via Positive-Indefinite Proximal Regularization for Three-Block Separable Convex Minimization

The augmented Lagrangian method (ALM) is fundamental for solving convex minimization models with linear constraints. When the objective function is separable, i.e., it can be represented as the sum of more than one function without coupled variables, various splitting versions of the ALM have been well studied in the literature, such as the alternating … Read more
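
For context, the scheme the paper improves is the direct three-block splitting of the ALM, which updates the blocks sequentially and then the multiplier; without a corrective term (here, positive-indefinite proximal regularization), this plain version is known to diverge in general. A schematic iteration on a hypothetical equality-constrained quadratic:

```python
import numpy as np

# Direct three-block extension of ADMM on a toy problem (schematic only):
#   min  0.5*||x1-c1||^2 + 0.5*||x2-c2||^2 + 0.5*||x3-c3||^2
#   s.t. A1 x1 + A2 x2 + A3 x3 = b
rng = np.random.default_rng(1)
m, n = 8, 5
A = [rng.standard_normal((m, n)) for _ in range(3)]
c = [rng.standard_normal(n) for _ in range(3)]
b = rng.standard_normal(m)
x = [np.zeros(n) for _ in range(3)]
lam, beta = np.zeros(m), 1.0

for _ in range(200):
    for i in range(3):                       # sequential (Gauss-Seidel) block updates
        r = sum(A[j] @ x[j] for j in range(3) if j != i) - b
        lhs = np.eye(n) + beta * A[i].T @ A[i]
        rhs = c[i] - A[i].T @ (lam + beta * r)
        x[i] = np.linalg.solve(lhs, rhs)
    lam += beta * (sum(A[i] @ x[i] for i in range(3)) - b)   # multiplier update

print("constraint residual:", np.linalg.norm(sum(A[i] @ x[i] for i in range(3)) - b))
```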

Constructing New Weighted l1-Algorithms for the Sparsest Points of Polyhedral Sets

The l0-minimization problem that seeks the sparsest point of a polyhedral set is a longstanding and challenging problem in the fields of signal and image processing, numerical linear algebra, and mathematical optimization. The weighted l1-method is one of the most plausible methods for solving this problem. In this paper, we develop a new weighted l1-method through … Read more
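
As a point of reference, a common weighted l1 scheme for P = {x : Ax = b, x >= 0} alternates an LP solve with the classical Candès–Wakin reweighting w_i = 1/(|x_i| + eps); the paper's contribution is a new, different weight construction. Problem data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Weighted l1 heuristic for the sparsest point of P = {x : Ax = b, x >= 0}.
rng = np.random.default_rng(2)
m, n = 10, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.random(3) + 0.5   # sparse nonneg. point
b = A @ x_true

w, eps = np.ones(n), 1e-3
for _ in range(5):
    # On the nonnegative orthant, min ||Wx||_1 s.t. Ax = b, x >= 0 is the LP min w@x.
    res = linprog(c=w, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    x = res.x
    w = 1.0 / (np.abs(x) + eps)       # upweight coordinates that are near zero

print("recovered support size:", int((x > 1e-6).sum()))
```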

Linearized Alternating Direction Method of Multipliers via Positive-Indefinite Proximal Regularization for Convex Programming

The alternating direction method of multipliers (ADMM) is being widely used for various convex minimization models with separable structures arising in a variety of areas. In the literature, the proximal version of ADMM, which allows ADMM's subproblems to be proximally regularized, has been well studied. In particular, the linearized version of ADMM is obtained when the … Read more
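
A sketch of the linearized ADMM the abstract refers to, on a hypothetical l1-regularized least-squares model: the x-subproblem's quadratic coupling is linearized and proximally regularized with parameter tau, turning the update into one soft-thresholding step. The conservative choice tau >= beta*||A^T A|| shown here is the classical requirement; the paper's point is that a smaller, "positive-indefinite" tau still suffices.

```python
import numpy as np

# Linearized ADMM sketch for  min_x  mu*||x||_1 + 0.5*||y - b||^2  s.t.  Ax = y.
rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n))
b = A @ (np.sign(rng.standard_normal(n)) * (rng.random(n) < 0.05)) \
    + 0.01 * rng.standard_normal(m)

mu, beta = 0.1, 1.0
tau = beta * np.linalg.norm(A, 2) ** 2        # conservative classical choice
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y, lam = np.zeros(n), np.zeros(m), np.zeros(m)
for _ in range(500):
    g = beta * A.T @ (A @ x - y + lam / beta) # gradient of the linearized coupling
    x = soft(x - g / tau, mu / tau)           # single soft-thresholding step
    y = (b + lam + beta * (A @ x)) / (1.0 + beta)
    lam += beta * (A @ x - y)

print("nnz(x):", int((np.abs(x) > 1e-6).sum()))
```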

Faster Alternating Direction Method of Multipliers with a Worst-case O(1/n^2) Convergence Rate

The alternating direction method of multipliers (ADMM) is being widely used for various convex programming models with separable structures arising in many scientific computing areas. The ADMM’s worst-case O(1/n) convergence rate measured by the iteration complexity has been established in the literature when its penalty parameter is a constant, where n is the iteration … Read more
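
One standard route to rates faster than O(1/n) is to let the penalty parameter grow across iterations when part of the objective is strongly convex; the sketch below illustrates that idea on a toy two-block model and is not the paper's exact rule or analysis.

```python
import numpy as np

# Two-block ADMM with an increasing penalty (schematic):
#   min  0.5*||x - c||^2 + mu*||y||_1   s.t.  x = y
rng = np.random.default_rng(4)
n, mu = 50, 0.3
c = rng.standard_normal(n)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y, lam = np.zeros(n), np.zeros(n), np.zeros(n)
for k in range(200):
    beta = 1.0 + 0.5 * k                       # linearly increasing penalty
    x = (c - lam + beta * y) / (1.0 + beta)    # strongly convex x-subproblem
    y = soft(x + lam / beta, mu / beta)        # shrinkage y-subproblem
    lam += beta * (x - y)                      # multiplier update

print("constraint violation:", np.linalg.norm(x - y))
```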

Complexity bounds for primal-dual methods minimizing the model of objective function

We equip the Frank-Wolfe ($\equiv$ Conditional Gradient) method with a convergence analysis that allows one to approach a primal-dual solution of a convex optimization problem with a composite objective function. Additional properties of the complementary part of the objective (strong convexity) significantly accelerate the scheme. We also justify a new variant of this method, which can be seen as a trust-region … Read more
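
The primal-dual flavor of Frank-Wolfe comes for free from its gap: at each iterate, <grad f(x_k), x_k - s_k> upper-bounds f(x_k) - f(x*), where s_k is the linear minimization oracle's answer. A minimal sketch on the probability simplex with a hypothetical quadratic objective:

```python
import numpy as np

# Frank-Wolfe on the simplex with its built-in duality-gap certificate.
# Toy objective f(x) = 0.5*||x - c||^2 (hypothetical data).
rng = np.random.default_rng(5)
n = 20
c = rng.standard_normal(n)
grad = lambda x: x - c

x = np.ones(n) / n                           # start at the barycenter
for k in range(1, 201):
    g = grad(x)
    s = np.zeros(n); s[np.argmin(g)] = 1.0   # LMO over the simplex: a vertex
    gap = g @ (x - s)                        # certifies f(x) - f(x*) <= gap
    if gap < 1e-6:
        break
    x += (2.0 / (k + 2)) * (s - x)           # standard step size 2/(k+2)

print("iterations:", k, " final gap:", gap)
```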

Multiple cuts in separating plane algorithms

This paper presents an extended version of separating plane algorithms for subgradient-based, finite-dimensional, nondifferentiable convex black-box optimization. The extension introduces additional cuts for the epigraph of the conjugate of the objective function, which improve the convergence of the algorithm. The case of affine cuts is considered in more detail, and it is shown that it requires … Read more
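
For orientation, the affine-cut machinery underlying such methods is the classical cutting-plane model: each subgradient of f at y yields the minorant f(y) + <g, x - y>. The sketch below is that generic Kelley-style loop on a hypothetical one-dimensional function; the paper's algorithm instead builds its cuts for the epigraph of the conjugate function.

```python
import numpy as np
from scipy.optimize import linprog

# Kelley-style cutting-plane loop: minimize the piecewise-affine model of f
# built from subgradient cuts  t >= f(y) + g*(x - y),  over a box.
f = lambda x: np.abs(x - 1.0) + 0.5 * np.abs(x + 2.0)       # 1-D nonsmooth convex
subgrad = lambda x: np.sign(x - 1.0) + 0.5 * np.sign(x + 2.0)

cuts = []                       # each cut: (g, f(y) - g*y), i.e. t >= g*x + const
x = 3.0
for _ in range(30):
    g = subgrad(x)
    cuts.append((g, f(x) - g * x))
    # LP over variables (x, t): min t  s.t.  g_i*x - t <= -c_i,  -10 <= x <= 10
    A_ub = [[gi, -1.0] for gi, _ in cuts]
    b_ub = [-ci for _, ci in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-10, 10), (None, None)])
    x = res.x[0]                # next query point: the model minimizer

print("model minimizer:", x, " f(x):", f(x))
```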

Fréchet inequalities via convex optimization

Quantifying the risk carried by an aggregate position $S_d := \sum_{i=1}^d X_i$ comprising many risk factors $X_i$ is fundamental to both insurance and financial risk management. Fréchet inequalities quantify the worst-case risk carried by the aggregate position given distributional information concerning its composing factors but without assuming independence. This marginal factor modeling of the aggregate position in … Read more
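
A tiny instance of a Fréchet bound as a convex (here linear) program, under the assumption of discrete marginals on a small grid: maximize P(X + Y >= t) over all joint distributions whose row and column sums match the given marginals, with no independence assumed. Grid, marginals, and threshold are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case P(X + Y >= t) over all couplings of two fixed discrete marginals.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 2.0])
p = np.array([0.5, 0.3, 0.2])        # marginal of X
r = np.array([0.4, 0.4, 0.2])        # marginal of Y
t = 3.0

n_x, n_y = len(xs), len(ys)
# Objective: maximize total joint mass on cells with x_i + y_j >= t.
cost = -np.array([1.0 if xi + yj >= t else 0.0 for xi in xs for yj in ys])

# Marginal constraints: row sums equal p, column sums equal r.
A_eq, b_eq = [], []
for i in range(n_x):                 # sum_j q_ij = p_i
    row = np.zeros(n_x * n_y); row[i * n_y:(i + 1) * n_y] = 1.0
    A_eq.append(row); b_eq.append(p[i])
for j in range(n_y):                 # sum_i q_ij = r_j
    col = np.zeros(n_x * n_y); col[j::n_y] = 1.0
    A_eq.append(col); b_eq.append(r[j])

res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n_x * n_y))
print("worst-case P(X + Y >= 3):", -res.fun)
```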

A Simplified Form of Block-Iterative Operator Splitting, and an Asynchronous Algorithm Resembling the Multi-Block ADMM

This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computing model inspired by modern HPC environments, which consist of interconnected nodes each having … Read more
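
For reference, the synchronous multi-block scheme such an algorithm resembles is consensus ADMM for minimizing a sum of functions f_i(x), where each node holds a local copy and a local dual while a coordinator averages; in an asynchronous variant only a subset of nodes would update per round. The quadratic f_i below are hypothetical.

```python
import numpy as np

# Synchronous consensus ADMM for  min_x  sum_i 0.5*||x - c_i||^2  (toy f_i).
rng = np.random.default_rng(6)
n_nodes, dim, beta = 5, 10, 1.0
c = [rng.standard_normal(dim) for _ in range(n_nodes)]

x = [np.zeros(dim) for _ in range(n_nodes)]   # local primal copies
u = [np.zeros(dim) for _ in range(n_nodes)]   # scaled local duals
z = np.zeros(dim)                             # consensus variable

for _ in range(100):
    for i in range(n_nodes):                  # node-local proximal steps
        x[i] = (c[i] + beta * (z - u[i])) / (1.0 + beta)
    z = np.mean([x[i] + u[i] for i in range(n_nodes)], axis=0)  # coordinator average
    for i in range(n_nodes):
        u[i] += x[i] - z                      # local dual updates

print("consensus error:", max(np.linalg.norm(x[i] - z) for i in range(n_nodes)))
```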

Efficient Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated, which converges to within a user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region … Read more
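
A minimal sketch consistent with the abstract's description, with hypothetical problem data: after each subgradient step, a bisection line search back toward a known strictly feasible point e restores feasibility, so no orthogonal projection is ever computed.

```python
import numpy as np

# Feasible-point subgradient method with a line-search feasibility restore:
#   min ||x - c||_1   s.t.  g(x) = ||x||^2 - 1 <= 0,   e = 0 strictly feasible.
rng = np.random.default_rng(7)
n = 10
c = rng.standard_normal(n) * 2.0
f = lambda x: np.abs(x - c).sum()
df = lambda x: np.sign(x - c)              # a subgradient of f
g = lambda x: x @ x - 1.0
e = np.zeros(n)                            # strictly feasible: g(e) = -1 < 0

x, best = e.copy(), np.inf
for k in range(1, 2001):
    y = x - (1.0 / np.sqrt(k)) * df(x)     # plain subgradient step
    if g(y) > 0:                           # infeasible: bisect on the segment [e, y]
        lo, hi = 0.0, 1.0                  # invariant: g(e + lo*(y - e)) <= 0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(e + mid * (y - e)) <= 0 else (lo, mid)
        y = e + lo * (y - e)               # feasible point near the boundary
    x = y                                  # every iterate stays feasible
    best = min(best, f(x))

print("best feasible value:", best)
```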