On the Construction of Converging Hierarchies for Polynomial Optimization Based on Certificates of Global Positivity

In recent years, techniques based on convex optimization and real algebra that produce converging hierarchies of lower bounds for polynomial minimization problems have gained much popularity. At their heart, these hierarchies rely crucially on Positivstellensätze from the late 20th century (e.g., due to Stengle, Putinar, or Schmüdgen) that certify positivity of a polynomial on an … Read more
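For reference, one certificate of the kind these hierarchies build on is Putinar's Positivstellensatz (stated here in generic notation, not taken from the paper): if the quadratic module generated by $g_1,\dots,g_m$ is Archimedean and $p$ is positive on $\{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$, then
\[ p = \sigma_0 + \sum_{i=1}^m \sigma_i g_i \]
for some sum-of-squares polynomials $\sigma_0,\dots,\sigma_m$; bounding the degrees of the $\sigma_i$ gives the successive levels of the corresponding hierarchy.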

Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

It is well-known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure and that therein lies the computational bottleneck for SOS programs, as the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a … Read more
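As a small illustration of this reduction (a minimal sketch assuming the cvxpy package; not code from the paper), checking whether $p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1$ is a sum of squares amounts to finding a positive semidefinite Gram matrix $Q$ with $p(x) = z(x)^\top Q z(x)$ for $z(x) = (1, x, x^2)$, which is a small SDP feasibility problem:

    # A minimal sketch (assuming the cvxpy package): deciding whether
    # p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1 is a sum of squares via the SDP
    # p(x) = z(x)^T Q z(x), z(x) = (1, x, x^2), Q symmetric and PSD.
    import cvxpy as cp

    Q = cp.Variable((3, 3), symmetric=True)
    constraints = [
        Q >> 0,                      # Gram matrix is positive semidefinite
        Q[0, 0] == 1,                # matches the coefficient of x^0
        2 * Q[0, 1] == 2,            # matches the coefficient of x^1
        2 * Q[0, 2] + Q[1, 1] == 3,  # matches the coefficient of x^2
        2 * Q[1, 2] == 2,            # matches the coefficient of x^3
        Q[2, 2] == 1,                # matches the coefficient of x^4
    ]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()
    print(problem.status)  # "optimal" here means an SOS decomposition exists

Here $Q$ is only $3 \times 3$, but in general the Gram matrix is indexed by all monomials up to half the degree, which is exactly why these SDPs grow quickly with the degree and the number of variables.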

On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting

The primal-dual hybrid gradient (PDHG) algorithm proposed by Esser, Zhang, and Chan, and by Pock, Cremers, Bischof, and Chambolle is known to include as a special case the Douglas-Rachford splitting algorithm for minimizing the sum of two convex functions. We show that, conversely, the PDHG algorithm can be viewed as a special case of the … Read more
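For concreteness, in one standard form (generic notation, not necessarily that of the paper), PDHG applied to $\min_x g(x) + f(Ax)$ iterates
\[ x^{k+1} = \operatorname{prox}_{\tau g}\big(x^k - \tau A^\top y^k\big), \qquad y^{k+1} = \operatorname{prox}_{\sigma f^*}\big(y^k + \sigma A(2x^{k+1} - x^k)\big), \]
and the known specialization to Douglas-Rachford splitting is obtained when $A = I$ (with a suitable choice of the stepsizes $\tau$ and $\sigma$).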

Uniqueness and Multiplicity of Market Equilibria on DC Power Flow Networks

We consider uniqueness and multiplicity of market equilibria in a short-run setup where traded quantities of electricity are transported through a capacitated network in which power flows have to satisfy the classical lossless DC approximation. The firms face fluctuating demand and decide on their production, which is constrained by given capacities. Today, uniqueness of such … Read more
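For reference, the lossless DC approximation (in generic notation, not the paper's) models the flow on a line $(i,j)$ as proportional to the difference of the voltage angles at its endpoints and bounds it by the line capacity:
\[ f_{ij} = B_{ij}\,(\theta_i - \theta_j), \qquad |f_{ij}| \le \bar{f}_{ij}. \]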

Balancing Communication and Computation in Distributed Optimization

Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains including machine learning, robotics and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every several iterations) of a distributed algorithm, each node in … Read more
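As a schematic illustration of these two components (a minimal sketch in the spirit of decentralized gradient descent; the network, objectives, and stepsize below are made up for illustration and are not from the paper), each node alternates a local computation step with a communication step that averages its iterate with those of its neighbors:

    # A minimal sketch (not from the paper) of the two components of a
    # decentralized gradient method on a 3-node network: a communication step
    # (averaging with neighbors through a mixing matrix W) and a computation
    # step (a local gradient step on each node's own objective).
    import numpy as np

    W = np.array([[0.50, 0.25, 0.25],    # doubly stochastic mixing matrix
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    targets = np.array([1.0, 2.0, 6.0])  # node i minimizes 0.5 * (x - targets[i])**2
    x = np.zeros(3)                      # one local iterate per node
    step = 0.1

    for k in range(200):
        grads = x - targets              # computation: local gradients
        x = W @ x - step * grads         # communication: neighbor averaging, then step

    print(x)  # all nodes end up near mean(targets) = 3.0 (exact consensus
              # would require a diminishing stepsize)

Balancing how much effort goes into the averaging (communication) step versus the local gradient (computation) step is the trade-off at issue.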

Manifold Sampling for Optimization of Nonconvex Functions that are Piecewise Linear Compositions of Smooth Components

We develop a manifold sampling algorithm for the minimization of a nonsmooth composite function $f := \psi + h \circ F$ when $\psi$ is smooth with known derivatives, $h$ is a known, nonsmooth, piecewise linear function, and $F$ is smooth but expensive to evaluate. The trust-region algorithm classifies points in the domain of $h$ as … Read more
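An illustrative instance of this structure (not necessarily one considered in the paper) is $h = \|\cdot\|_1$, giving
\[ f(x) = \psi(x) + \|F(x)\|_1 = \psi(x) + \sum_i |F_i(x)|, \]
where the nonsmoothness enters only through the piecewise linear outer function $h$.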

On the Optimal Proximal Parameter of an ADMM-like Splitting Method for Separable Convex Programming

An ADMM-based splitting method is proposed in [11] for solving convex minimization problems with linear constraints and multi-block separable objective functions; however, a relatively large proximal parameter is required to theoretically ensure its convergence. In this paper, we further study this method and find its optimal (smallest) proximal parameter. For succinctness, we focus on the … Read more
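The multi-block separable model referred to is, in generic notation,
\[ \min_{x_1,\dots,x_m} \ \sum_{i=1}^m \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^m A_i x_i = b, \qquad x_i \in \mathcal{X}_i, \ i = 1,\dots,m, \]
with each $\theta_i$ convex; the proximal parameter enters through the regularization terms added to the decomposed subproblems.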

Optimal Linearized Alternating Direction Method of Multipliers for Convex Programming

The alternating direction method of multipliers (ADMM) is widely used in a variety of areas; its variants, tailored to different application scenarios, have also been studied extensively in the literature. Among them, the linearized ADMM has received particularly wide attention because of its efficiency and ease of implementation. To theoretically guarantee … Read more
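For reference (generic notation for the two-block case $\min \theta_1(x) + \theta_2(y)$ s.t. $Ax + By = b$, not necessarily the paper's), the linearized ADMM replaces the $x$-subproblem of ADMM by
\[ x^{k+1} = \arg\min_x \ \theta_1(x) + \big\langle \beta A^\top (A x^k + B y^k - b - \lambda^k/\beta),\, x \big\rangle + \frac{\tau}{2}\|x - x^k\|^2, \]
i.e., it linearizes the quadratic penalty at $x^k$ and adds a proximal term, so the subproblem often reduces to a proximal mapping of $\theta_1$; classically, convergence is guaranteed when $\tau I - \beta A^\top A \succeq 0$, and it is this proximal parameter $\tau$ that the theoretical guarantees concern.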

Linearized version of the generalized alternating direction method of multipliers for three-block separable convex minimization problem

Recently, the generalized alternating direction method of multipliers (GADMM) proposed by Eckstein and Bertsekas has received wide attention, especially in numerous applications. In this paper, we develop a new linearized version of the generalized alternating direction method of multipliers (L-GADMM) for linearly constrained separable convex programs whose objective functions are the sum of … Read more

Constraints reduction programming by subset selection: a study from numerical aspect

We consider a novel method, termed constraints reduction programming, which aims to reduce the number of constraints in an optimization model. The method is motivated by various applications in management and decision making, and has the potential to handle a wider range of applications. Due to the high combinatorial complexity of the underlying model, it is difficult to … Read more