Generalized ADMM with Optimal Indefinite Proximal Term for Linearly Constrained Convex Optimization

We consider the generalized alternating direction method of multipliers (ADMM) for linearly constrained convex optimization. Many problems arising from practical applications have shown that one of the subproblems in the generalized ADMM is usually hard to solve, so a special proximal term is added. In the literature, the proximal term can be indefinite, which plays … Read more
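
For orientation, a generic proximally regularized $x$-subproblem (a sketch of the common setup, not necessarily this paper's exact scheme) for $\min\{\theta_1(x)+\theta_2(y) : Ax+By=b\}$ reads

$$x^{k+1} = \arg\min_x \; \theta_1(x) + \frac{\beta}{2}\left\|Ax + By^k - b - \lambda^k/\beta\right\|^2 + \frac{1}{2}\|x - x^k\|_P^2,$$

where $\|x\|_P^2 = x^\top P x$. A common choice is $P = \tau I - \beta A^\top A$, which linearizes the quadratic penalty; allowing $P$ to be indefinite corresponds to a smaller $\tau$ and hence larger effective steps.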

“Relative-Continuity” for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent

The usual approach to developing and analyzing first-order methods for non-smooth (stochastic or deterministic) convex optimization assumes that the objective function is uniformly Lipschitz continuous with parameter $M_f$. However, in many settings the non-differentiable convex function $f(\cdot)$ is not uniformly Lipschitz continuous; consider, for example, (i) the classical support vector machine (SVM) problem, (ii) the … Read more
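
For context, the deterministic mirror descent step with reference function $h$ and Bregman divergence $D_h(y,x) = h(y) - h(x) - \langle \nabla h(x), y - x \rangle$ is

$$x_{k+1} = \arg\min_x \; \langle g_k, x \rangle + \tfrac{1}{t_k} D_h(x, x_k), \qquad g_k \in \partial f(x_k),$$

and a relative-continuity condition replaces the uniform bound $\|g_k\|_* \le M_f$ with a bound on subgradients measured relative to $h$ (our paraphrase of the general idea).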

Convergence Rate of Gradient-Based Adaptive Restart for Accelerated Gradient Schemes

The accelerated gradient algorithm is known to exhibit non-monotonic, periodic convergence behavior in the high-momentum regime. If important function parameters, such as the condition number, are known, the momentum can be tuned to achieve linear convergence. Unfortunately, these parameters are usually not accessible, so heuristics are used instead to decide when to restart. One of … Read more
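
A minimal sketch of one such heuristic, the gradient-based restart of O'Donoghue and Candes, applied to a Nesterov-type scheme (the momentum schedule and fallback step below are illustrative choices, not this paper's algorithm):

import numpy as np

def agd_gradient_restart(grad, x0, step, n_iters):
    # grad: callable returning the gradient of f
    # step: fixed step size, e.g. 1/L for an L-smooth f
    x_prev = x0.copy()
    x = x0.copy()
    theta = 1.0
    for _ in range(n_iters):
        theta_next = (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2)) / 2.0
        beta = (theta - 1.0) / theta_next
        y = x + beta * (x - x_prev)        # extrapolation (momentum)
        g = grad(y)
        x_next = y - step * g              # gradient step at y
        # Gradient-based restart test: if the gradient at y makes an
        # acute angle with the step taken, momentum is fighting descent.
        if g @ (x_next - x) > 0.0:
            theta_next = 1.0               # reset momentum
            x_next = x - step * grad(x)    # fall back to a plain step
        x_prev, x, theta = x, x_next, theta_next
    return x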

Response to “Counterexample to global convergence of DSOS and SDSOS hierarchies”

In a recent note [8], the author provides a counterexample to the global convergence of what his work refers to as “the DSOS and SDSOS hierarchies” for polynomial optimization problems (POPs) and purports that this refutes claims in our extended abstract [4] and slides in [3]. The goal of this paper is to clarify that … Read more

Derivative-Free Robust Optimization by Outer Approximations

We develop an algorithm for minimax problems that arise in robust optimization in the absence of objective function derivatives. The algorithm extends methods for inexact outer approximation to sample an uncertainty set of potentially infinite cardinality. Clarke stationarity of the algorithm's output is established, alongside desirable features of the model-based trust-region subproblems encountered. We … Read more
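
In outline (our notation, not necessarily the paper's), the robust problem is $\min_{x \in X} \max_{u \in U} f(x, u)$, and an outer-approximation scheme replaces the inner maximization over the full uncertainty set $U$ by a growing finite sample $U_k \subseteq U$: at iteration $k$ one (approximately) solves $\min_{x \in X} \max_{u \in U_k} f(x, u)$ and then enlarges $U_k$ with scenarios at which the current iterate performs worst.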

On the Construction of Converging Hierarchies for Polynomial Optimization Based on Certificates of Global Positivity

In recent years, techniques based on convex optimization and real algebra that produce converging hierarchies of lower bounds for polynomial minimization problems have gained much popularity. At their heart, these hierarchies rely crucially on Positivstellensätze from the late 20th century (e.g., due to Stengle, Putinar, or Schmüdgen) that certify positivity of a polynomial on an … Read more
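
One such certificate is Putinar's Positivstellensatz (stated informally): if $S = \{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ satisfies the Archimedean condition and $p > 0$ on $S$, then $p = \sigma_0 + \sum_{i=1}^m \sigma_i g_i$ for some sum-of-squares polynomials $\sigma_0, \dots, \sigma_m$. Capping the degrees of the $\sigma_i$ at $2r$ and maximizing $\gamma$ such that $p - \gamma$ admits such a representation gives level $r$ of a converging hierarchy of lower bounds.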

Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

It is well-known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure and that therein lies the computational bottleneck for SOS programs, as the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a … Read more
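
Concretely (the standard construction, not specific to this paper): a polynomial $p$ of degree $2d$ in $n$ variables is SOS if and only if $p(x) = z(x)^\top Q z(x)$ for some $Q \succeq 0$, where $z(x)$ is the vector of all monomials of degree at most $d$. Matching coefficients yields linear constraints on $Q$, so the resulting SDP has a matrix variable of size $\binom{n+d}{d} \times \binom{n+d}{d}$, which grows rapidly in $n$ and $d$.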

On the equivalence of the primal-dual hybrid gradient method and Douglas-Rachford splitting

The primal-dual hybrid gradient (PDHG) algorithm proposed by Esser, Zhang, and Chan, and by Pock, Cremers, Bischof, and Chambolle is known to include as a special case the Douglas-Rachford splitting algorithm for minimizing the sum of two convex functions. We show that, conversely, the PDHG algorithm can be viewed as a special case of the … Read more
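
For reference, for $\min_x f(x) + g(Kx)$ one common form of the PDHG iteration (notation ours) is

$$x^{k+1} = \mathrm{prox}_{\tau f}\big(x^k - \tau K^\top y^k\big), \qquad y^{k+1} = \mathrm{prox}_{\sigma g^*}\big(y^k + \sigma K(2x^{k+1} - x^k)\big),$$

and the Douglas-Rachford special case is recovered when $K = I$ with suitably matched step sizes.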

Uniqueness and Multiplicity of Market Equilibria on DC Power Flow Networks

We consider uniqueness and multiplicity of market equilibria in a short-run setup where traded quantities of electricity are transported through a capacitated network in which power flows have to satisfy the classical lossless DC approximation. The firms face fluctuating demand and decide on their production, which is constrained by given capacities. Today, uniqueness of such … Read more
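
In the lossless DC approximation (standard form; notation ours), the flow on a line $(i, j)$ is $f_{ij} = B_{ij}(\theta_i - \theta_j)$ for voltage angles $\theta$, subject to line capacities $|f_{ij}| \le \bar{f}_{ij}$ and nodal balance $p_i - d_i = \sum_j f_{ij}$, where $p_i$ and $d_i$ are production and demand at node $i$.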

Balancing Communication and Computation in Distributed Optimization

Methods for distributed optimization have received significant attention in recent years owing to their wide applicability in various domains, including machine learning, robotics, and sensor networks. A distributed optimization method typically consists of two key components: communication and computation. More specifically, at every iteration (or every few iterations) of a distributed algorithm, each node in … Read more
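
As a minimal sketch of this two-component structure, one iteration of decentralized gradient descent, used here as a stand-in (the mixing matrix W, local gradients, and step size are illustrative assumptions, not this paper's method):

import numpy as np

def dgd_step(X, W, grads, alpha):
    # X:     (n_nodes, dim) array; row i is node i's current iterate
    # W:     doubly stochastic mixing matrix with nonzero entries
    #        only between neighboring nodes
    # grads: list of callables; grads[i] is node i's local gradient
    # alpha: step size
    X_mixed = W @ X    # communication: each node averages with its neighbors
    G = np.stack([grads[i](X[i]) for i in range(X.shape[0])])
    return X_mixed - alpha * G    # computation: local gradient step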