The Asynchronous PALM Algorithm for Nonsmooth Nonconvex Problems

We introduce the Asynchronous PALM algorithm, a new extension of the Proximal Alternating Linearized Minimization (PALM) algorithm for solving nonconvex nonsmooth optimization problems. As in PALM, each step of the Asynchronous PALM algorithm updates a single block of coordinates; but unlike PALM, it eliminates the need for sequential updates … Read more
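As a point of reference, the synchronous PALM block updates can be sketched for a simple model problem. This is a minimal illustration, not the paper's asynchronous method: it assumes the objective $\lambda\|x\|_1 + \lambda\|y\|_1 + \tfrac12\|Ax + By - b\|^2$, where the two nonsmooth blocks are handled by alternating proximal-gradient steps with block Lipschitz step sizes.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def palm(A, B, b, lam=0.1, iters=200, seed=0):
    """Synchronous PALM sketch for
    min lam*||x||_1 + lam*||y||_1 + 0.5*||A x + B y - b||^2,
    alternating a proximal-gradient step on each block."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    y = rng.standard_normal(B.shape[1])
    # block Lipschitz constants of the partial gradients of the coupling term
    Lx = np.linalg.norm(A.T @ A, 2)
    Ly = np.linalg.norm(B.T @ B, 2)
    for _ in range(iters):
        r = A @ x + B @ y - b
        x = soft_threshold(x - (A.T @ r) / Lx, lam / Lx)
        r = A @ x + B @ y - b          # refresh residual after the x step
        y = soft_threshold(y - (B.T @ r) / Ly, lam / Ly)
    return x, y
```

The second residual refresh is exactly the sequential dependence between block updates that the asynchronous variant is designed to relax.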

Error bounds, quadratic growth, and linear convergence of proximal methods

We show that the error bound property, postulating that the step lengths of the proximal gradient method linearly bound the distance to the solution set, is equivalent to a standard quadratic growth condition. We exploit this equivalence in an analysis of asymptotic linear convergence of the proximal gradient algorithm for structured problems, which lack … Read more
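The quantity in question is easy to observe numerically. The sketch below (an illustration, not the paper's analysis) runs the proximal gradient method on a small lasso problem $\tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$ and records the step lengths $\|x_{k+1}-x_k\|$, the computable proxy that the error bound property ties to the distance to the solution set.

```python
import numpy as np

def prox_grad_lasso(A, b, lam, iters=500):
    """Proximal gradient method on 0.5*||Ax-b||^2 + lam*||x||_1,
    returning the final iterate and the step lengths ||x_{k+1} - x_k||."""
    L = np.linalg.norm(A.T @ A, 2)         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    steps = []
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        v = x - g / L
        # soft-thresholding = prox of (lam/L) * ||.||_1
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        steps.append(np.linalg.norm(x_new - x))
        x = x_new
    return x, steps
```

On well-structured instances the recorded step lengths shrink geometrically, consistent with the linear convergence the equivalence is used to establish.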

Gap functions for quasi-equilibria

An approach for solving quasi-equilibrium problems (QEPs) is proposed relying on gap functions, which allow reformulating QEPs as global optimization problems. The (generalized) smoothness properties of a gap function are analysed and an upper estimate of its Clarke directional derivative is given. Monotonicity assumptions on both the equilibrium and constraining bifunctions are a key tool … Read more
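The gap-function idea can be shown on the simplest special case. The toy sketch below (my illustration, not the paper's construction) takes the variational-inequality bifunction $f(x,y) = F(x)(y-x)$ on a one-dimensional set $C$ and evaluates the gap $\varphi(x) = \max_{y \in C} F(x)(x-y)$, which is nonnegative and vanishes exactly at solutions, turning the equilibrium problem into a global minimization of $\varphi$.

```python
import numpy as np

def vi_gap(F, C_grid, x):
    """Gap function for the variational-inequality special case of an
    equilibrium problem, f(x, y) = F(x) * (y - x):
    phi(x) = max_{y in C} F(x) * (x - y), so phi(x) = 0 iff x solves it.
    The set C is approximated by a finite grid of points."""
    return max(F(x) * (x - y) for y in C_grid)

# Toy instance on C = [0, 1] with F(x) = x - 0.5; the solution is x = 0.5.
C_grid = np.linspace(0.0, 1.0, 101)
F = lambda x: x - 0.5
```

For a QEP the feasible set itself depends on $x$, which is why the paper works with constraining bifunctions rather than a fixed $C$ as here.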

Nash Equilibrium in a Pay-as-bid Electricity Market: Part 2 – Best Response of a Producer

We consider a multi-leader-common-follower model of a pay-as-bid electricity market in which the producers provide the regulator with either linear or quadratic bids. We prove that for a given producer only linear bids can maximise his profit. Such linear bids are referred to as the “best response” of the given producer. They are obtained assuming the … Read more

Nash Equilibrium in a Pay-as-bid Electricity Market: Part 1 – Existence and Characterisation

We consider a model of a pay-as-bid electricity market based on a multi-leader-common-follower approach where the producers as leaders are at the upper level and the regulator as a common follower is at the lower level. We fully characterise Nash equilibria for this model by describing necessary and sufficient conditions for their existence as well … Read more

Variational Analysis of the Crouzeix Ratio

Let $W(A)$ denote the field of values (numerical range) of a matrix $A$. For any polynomial $p$ and matrix $A$, define the Crouzeix ratio to have numerator $\max\left\{|p(\zeta)|:\zeta\in W(A)\right\}$ and denominator $\|p(A)\|_2$. M.~Crouzeix’s 2004 conjecture postulates that the globally minimal value of the Crouzeix ratio is $1/2$, over all polynomials $p$ of any degree and … Read more
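The Crouzeix ratio is straightforward to evaluate numerically, which helps make the conjecture concrete. The sketch below (an illustration under standard numerical-range techniques, not the paper's variational analysis) samples the boundary of $W(A)$ by the usual rotation trick, computing the top eigenvector of the Hermitian part of $e^{-it}A$ for each angle $t$.

```python
import numpy as np

def fov_boundary(A, n_angles=360):
    """Boundary points of the field of values W(A): for each angle t,
    the top eigenvector v of the Hermitian part of e^{-it} A gives the
    boundary point v* A v."""
    pts = []
    for t in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(-1j * t) * A + np.exp(1j * t) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                       # eigenvector for the largest eigenvalue
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

def matrix_polyval(coeffs, A):
    """Evaluate p(A) by Horner's rule (np.polyval is elementwise only).
    Coefficients are highest-degree first, as in np.polyval."""
    P = np.zeros_like(A)
    I = np.eye(A.shape[0], dtype=A.dtype)
    for c in coeffs:
        P = P @ A + c * I
    return P

def crouzeix_ratio(coeffs, A):
    """max_{z in W(A)} |p(z)| divided by ||p(A)||_2, with the maximum
    sampled on the boundary of W(A) (where |p| attains it)."""
    num = np.abs(np.polyval(coeffs, fov_boundary(A))).max()
    den = np.linalg.norm(matrix_polyval(coeffs, A), 2)
    return num / den
```

For a normal matrix the ratio is at least $1$, while $p(z) = z$ on the $2\times 2$ Jordan block attains the conjectured global minimum $1/2$.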

Approximate Versions of the Alternating Direction Method of Multipliers

We present three new approximate versions of the alternating direction method of multipliers (ADMM), all of which require only knowledge of subgradients of the subproblem objectives, rather than bounds on the distance to the exact subproblem solution. One version, which applies only to certain common special cases, is based on combining the operator-splitting analysis of the … Read more
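For context, the exact method being approximated looks as follows on a lasso splitting. This is a standard scaled-form ADMM sketch, not one of the paper's inexact variants, which would replace the exact subproblem solves with subgradient-based steps.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Exact scaled-form ADMM for
    min 0.5*||Ax-b||^2 + lam*||z||_1  subject to  x = z.
    Each subproblem is solved exactly: a linear solve for x and a
    soft-thresholding step for z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # small n: factor once
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # exact x-subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z                              # scaled dual update
    return z
```

The appeal of the subgradient-based approximation criteria is precisely that the linear solve above can be replaced by a cheap inexact step without needing to bound the distance to the exact minimizer.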

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first … Read more
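The base method being generalized can be sketched briefly. This is plain CG for the unregularized strongly convex QP (not the paper's GCG methods): it minimizes $\tfrac12 x^TQx + c^Tx$ by solving $Qx = -c$, and in exact arithmetic terminates in at most $n$ iterations, the finite-convergence property the paper extends to the $\ell_1$-regularized setting.

```python
import numpy as np

def conjugate_gradient(Q, c, tol=1e-10, max_iter=None):
    """Plain CG for min 0.5*x^T Q x + c^T x with Q symmetric positive
    definite, i.e. for the linear system Q x = -c."""
    n = len(c)
    x = np.zeros(n)
    r = -c - Q @ x                    # residual of Q x = -c
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):    # at most n steps in exact arithmetic
        Qp = Q @ p
        alpha = rs / (p @ Qp)
        x += alpha * p
        r -= alpha * Qp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new Q-conjugate search direction
        rs = rs_new
    return x
```

The nonsmooth $\ell_1$ term breaks this unconstrained picture, which is why extending finite termination to that setting requires the generalized machinery of the paper.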

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers such as $\ell_1$ suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We … Read more
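One concrete instance of the idea (my generic illustration, not necessarily the paper's exact model) is a partial $\ell_1$ regularizer that sums $|x_i|$ over all but the $r$ largest-magnitude entries, so the leading entries are left unpenalized and contribute no shrinkage bias:

```python
import numpy as np

def partial_l1(x, r):
    """Partial l1 regularizer: sum of |x_i| over all but the r
    largest-magnitude entries. For r = 0 this reduces to the usual
    (fully biased) l1 norm."""
    a = np.sort(np.abs(x))            # magnitudes in ascending order
    return a[:-r].sum() if r > 0 else a.sum()
```

Note that the set of penalized entries depends on $x$ itself, which makes the regularizer nonconvex even though each fixed-support restriction is a plain $\ell_1$ norm.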

First and second order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first and second order derivative vectors and matrices that represent a … Read more
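The first step toward an abs-normal form can be illustrated directly: every min and max is rewritten through the absolute value via $\min(a,b) = \tfrac12(a+b-|a-b|)$ and $\max(a,b) = \tfrac12(a+b+|a-b|)$, so that all nonsmoothness is concentrated in the arguments of $|\cdot|$, the so-called switching variables. A small sketch (my example function, not one from the paper):

```python
def f(x, y):
    # piecewise smooth function built from smooth pieces and min/max
    return max(x, min(x * y, y))

def f_abs_normal(x, y):
    """Same function with min/max rewritten via |.|:
    min(a,b) = (a+b-|a-b|)/2,  max(a,b) = (a+b+|a-b|)/2.
    The arguments z1, z2 of the absolute values are the switching
    variables whose signs select the smooth piece."""
    z1 = x * y - y                    # first switching variable
    m = (x * y + y - abs(z1)) / 2     # min(x*y, y)
    z2 = x - m                        # second switching variable
    return (x + m + abs(z2)) / 2      # max(x, m)
```

Once the function is in this form, extended algorithmic differentiation can propagate derivative information through the smooth parts while tracking the switching variables separately.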