A Primal-Dual Algorithm for Risk Minimization

In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach spaces. Such problems arise in many practical applications, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models, including coherent risk measures, the resulting risk-averse objective function is …
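
To see where the nonsmoothness comes from, consider the conditional value-at-risk (CVaR), a standard coherent risk measure. Below is a minimal NumPy sketch of its Rockafellar-Uryasev formula, in which the plus-function is precisely the nonsmooth ingredient; the sample size, confidence level, and the name cvar are illustrative assumptions, not taken from the paper.

    import numpy as np

    def cvar(losses, alpha=0.95):
        """Sample CVaR via the Rockafellar-Uryasev formula
        CVaR_a(X) = min_t { t + E[max(X - t, 0)] / (1 - a) };
        the plus-function max(., 0) is what makes risk-averse
        objectives built on CVaR nondifferentiable."""
        t = np.quantile(losses, alpha)  # the optimal t is the alpha-quantile (VaR)
        return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

    rng = np.random.default_rng(0)
    losses = rng.normal(size=100_000)
    print(cvar(losses))  # approx 2.06 for standard normal losses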

Local Convergence of the Method of Multipliers for Variational and Optimization Problems under the Sole Noncriticality Assumption

We present a local convergence analysis of the method of multipliers for equality-constrained variational problems (in the special case of optimization, also called the augmented Lagrangian method) under the sole assumption that the dual starting point is close to a noncritical Lagrange multiplier (which is weaker than second-order sufficiency). Local superlinear convergence is established under the …
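
The iteration being analyzed is the classical one: minimize the augmented Lagrangian in the primal variable, then apply the first-order multiplier update. A minimal sketch on a hypothetical equality-constrained toy problem (the objective, constraint, penalty parameter rho, and the name method_of_multipliers are assumptions made for illustration, not the paper's setting):

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: min x^2 + 2 y^2  subject to  x + y = 1
    f = lambda x: x[0]**2 + 2.0 * x[1]**2
    h = lambda x: np.array([x[0] + x[1] - 1.0])

    def method_of_multipliers(x, lam, rho=10.0, iters=20):
        for _ in range(iters):
            # Primal step: minimize the augmented Lagrangian
            # L_rho(x, lam) = f(x) + lam^T h(x) + (rho/2) ||h(x)||^2
            aug = lambda z: f(z) + lam @ h(z) + 0.5 * rho * h(z) @ h(z)
            x = minimize(aug, x).x
            # Dual step: first-order multiplier update
            lam = lam + rho * h(x)
        return x, lam

    x, lam = method_of_multipliers(np.zeros(2), np.zeros(1))
    print(x, lam)  # approx [2/3, 1/3] and lam approx -4/3

The paper's question is how fast this pair converges locally when the dual starting point lam is close to a noncritical multiplier.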

Blind Source Separation using Relative Newton Method combined with Smoothing Method of Multipliers

We study a relative optimization framework for quasi-maximum-likelihood blind source separation, with the relative Newton method as a particular instance. The structure of the Hessian allows fast approximate inversion. In the second part we present the Smoothing Method of Multipliers (SMOM) for minimization of a sum of pairwise maxima of smooth functions, in particular a sum of …
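
For orientation, updates in a relative framework are applied in the coordinates of the current estimate, i.e., multiplicatively. The sketch below shows the simpler relative-gradient instance of such a scheme; the nonlinearity tanh, the step size, and all names are illustrative assumptions, and the paper's relative Newton method instead computes the step from the structured Hessian, which admits fast approximate inversion.

    import numpy as np

    def relative_gradient_bss(X, mu=0.1, iters=500, g=np.tanh):
        """Quasi-ML separation by multiplicative (relative) updates
        W <- (I - mu * G) W, where G = E[g(y) y^T] - I is the
        gradient expressed in the coordinates of the current estimate.
        A relative Newton method would replace mu * G by a Newton
        step computed from the structured Hessian in the same coordinates."""
        n, T = X.shape
        W = np.eye(n)
        for _ in range(iters):
            Y = W @ X
            G = g(Y) @ Y.T / T - np.eye(n)
            W = (np.eye(n) - mu * G) @ W
        return W

    rng = np.random.default_rng(1)
    S = rng.laplace(size=(2, 10_000))   # super-Gaussian sources
    A = rng.normal(size=(2, 2))         # unknown mixing matrix
    W = relative_gradient_bss(A @ S)
    print(W @ A)  # ideally close to a scaled permutation matrix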

Smoothing Method of Multipliers for Sum-Max Problems

We study a nonsmooth unconstrained optimization problem whose objective includes a sum of pairwise maxima of smooth functions. Minimum $l_1$-norm approximation is a particular case of this problem. Combining ideas of Lagrange multipliers with a smooth approximation of the max-type function, we obtain a new kind of nonquadratic augmented Lagrangian. Our approach does not require artificial variables, and preserves the sparse structure …
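
One smoothing that fits this setting starts from max(a, b) = (a + b + |a - b|)/2 and replaces the absolute value by a smooth surrogate; since |t| = max(t, -t), the $l_1$-norm case is recovered as a special instance. A hedged NumPy sketch (the parameter eps and the function names are illustrative; the paper's actual construction couples such smoothing with Lagrange multipliers into a nonquadratic augmented Lagrangian rather than using plain smoothing alone):

    import numpy as np

    def smooth_max(a, b, eps=1e-3):
        """Smooth surrogate for max(a, b) = (a + b + |a - b|) / 2,
        with |.| replaced by sqrt((.)^2 + eps^2); the uniform
        approximation error is at most eps / 2."""
        return 0.5 * (a + b + np.sqrt((a - b)**2 + eps**2))

    def smooth_l1(x, eps=1e-3):
        """Smoothed l1 norm, using |t| = max(t, -t) componentwise."""
        return np.sum(smooth_max(x, -x, eps))

    x = np.array([0.5, -1.0, 0.0])
    print(smooth_l1(x), np.sum(np.abs(x)))  # close for small eps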