Tangencies and Polynomial Optimization

Given a polynomial function $f \colon \mathbb{R}^n \rightarrow \mathbb{R}$ and an unbounded basic closed semi-algebraic set $S \subset \mathbb{R}^n,$ in this paper we show that the conditions listed below are characterized exactly in terms of the so-called {\em tangency variety} of $f$ on $S$: (i) $f$ is bounded from below on $S;$ (ii) The … Read more
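
For orientation only (the paper works with a constrained version of the notion on $S$, whose precise definition is not reproduced in the excerpt), the tangency variety of $f$ in the unconstrained setting is commonly defined as the set of points where the level sets of $f$ are tangent to spheres centered at the origin:
\[
\Gamma(f) := \bigl\{\, x \in \mathbb{R}^n : \nabla f(x) \ \text{and}\ x \ \text{are linearly dependent} \,\bigr\}.
\]
Studying the behavior of $f$ along this variety (intersected with $S$ in the constrained case) is the mechanism behind characterizations such as (i) above.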

Subdifferentials and SNC property of scalarization functionals with uniform level sets and applications

This paper deals with necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets by using a well-known nonlinear scalarization functional with uniform level sets (called Gerstewitz’ functional in the literature). The primary objective of this work is to establish revised formulas for basic and singular subdifferentials of … Read more
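
For readers unfamiliar with the scalarization, the Gerstewitz (Tammer) functional with uniform level sets is usually written as follows; the notation here is one common normalization and may differ from the paper's. Given a proper closed set $C \subset Y$ in a linear space and a direction $k \in Y$ with $C + [0,\infty)k \subseteq C$,
\[
\varphi_{C,k}(y) := \inf\{\, t \in \mathbb{R} : y \in t k - C \,\}, \qquad y \in Y.
\]
Its sublevel sets are translates of $-C$ along $k$, namely $\{ y : \varphi_{C,k}(y) \le t \} = t k - C$, which is the "uniform level sets" property the title refers to.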

Active-set Newton methods and partial smoothness

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active-set structure underlies the design of accelerated local algorithms of Newton type. … Read more
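
The finite-identification phenomenon the abstract starts from can be observed on a toy problem. The following minimal sketch is purely illustrative (it is not code from the paper, and the data Q, c are random): it runs projected gradient on a bound-constrained convex quadratic and records when the set of active bounds stops changing.

    # Illustration only: projected gradient on min 0.5*x'Qx + c'x  s.t.  x >= 0.
    # Under strict complementarity the set of active bounds is identified after
    # finitely many iterations, typically long before the iterates converge.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    Q = rng.standard_normal((n, n))
    Q = Q.T @ Q + np.eye(n)              # symmetric positive definite Hessian
    c = rng.standard_normal(n)
    step = 1.0 / np.linalg.norm(Q, 2)    # 1/L step for the L-smooth objective

    x = np.ones(n)
    history = []
    for _ in range(500):
        x = np.maximum(x - step * (Q @ x + c), 0.0)   # project onto x >= 0
        history.append(tuple(np.flatnonzero(x == 0.0)))

    print("active bounds at the last iterate:", history[-1])
    print("first iteration with that active set:", history.index(history[-1]))

Once the active face is identified, a Newton-type method can be applied to the reduced problem on that face; partial smoothness is the framework that extends this picture beyond classical nonlinear programming.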

When a maximal angle among cones is nonobtuse

Principal angles between linear subspaces have been studied for their application to statistics, numerical linear algebra, and other areas. In 2005, Iusem and Seeger defined critical angles within a single convex cone as an extension of antipodality in a compact set. Then, in 2016, Seeger and Sossa extended that notion to two cones. This was … Read more
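
As a point of reference (normalization conventions vary, and the paper's may differ), the maximal angle between two closed convex cones $P, Q \subseteq \mathbb{R}^n$ is typically defined over unit vectors:
\[
\theta_{\max}(P,Q) := \max\bigl\{ \arccos \langle u, v\rangle : u \in P,\ v \in Q,\ \|u\| = \|v\| = 1 \bigr\}.
\]
The maximal angle is nonobtuse exactly when $\theta_{\max}(P,Q) \le \pi/2$, i.e. when $\langle u, v\rangle \ge 0$ for all $u \in P$ and $v \in Q$; taking $P = Q$ recovers the single-cone maximal angle of Iusem and Seeger.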

Towards an efficient Augmented Lagrangian method for convex quadratic programming

Interior point methods have attracted most of the attention in recent decades for solving large-scale convex quadratic programming problems. In this paper we take a different route as we present an augmented Lagrangian method for convex quadratic programming based on recent developments for nonlinear programming. In our approach, box constraints are penalized while … Read more
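
To fix notation only (the excerpt is cut off before the exact splitting is described, beyond the fact that the box constraints are penalized), a generic augmented Lagrangian iteration for a convex QP $\min_x \tfrac12 x^\top Q x + c^\top x$ with a penalized constraint block $c(x) = 0$ reads
\[
x^{k+1} \in \operatorname{argmin}_{x \in X}\ \tfrac12 x^\top Q x + c^\top x + (\lambda^k)^\top c(x) + \tfrac{\rho_k}{2}\,\|c(x)\|^2,
\qquad
\lambda^{k+1} = \lambda^k + \rho_k\, c(x^{k+1}),
\]
where $X$ collects the constraints kept explicit and inequalities are handled analogously (for instance via slacks or one-sided quadratic penalties). In the approach described above, the box constraints sit in the penalized block rather than in $X$.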

Forecasting conceivable interest rate market scenarios and significant losses on interest rate portfolios using mathematical optimization

This study proposes a mathematical optimization model that simultaneously forecasts interest rate market scenarios and significant losses on interest rate portfolios. The model includes three main components. A constraint condition is set using the Mahalanobis distance, which consists of innovation terms in a dynamic conditional correlation-generalized autoregressive conditional heteroscedasticity (DCC-GARCH) model that represent … Read more
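
One plausible reading of the constraint described above (the symbols here are introduced purely for illustration and are not taken from the paper) is a Mahalanobis-distance bound on the vector of DCC-GARCH innovations $z_t$ with conditional covariance matrix $H_t$:
\[
z_t^{\top} H_t^{-1} z_t \;\le\; \delta^2,
\]
which confines the forecast scenario to an ellipsoid of statistically "conceivable" joint moves; a natural companion objective would then search for large portfolio losses over that ellipsoid, though the excerpt is truncated before the remaining components are described.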

Intersection disjunctions for reverse convex sets

We present a framework to obtain valid inequalities for optimization problems constrained by a reverse convex set, which is defined as the set of points in a polyhedron that lie outside a given open convex set. We are particularly interested in cases where the closure of the convex set is either non-polyhedral, or is defined … Read more
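
Concretely, with $P \subseteq \mathbb{R}^n$ a polyhedron and $C \subseteq \mathbb{R}^n$ an open convex set, the feasible region in question is
\[
R \;:=\; P \setminus C \;=\; \{\, x \in P : x \notin C \,\},
\]
which is nonconvex in general. Any disjunction whose terms jointly cover $\mathbb{R}^n \setminus C$ is satisfied by every point of $R$, so disjunctive-programming machinery yields inequalities valid for $R$, in the spirit of intersection cuts; this is presumably the construction the title's "intersection disjunctions" alludes to, with the non-polyhedral case of $C$ being the interesting one.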

On High-order Model Regularization for Multiobjective Optimization

A p-order regularization method for finding weak stationary points of multiobjective optimization problems with constraints is introduced. Under Hölder conditions on the derivatives of the objective functions, complexity results are obtained that generalize properties recently proved for scalar optimization.
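
As a rough sketch of the kind of subproblem involved (the norms, scaling, and multiobjective coupling used in the paper are not reproduced here), a p-order regularization step minimizes, for each objective $f_j$, a model of the form
\[
m_j(x, s) \;:=\; T_{j,p}(x, s) \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1},
\]
where $T_{j,p}(x,\cdot)$ is the $p$-th order Taylor polynomial of $f_j$ at $x$ and $\sigma > 0$ is a regularization parameter adjusted across iterations. Hölder continuity of the $p$-th derivatives with exponent $\nu$ bounds the model error by a multiple of $\|s\|^{p+\nu}$, which is what drives the complexity bounds.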

Basis Pursuit Denoise with Nonsmooth Constraints

Level-set optimization formulations with data-driven constraints minimize a regularization functional subject to matching observations to a given error level. These formulations are widely used, particularly for matrix completion and sparsity promotion in data interpolation and denoising. The misfit level is typically measured in the l2 norm or another smooth metric. In this paper, we present … Read more
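
For concreteness, the classical smooth-constraint instance of such a level-set formulation is basis pursuit denoise,
\[
\min_{x}\ \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma,
\]
and the nonsmooth variants of interest replace the smooth misfit with constraints such as $\|Ax - b\|_1 \le \sigma$ or $\|Ax - b\|_\infty \le \sigma$; the specific constraints treated in the paper are cut off in the excerpt above.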

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

We introduce a sequential optimality condition for locally Lipschitz constrained nonsmooth optimization that is verifiable using derivative information alone and that holds even in the absence of any constraint qualification. The proposed sequential optimality condition is not only novel for nonsmooth problems, but brings new insights for the smooth case as well. We present a practical algorithm … Read more
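
For comparison with the smooth case mentioned in the abstract, the best-known sequential condition there is approximate KKT (AKKT): for $\min f(x)$ subject to $g_i(x) \le 0$, $i = 1,\dots,m$, a feasible point $x^*$ satisfies AKKT if there exist $x^k \to x^*$ and multipliers $\lambda^k \ge 0$ with
\[
\Bigl\| \nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k\, \nabla g_i(x^k) \Bigr\| \;\to\; 0
\quad\text{and}\quad
\min\{-g_i(x^k),\, \lambda_i^k\} \;\to\; 0 \ \ \text{for all } i,
\]
and such a condition holds at every local minimizer without any constraint qualification. The condition proposed in the paper plays an analogous role for locally Lipschitz data; its precise form is not reproduced in the excerpt.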