Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization

We consider the problem of minimizing composite functions of the form $f(g(x))+h(x)$, where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a smooth vector mapping. In addition, we assume that $g$ is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose …
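
For context, the finite-sum and expectation settings referred to above are typically written as

\[ g(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} g_i(x) \qquad \text{or} \qquad g(x) \;=\; \mathbb{E}_{\xi}\big[g_\xi(x)\big], \]

and the classical (deterministic) prox-linear method, which stochastic variants of this kind build on, linearizes $g$ inside $f$ and solves the convex subproblem

\[ x_{k+1} \in \arg\min_{x}\; f\big(g(x_k) + g'(x_k)(x - x_k)\big) + h(x) + \frac{1}{2t}\,\|x - x_k\|^2, \]

where $g'(x_k)$ is the Jacobian of $g$ and $t > 0$ is a proximal parameter. This is the standard step from the prox-linear literature, stated here as background; per the title, the paper's algorithms replace $g(x_k)$ and $g'(x_k)$ with variance-reduced stochastic estimates.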

New Bregman proximal type algorithms for solving DC optimization problems

Difference-of-Convex (DC) optimization problems have objective functions that are differences of two convex functions. Representative ways of solving these problems are the proximal DC algorithms, which require that the convex part of the objective function be L-smooth. In this article, we propose the Bregman Proximal DC Algorithm (BPDCA) for solving large-scale DC optimization …
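
As a sketch of the setting (standard DC notation, with details possibly differing from the paper): a DC program minimizes $\phi(x) = g_1(x) - g_2(x)$ with $g_1, g_2$ convex, and the proximal DC algorithm linearizes $g_2$ at the current iterate. A Bregman variant of this step, with a convex kernel $\psi$, replaces the squared Euclidean proximal term by the Bregman distance $D_\psi$:

\[ x_{k+1} \in \arg\min_{x}\; g_1(x) - \langle \xi_k, x \rangle + \frac{1}{\lambda} D_\psi(x, x_k), \qquad \xi_k \in \partial g_2(x_k), \]

where $D_\psi(x,y) = \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y\rangle$ and $\lambda > 0$ is a step size. Choosing $\psi = \frac{1}{2}\|\cdot\|^2$ recovers the classical proximal DC step; a non-quadratic kernel is what allows the L-smoothness requirement to be relaxed.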

Sums of Separable and Quadratic Polynomials

We study separable plus quadratic (SPQ) polynomials, i.e., polynomials that are the sum of univariate polynomials in different variables and a quadratic polynomial. Motivated by the fact that nonnegative separable and nonnegative quadratic polynomials are sums of squares, we study whether nonnegative SPQ polynomials are (i) the sum of a nonnegative separable and a nonnegative …
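
To make the definition concrete, an illustrative SPQ polynomial (our example, not one from the paper) is

\[ p(x_1, x_2, x_3) \;=\; \underbrace{(x_1^4 - 2x_1^3) + x_2^6 + x_3^2}_{\text{separable part}} \;+\; \underbrace{x_1 x_2 - 2 x_2 x_3 + x_3 + 1}_{\text{quadratic part}}, \]

and the decomposition question in (i) asks whether every nonnegative polynomial of this shape splits into a separable polynomial and a quadratic polynomial that are each nonnegative on their own.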

Radial Duality Part I: Foundations

Renegar (2016) introduced a novel approach for transforming generic conic optimization problems into unconstrained minimization of uniformly Lipschitz continuous functions. We introduce radial transformations generalizing these ideas, equipped with an entirely new motivation and development that avoids any reliance on convex cones or functions. Perhaps of greatest practical importance, this facilitates the development of new families of …
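
For orientation, one common way to write the radial transformation of a function $f$ with values in $(0, \infty]$ is

\[ f^{\Gamma}(y) \;=\; \sup\{\, v > 0 \;:\; v\, f(y/v) \le 1 \,\}, \]

and on a suitable class of functions the transformation is an involution, $f^{\Gamma\Gamma} = f$, which is what makes a duality theory possible. This formula is given only as background on the general idea; the paper develops its own definitions, and the precise form may differ.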

Radial Duality Part II: Applications and Algorithms

The first part of this work established the foundations of a radial duality between nonnegative optimization problems, inspired by the work of Renegar (2016). Here we utilize our radial duality theory to design and analyze projection-free optimization algorithms that operate by solving a radially dual problem. In particular, we consider radial subgradient, smoothing, and accelerated …

Parallel Strategies for Direct Multisearch

Direct Multisearch (DMS) is a class of Derivative-free Optimization algorithms suited for computing approximations to the complete Pareto front of a given Multiobjective Optimization problem. It has a well-supported convergence analysis, and simple implementations exhibit good numerical performance, both on academic test sets and in real applications. Recently, this numerical performance was improved with …
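
For context, the notion the Pareto front is built from: for objectives $F_1, \dots, F_m$ to be minimized, a point $x$ dominates $y$ when

\[ F_i(x) \le F_i(y) \ \text{ for all } i \qquad \text{and} \qquad F_j(x) < F_j(y) \ \text{ for some } j, \]

and the Pareto front is the image of the nondominated feasible points. DMS maintains a list of nondominated points and polls around them, which is the kind of structure the parallel strategies referred to above can exploit; this description is a simplification of the actual algorithm.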

An Accelerated Minimal Gradient Method with Momentum for Convex Quadratic Optimization

In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on Nesterov's well-known accelerated gradient method. At each iteration of our scheme, the new point is computed by performing a line search along a search direction given by a linear …
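
As a hedged illustration of the ingredients involved (not necessarily the paper's exact scheme): for $q(x) = \frac{1}{2} x^\top A x - b^\top x$ with $A \succ 0$, a minimal-gradient step with Nesterov-style momentum can be written as

\[ g_k = A y_k - b, \qquad \alpha_k = \frac{g_k^\top A g_k}{\|A g_k\|^2}, \qquad x_{k+1} = y_k - \alpha_k g_k, \qquad y_{k+1} = x_{k+1} + \beta_k\,(x_{k+1} - x_k), \]

where $\alpha_k$ minimizes the norm of the next gradient along $-g_k$ (the minimal gradient rule) and $\beta_k$ is a momentum parameter. The paper's method instead determines its step through the line-search scheme and search direction described above.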

Homogeneous polynomials and spurious local minima on the unit sphere

We consider degree-$d$ forms on the Euclidean unit sphere. We specialize to our setting a genericity result by Nie, obtained in a more general framework. We exhibit a homogeneous polynomial $\mathrm{Res}$ in the coefficients of $f$ such that, if $\mathrm{Res}(f)$ is nonzero, then all points that satisfy the first- and second-order necessary optimality conditions are …
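
For reference, the optimality conditions in question: for a degree-$d$ form $f$ on the unit sphere, a point $x$ with $\|x\| = 1$ satisfies the first- and second-order necessary conditions when

\[ \nabla f(x) = \lambda x \ \ \text{with } \lambda = x^\top \nabla f(x) = d\, f(x), \qquad y^\top\big(\nabla^2 f(x) - \lambda I\big)\, y \;\ge\; 0 \ \ \text{for all } y \perp x, \]

where the value of the multiplier follows from Euler's identity for homogeneous functions.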

An augmented Lagrangian method exploiting an active-set strategy and second-order information

In this paper, we consider nonlinear optimization problems with nonlinear equality constraints and bound constraints on the variables. For the solution of such problems, many augmented Lagrangian methods have been defined in the literature. Here, we propose to modify one of these algorithms, namely ALGENCAN by Andreani et al., in such a way as to incorporate …
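
A sketch of the standard augmented Lagrangian structure in this setting (ALGENCAN follows a safeguarded variant of this scheme; details may differ): for minimizing $f(x)$ subject to $c(x) = 0$ and $\ell \le x \le u$, each outer iteration approximately solves a bound-constrained subproblem and then updates the multipliers,

\[ L_\rho(x, \lambda) = f(x) + \lambda^\top c(x) + \frac{\rho}{2}\,\|c(x)\|^2, \qquad x_{k+1} \approx \arg\min_{\ell \le x \le u} L_{\rho_k}(x, \lambda_k), \qquad \lambda_{k+1} = \lambda_k + \rho_k\, c(x_{k+1}), \]

with the penalty parameter $\rho_k$ increased when feasibility does not improve sufficiently. The active-set strategy and second-order information mentioned above enter in how the bound-constrained subproblem is solved.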

A globally convergent trust-region LP-Newton method for nonsmooth functions under Hölder metric subregularity

We describe and analyse a globally convergent algorithm for finding a possibly nonisolated zero of a piecewise smooth mapping over a polyhedral set; this formulation includes Karush-Kuhn-Tucker (KKT) systems, variational inequality problems, and generalized Nash equilibrium problems. Our algorithm is based on a modification of the fast locally convergent Linear Programming (LP)-Newton method with a …
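
For context, the LP-Newton subproblem of Facchinei, Fischer, and Herrich, which the method described here builds on, computes the next iterate $x_{k+1}$ as the $x$-part of a solution of

\[ \min_{x \in \Omega,\ \gamma \ge 0} \ \gamma \quad \text{s.t.} \quad \big\|F(x_k) + G(x_k)(x - x_k)\big\|_\infty \le \gamma\, \|F(x_k)\|_\infty^2, \qquad \|x - x_k\|_\infty \le \gamma\, \|F(x_k)\|_\infty, \]

where $F$ is the piecewise smooth mapping whose zero is sought, $\Omega$ the polyhedral set, and $G(x_k)$ a suitable generalized derivative of $F$ at $x_k$; with the $\infty$-norm and polyhedral $\Omega$, this is a linear program. The trust-region mechanism provides the globalization, with local convergence analyzed under Hölder metric subregularity.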