A derivative-free trust-region approach for Low Order-Value Optimization problems

The Low Order-Value Optimization (LOVO) problem involves minimizing the minimum among a finite number of function values within a feasible set. LOVO has several practical applications, such as robust parameter estimation, protein alignment, and portfolio optimization. In this work, we are interested in the constrained nonlinear optimization LOVO problem of minimizing the minimum between …
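As a rough illustration of the LOVO objective itself (not of the paper's trust-region method), the quantity being minimized is $S(x) = \min_i f_i(x)$ for given functions $f_1, \dots, f_r$. A minimal Python sketch, using hypothetical quadratic functions as stand-ins for the $f_i$:

```python
import numpy as np

# Hypothetical quadratic residuals standing in for f_1, ..., f_r;
# the LOVO objective is S(x) = min_i f_i(x).
def make_quadratic(center):
    return lambda x: 0.5 * np.sum((x - center) ** 2)

fs = [make_quadratic(np.array(c)) for c in ([0.0, 0.0], [1.0, 1.0], [3.0, -1.0])]

def lovo_objective(x):
    """Low Order-Value objective: the minimum of the function values at x,
    together with the index attaining it (the 'active' function)."""
    values = [f(x) for f in fs]
    i_min = int(np.argmin(values))
    return values[i_min], i_min

val, active = lovo_objective(np.array([0.9, 1.1]))
print(f"S(x) = {val:.4f}, attained by f_{active + 1}")
```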

Progressively Sampled Equality-Constrained Optimization

An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which …
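A minimal sketch of the progressive-sampling idea under illustrative assumptions; the objective, the averaged linear constraint, the sample schedule, and the use of SciPy's default constrained solver for each subproblem are all stand-ins, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 1000
a = rng.normal(size=(N, 2))          # terms defining c(x) = mean_i (a_i^T x - 1)
f = lambda x: 0.5 * np.dot(x, x)     # smooth objective

x = np.zeros(2)
for n in (10, 100, 1000):            # hypothetical increasing sample sizes
    A = a[:n]
    c = lambda x, A=A: np.mean(A @ x - 1.0)   # sampled average constraint
    res = minimize(f, x, constraints={"type": "eq", "fun": c})
    x = res.x                         # warm-start the next, larger sample
print("approximate solution:", x)
```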

Worst-Case Complexity of High-Order Algorithms for Pareto-Front Reconstruction

In this paper, we are concerned with a worst-case complexity analysis of a posteriori algorithms for unconstrained multiobjective optimization. Specifically, we propose an algorithmic framework that generates sets of points by means of $p$th-order models regularized with the $(p+1)$st power of the norm of the step. Through a tailored search procedure, several trial points are generated …
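For reference, a $p$th-order Taylor model with $(p+1)$st-power regularization of a component objective $f_j$ takes the standard form below, where $\sigma > 0$ is the regularization parameter (the paper's exact model and choice of norm may differ):

```latex
m_j(x; s) \;=\; f_j(x) \;+\; \sum_{k=1}^{p} \frac{1}{k!}\,\nabla^k f_j(x)[s]^k
\;+\; \frac{\sigma}{p+1}\,\lVert s \rVert^{p+1}, \qquad \sigma > 0.
```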

Block cubic Newton with greedy selection

A second-order block coordinate descent method is proposed for the unconstrained minimization of an objective function with a Lipschitz continuous Hessian. At each iteration, a block of variables is selected by means of a greedy (Gauss-Southwell) rule that considers the amount of first-order stationarity violation, and then an approximate minimizer of a cubic model is computed …
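A minimal sketch of one such iteration pattern, assuming a separable test function, a fixed block partition, and a generic inner solver for the cubic model; all of these are illustrative choices rather than the paper's:

```python
import numpy as np
from scipy.optimize import minimize as inner_min

def f(x):    return 0.25 * np.sum(x ** 4) + 0.5 * np.sum(x ** 2)
def grad(x): return x ** 3 + x
def hess(x): return np.diag(3 * x ** 2 + 1)

blocks = [np.arange(0, 2), np.arange(2, 4)]   # hypothetical block partition
x, sigma = np.ones(4), 1.0                    # sigma: cubic regularization weight

for _ in range(20):
    g = grad(x)
    # Gauss-Southwell rule: pick the block with the largest gradient norm,
    # a measure of first-order stationarity violation.
    b = max(blocks, key=lambda b: np.linalg.norm(g[b]))
    gb, Hb = g[b], hess(x)[np.ix_(b, b)]
    # Approximately minimize the block cubic model
    #   m(s) = gb^T s + 0.5 s^T Hb s + (sigma/3) ||s||^3
    m = lambda s: gb @ s + 0.5 * s @ Hb @ s + (sigma / 3) * np.linalg.norm(s) ** 3
    s = inner_min(m, np.zeros(len(b))).x
    x[b] += s
print("stationarity:", np.linalg.norm(grad(x)))
```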

A Proximal-Gradient Method for Constrained Optimization

We present a new algorithm for solving optimization problems whose objective is the sum of a smooth function and a (potentially) nonsmooth regularization function, subject to nonlinear equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method, which is applicable when constraints are not present. To account for nonlinear …
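For context, the unconstrained proximal-gradient step that the paper extends is $x^{+} = \mathrm{prox}_{\alpha g}(x - \alpha \nabla f(x))$. A minimal sketch with an $\ell_1$ regularizer, whose prox is the usual soft-thresholding, on illustrative least-squares data; the constrained extension itself is the paper's contribution and is not reproduced here:

```python
import numpy as np

def soft_threshold(z, t):
    # prox of t * ||.||_1: componentwise shrinkage toward zero
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_gradient(grad_f, x0, alpha, lam, iters=200):
    # x_{k+1} = prox_{alpha*lam*||.||_1}(x_k - alpha * grad_f(x_k))
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - alpha * grad_f(x), alpha * lam)
    return x

# smooth part: f(x) = 0.5 ||Ax - b||^2 on random illustrative data
rng = np.random.default_rng(1)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
grad_f = lambda x: A.T @ (A @ x - b)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/L with L = ||A||_2^2
x_star = prox_gradient(grad_f, np.zeros(10), alpha, lam=0.1)
print("nonzeros:", np.count_nonzero(x_star))
```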

Inexact Proximal-Gradient Methods with Support Identification

We consider the proximal-gradient method for minimizing an objective function that is the sum of a smooth function and a nonsmooth convex function. A feature that distinguishes our work from most in the literature is that we assume that the associated proximal operator does not admit a closed-form solution. To address this challenge, we study …
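One concrete case where the prox lacks a closed form is a total-variation-like regularizer $g(x) = \lVert Dx \rVert_1$; truncating an inner dual solver then yields precisely an inexact prox. The regularizer, the difference matrix $D$, and the projected-gradient inner solver below are illustrative choices, not the paper's setting:

```python
import numpy as np

def inexact_prox_tv(v, t, inner_iters):
    """Approximate prox of t*||Dx||_1 (D = forward differences), which has no
    closed form, via projected-gradient steps on its dual problem
        min_u 0.5 ||v - D^T u||^2  s.t.  ||u||_inf <= t.
    Truncating the inner loop gives exactly an inexact prox evaluation."""
    n = v.size
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n difference matrix
    u = np.zeros(n - 1)
    eta = 1.0 / np.linalg.norm(D, 2) ** 2    # safe dual step size, 1/||D||^2
    for _ in range(inner_iters):
        u = np.clip(u - eta * (D @ (D.T @ u - v)), -t, t)
    return v - D.T @ u                        # primal recovery

v = np.array([0.0, 2.0, 2.1, 0.1, 0.0])
print(inexact_prox_tv(v, t=0.5, inner_iters=5))    # crude prox
print(inexact_prox_tv(v, t=0.5, inner_iters=500))  # nearly exact prox
```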

Polynomial worst-case iteration complexity of quasi-Newton primal-dual interior point algorithms for linear programming

Quasi-Newton methods are well-known techniques for large-scale numerical optimization. They use an approximation of the Hessian in optimization problems, or of the Jacobian in systems of nonlinear equations. In the interior-point context, quasi-Newton algorithms compute low-rank updates of the matrix associated with the Newton systems, instead of computing it from scratch at every iteration. …
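The low-rank-update idea can be sketched on a generic nonlinear system: form the matrix once, then apply rank-one (Broyden) updates instead of rebuilding it each iteration. The toy system below is a stand-in; in the interior-point setting the updated matrix comes from the Newton system of the perturbed KKT conditions:

```python
import numpy as np

F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x = np.array([2.0, 0.5])
B = J(x)                                  # form the true matrix only once
for _ in range(25):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-12:
        break
    s = np.linalg.solve(B, -Fx)           # quasi-Newton step
    y = F(x + s) - F(x)
    B += np.outer(y - B @ s, s) / (s @ s)  # rank-one Broyden update
    x = x + s
print("residual norm:", np.linalg.norm(F(x)))
```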

Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization

An algorithm for solving nonconvex smooth optimization problems is proposed, analyzed, and tested. The algorithm is an extension of the Trust Region Algorithm with Contractions and Expansions (TRACE) [Math. Prog. 162(1):1–32, 2017]. In particular, the extension allows the algorithm to use inexact solutions of the arising subproblems, which is an important feature for solving large-scale …
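A standard device for certifying inexact subproblem solutions is to require at least the model decrease achieved by the Cauchy point. The sketch below shows that step for the quadratic trust-region model; this is a generic criterion, not necessarily the exact condition used in the TRACE extension:

```python
import numpy as np

def cauchy_step(g, H, delta):
    """Minimize m(s) = g^T s + 0.5 s^T H s along -g within ||s|| <= delta.
    Any 'inexact' subproblem solution matching this model decrease is
    typically enough for worst-case complexity arguments."""
    gHg = g @ H @ g
    gnorm = np.linalg.norm(g)
    tau = 1.0 if gHg <= 0 else min(gnorm ** 3 / (delta * gHg), 1.0)
    return -tau * (delta / gnorm) * g

g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, 0.5]])
s = cauchy_step(g, H, delta=1.0)
print("model decrease:", -(g @ s + 0.5 * s @ H @ s))
```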

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems involving such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new …
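Such a regularizer is typically a sum of $\ell_2$ norms over non-overlapping groups of variables; its prox is block soft-thresholding, which zeroes entire groups at once. A minimal sketch of this standard construction (the paper's subspace acceleration is a separate contribution):

```python
import numpy as np

def prox_group_l2(x, groups, t):
    """Prox of t * sum_g ||x_g||_2 over non-overlapping groups: each block is
    shrunk toward zero, and set exactly to zero when its norm is <= t,
    which is what makes whole groups drop out of the model."""
    out = x.copy()
    for g in groups:
        ng = np.linalg.norm(x[g])
        out[g] = 0.0 if ng <= t else (1.0 - t / ng) * x[g]
    return out

x = np.array([0.3, -0.2, 4.0, 1.0, 0.05])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(prox_group_l2(x, groups, t=0.5))   # first and last groups vanish
```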

Concise Complexity Analyses for Trust-Region Methods

Concise complexity analyses are presented for simple trust-region algorithms for solving unconstrained optimization problems. In contrast to a traditional trust-region algorithm, the algorithms considered in this paper require certain control over the choice of the trust-region radius after any successful iteration. The analyses highlight the essential algorithmic components required to obtain certain complexity …
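A bare-bones trust-region loop makes the role of the radius control concrete; the capped-growth rule after successful iterations below is one illustrative instance of such control, not the rule the paper analyzes:

```python
import numpy as np

def tr_minimize(f, grad, hess, x, delta=1.0, iters=200):
    # Minimal trust-region loop with a ratio test; the radius is capped
    # after a successful step, mirroring the kind of control the analyses
    # impose, rather than being allowed to grow freely.
    for _ in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        # Cauchy step: model minimizer along -g within the region
        gHg, gn = g @ H @ g, np.linalg.norm(g)
        tau = 1.0 if gHg <= 0 else min(gn ** 3 / (delta * gHg), 1.0)
        s = -tau * (delta / gn) * g
        pred = -(g @ s + 0.5 * s @ H @ s)            # predicted decrease
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)   # actual/predicted ratio
        if rho >= 0.1:
            x, delta = x + s, min(2.0 * delta, 1.0)  # success: capped radius
        else:
            delta *= 0.5                             # failure: shrink region
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 0.5) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 0.5)])
hess = lambda x: np.diag([2.0, 20.0])
print(tr_minimize(f, grad, hess, np.zeros(2)))
```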