Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems subject to convex constraints

Given a twice-continuously differentiable vector-valued function $r(x)$, a local minimizer of $\|r(x)\|_2$ within a convex set is sought. We propose and analyse tensor-Newton methods, in which $r(x)$ is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a constrained first-order critical point of $\|r(x)\|_2$, … Read more
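
As a hedged sketch in our own notation (the paper's exact subproblem may differ in detail): writing $\nabla r_i$ and $\nabla^2 r_i$ for the gradient and Hessian of the $i$-th residual, the tensor-Newton model and a typical $p$-th order regularized subproblem at the iterate $x_k$ read
\[
t_i(x_k,s) = r_i(x_k) + \nabla r_i(x_k)^{T} s + \tfrac{1}{2}\, s^{T} \nabla^2 r_i(x_k)\, s,
\qquad
\min_{x_k + s \,\in\, \mathcal{C}} \ \tfrac{1}{2}\,\|t(x_k,s)\|_2^{2} + \frac{\sigma_k}{p}\,\|s\|_2^{p},
\]
where $\mathcal{C}$ is the convex constraint set and $\sigma_k > 0$ is the regularization weight; the order $p$ corresponds to the "regularization of various orders" mentioned above.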

A note on solving nonlinear optimization problems in variable precision

This short note considers an efficient variant of the trust-region algorithm with dynamic accuracy proposed by Carter (1993) and by Conn, Gould and Toint (2000) as a tool for very high-performance computing, an area where allowing multi-precision computation is critical for keeping energy dissipation under control. Numerical experiments are presented indicating that the use … Read more
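
For context, a standard form of the dynamic-accuracy requirement from Carter (1993) and Conn, Gould and Toint (2000) (stated here for orientation, not as this note's precise variant) allows the objective value to be replaced by an approximation $\bar f(x_k)$, possibly computed in low precision, provided its error is dominated by a fraction of the model decrease:
\[
|\bar f(x_k) - f(x_k)| \;\le\; \eta\,\big[\, m_k(x_k) - m_k(x_k + s_k) \,\big], \qquad \eta \in (0,1),
\]
so cheaper, lower-precision evaluations become admissible whenever the predicted progress is large.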

On the complexity of solving feasibility problems

We consider feasibility problems defined by a set of constraints that exhibit gradient Hölder continuity, plus additional constraints defined by the affordability of obtaining approximate minimizers of quadratic models over the associated feasible set. Each iteration of the method introduced in this paper involves the approximate minimization of a two-norm regularized quadratic subject to the … Read more
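
A hedged sketch of the subproblem's flavor, in our own notation: with $m_k$ a quadratic model built at $x_k$ and $\Omega$ the associated feasible set, each iteration approximately solves
\[
\min_{s}\; m_k(s) + \sigma_k\,\|s\|_2^{2} \quad \text{subject to} \quad x_k + s \in \Omega,
\]
which is the two-norm regularized quadratic referred to above; the affordability assumption is precisely that such approximate minimizers can be obtained at acceptable cost.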

A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem

In this paper, based on a modified Gram-Schmidt (MGS) process, we propose a class of derivative-free conjugate gradient (CG) projection methods for nonsmooth equations with convex constraints. Two attractive features of the new class of methods are: (i) the generated direction contains a free vector, which can be set to any vector such that the … Read more
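
As background, derivative-free projection methods of this kind typically combine a CG-type search direction with the classical hyperplane-projection step for constrained monotone equations $F(x) = 0$, $x \in \mathcal{C}$ (we state the standard step, not necessarily this paper's exact rule): once a line search yields $z_k = x_k + \alpha_k d_k$ with $F(z_k)^{T}(x_k - z_k) > 0$, the next iterate is
\[
x_{k+1} = P_{\mathcal{C}}\!\left[\, x_k - \frac{F(z_k)^{T}(x_k - z_k)}{\|F(z_k)\|_2^{2}}\; F(z_k) \right],
\]
where $P_{\mathcal{C}}$ denotes the Euclidean projection onto the convex set $\mathcal{C}$.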

A Unified Framework for Sparse Relaxed Regularized Regression: SR3

Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression in particular has been instrumental in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of … Read more
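
The relaxation couples the fitting variable with an auxiliary variable that carries the sparsity: for a regularizer $R$, the problem $\min_x \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda R(x)$ is replaced by
\[
\min_{x,\,w}\; \tfrac{1}{2}\,\|Ax - b\|_2^{2} + \lambda\, R(w) + \frac{\kappa}{2}\,\|x - w\|_2^{2},
\]
where $\kappa > 0$ controls the tightness of the relaxation; this is the standard SR3 formulation, though the notation here follows common usage and may differ in detail from the paper's.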

Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e. problems where the cost of evaluating/enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds … Read more
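
As one well-known benchmark in this family of results (stated for orientation, not as this paper's sharpened bound): a regularization method exploiting derivatives up to order $p$ needs at most
\[
O\!\big(\epsilon^{-(p+1)/p}\big)
\]
evaluations of the objective and its derivatives to produce a point with $\|\nabla f(x)\| \le \epsilon$ in the smooth unconstrained case.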

Subset selection in sparse matrices

In subset selection, we search for the best linear predictor that involves only a small subset of the variables. From a computational-complexity viewpoint, subset selection is NP-hard, and few classes of instances are known to be solvable in polynomial time. Using mainly tools from discrete geometry, we show that some sparsity conditions on the original data matrix allow … Read more
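
For concreteness, the underlying problem can be written with a cardinality constraint (standard formulation, in our notation): given a data matrix $A$, a response vector $b$ and a sparsity level $k$,
\[
\min_{x}\; \|Ax - b\|_2^{2} \quad \text{subject to} \quad \|x\|_0 \le k,
\]
where $\|x\|_0$ counts the nonzero entries of $x$; it is this cardinality-constrained problem that the sparsity conditions on the data matrix concern.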

Local convergence analysis of the Levenberg-Marquardt framework for nonzero-residue nonlinear least-squares problems under an error bound condition

The Levenberg-Marquardt (LM) method is widely used for solving nonlinear systems of equations, as well as nonlinear least-squares problems. In this paper, we consider local convergence issues of the LM method when applied to nonzero-residue nonlinear least-squares problems under an error bound condition, which is weaker than requiring a full-rank Jacobian in a … Read more
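
For reference, the classical LM iteration solves a regularized Gauss-Newton system (the precise rule for the parameter $\mu_k$ under the error bound condition is a detail of the paper we do not reproduce): with residual $r_k = r(x_k)$ and Jacobian $J_k = J(x_k)$,
\[
\big( J_k^{T} J_k + \mu_k I \big)\, d_k = -\,J_k^{T} r_k, \qquad x_{k+1} = x_k + d_k,
\]
where $\mu_k > 0$ both regularizes the possibly rank-deficient matrix $J_k^{T} J_k$ and governs the local rate of convergence.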

A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes … Read more
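
A hedged sketch of what "accurate with a given probability" typically means in the random-model literature (not necessarily the exact definition adopted here): conditioned on the algorithm's history $\mathcal{F}_{k-1}$, the random model $m_k$ must satisfy an accuracy requirement such as
\[
\mathbb{P}\big(\, \|\nabla m_k(x_k) - \nabla f(x_k)\| \le \kappa\, \delta_k \;\big|\; \mathcal{F}_{k-1} \,\big) \;\ge\; p,
\]
for some probability $p \in (0,1]$, a constant $\kappa > 0$ and an algorithmic quantity $\delta_k$ (for instance a trust-region radius, or an inverse regularization parameter in the LM setting).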

Quasi-Newton approaches to Interior Point Methods for quadratic problems

Interior Point Methods (IPMs) rely on the Newton method for solving systems of nonlinear equations. Solving the linear systems which arise from this approach is the most computationally expensive task of an interior point iteration. If, due to the problem's inner structure, there are special techniques for efficiently solving linear systems, IPMs enjoy fast convergence and … Read more
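
For orientation, in the standard convex QP setting $\min_x \tfrac{1}{2} x^{T} Q x + c^{T} x$ subject to $Ax = b$, $x \ge 0$ (our notation, not the paper's development), each interior point iteration solves a Newton system that is commonly reduced to the augmented form
\[
\begin{pmatrix} -(Q + \Theta^{-1}) & A^{T} \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad \Theta = X S^{-1},
\]
with $X = \mathrm{diag}(x)$, $S = \mathrm{diag}(s)$ and right-hand sides $f$, $g$ built from the KKT residuals; quasi-Newton ideas aim to reuse or cheaply update factorizations of such systems across iterations.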