Adaptive regularization algorithms with inexact evaluations for nonconvex optimization

A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is beta-Hölder continuous. It features a … Read more
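
To make the mechanism concrete, here is a minimal first-order sketch (Python/NumPy) of an adaptive regularization loop driven by an inexact gradient oracle: the regularized model is minimized, the step is accepted or rejected from the ratio of actual to predicted decrease, and the regularization weight sigma is adjusted. The names `ar1_inexact` and `grad_inexact` and all parameter values are illustrative assumptions; the paper's algorithm covers general higher-order models and inexact function values as well.

```python
import numpy as np

def ar1_inexact(f, grad_inexact, x0, sigma0=1.0, eta=0.1, gamma=2.0,
                tol=1e-5, max_iter=500):
    """Minimal adaptive first-order regularization sketch with an inexact
    gradient oracle (illustrative only, not the paper's general method)."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g = grad_inexact(x)                    # possibly noisy gradient
        if np.linalg.norm(g) <= tol:
            break
        s = -g / sigma                         # minimizer of g^T s + (sigma/2)||s||^2
        pred = -(g @ s + 0.5 * sigma * s @ s)  # predicted decrease (> 0)
        rho = (f(x) - f(x + s)) / pred
        if rho >= eta:                         # successful step: accept, relax sigma
            x = x + s
            sigma = max(sigma / gamma, 1e-8)
        else:                                  # unsuccessful: increase regularization
            sigma *= gamma
    return x

# toy usage: noisy gradient of a convex quadratic
rng = np.random.default_rng(0)
f = lambda x: 0.5 * x @ x
g = lambda x: x + 1e-3 * rng.standard_normal(x.size)
print(ar1_inexact(f, g, np.ones(5)))
```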

On limited-memory quasi-Newton methods for minimizing a quadratic function

The main focus in this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give two classes of limited-memory quasi-Newton Hessian approximations that generate search directions parallel to those of the method of preconditioned conjugate gradients, and hence give finite termination on quadratic optimization problems. The Hessian approximations … Read more
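
As background, the sketch below shows the behaviour the paper's limited-memory Hessian approximations are designed to reproduce: conjugate gradients (shown here without a preconditioner) with an exact linesearch on a convex quadratic terminates in at most n iterations. The function `cg_exact_linesearch` and the toy data are assumptions for illustration, not the paper's method.

```python
import numpy as np

def cg_exact_linesearch(A, b, x0):
    """Conjugate gradients with exact linesearch on f(x) = 0.5 x^T A x - b^T x,
    A symmetric positive definite. Illustrates the finite termination that the
    paper's limited-memory quasi-Newton directions reproduce."""
    x = np.asarray(x0, dtype=float)
    r = b - A @ x                      # negative gradient (residual)
    p = r.copy()
    for k in range(len(b)):            # at most n iterations in exact arithmetic
        if np.linalg.norm(r) < 1e-12:
            return x, k
        alpha = (r @ r) / (p @ A @ p)  # exact linesearch step along p
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # next direction, A-conjugate to earlier ones
        r = r_new
    return x, len(b)

# toy usage: random SPD system
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)); A = M @ M.T + 6 * np.eye(6)
b = rng.standard_normal(6)
x, iters = cg_exact_linesearch(A, b, np.zeros(6))
print(iters, np.linalg.norm(A @ x - b))
```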

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms—Nesterov’s accelerated gradient method for strongly convex functions (NAG-SC) and Polyak’s heavy-ball method—we study an alternative limiting process that yields high-resolution ODEs. We … Read more
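
The two discrete iterations the abstract contrasts can be written in a few lines. The sketch below (with assumed step-size and momentum choices, not taken from the paper) shows Polyak's heavy-ball update next to NAG-SC; their only difference is where the gradient is evaluated, which is exactly the distinction the high-resolution ODEs are built to capture.

```python
import numpy as np

def heavy_ball(grad, x0, s, beta, iters):
    """Polyak's heavy-ball method: momentum added to the gradient step at x_k."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - s * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def nag_sc(grad, x0, s, beta, iters):
    """Nesterov's accelerated gradient for strongly convex f:
    the gradient is taken at the extrapolated point y_k."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - s * grad(y)
    return x

# toy usage: ill-conditioned quadratic with mu = 1, L = 100
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
s, mu = 1.0 / 100.0, 1.0
beta = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))
x0 = np.array([1.0, 1.0])
print(np.linalg.norm(heavy_ball(grad, x0, s, beta, 100)),
      np.linalg.norm(nag_sc(grad, x0, s, beta, 100)))
```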

A Subsampling Line-Search Method with Second-Order Results

In many contemporary optimization problems such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization such as a trust … Read more
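
A minimal sketch of the idea, assuming a finite-sum objective and a mini-batch oracle: both the function and the gradient used in a backtracking Armijo test are estimated on a random subsample. The names `fi`, `gi`, `subsampled_armijo_step`, and the batch size are illustrative assumptions, and only first-order information is used, unlike the paper's second-order results.

```python
import numpy as np

def subsampled_armijo_step(fi, gi, x, n, batch, rng, c1=1e-4, tau=0.5, max_back=30):
    """One backtracking Armijo step where function and gradient are estimated
    on a random subsample of the n terms (illustrative sketch only).
    fi(i, x) and gi(i, x) return the value/gradient of the i-th term."""
    S = rng.choice(n, size=batch, replace=False)
    fS = lambda z: np.mean([fi(i, z) for i in S])
    g = np.mean([gi(i, x) for i in S], axis=0)
    d = -g                                    # subsampled steepest-descent direction
    t, f0 = 1.0, fS(x)
    for _ in range(max_back):
        if fS(x + t * d) <= f0 + c1 * t * (g @ d):   # Armijo test on the sample
            break
        t *= tau
    return x + t * d

# toy usage: least squares, sum_i 0.5*(a_i^T x - b_i)^2
rng = np.random.default_rng(2)
A, b = rng.standard_normal((200, 5)), rng.standard_normal(200)
fi = lambda i, x: 0.5 * (A[i] @ x - b[i]) ** 2
gi = lambda i, x: (A[i] @ x - b[i]) * A[i]
x = np.zeros(5)
for _ in range(50):
    x = subsampled_armijo_step(fi, gi, x, 200, 32, rng)
print(np.linalg.norm(A.T @ (A @ x - b)) / 200)
```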

Nonmonotone line searches for unconstrained multiobjective optimization problems

In the last two decades, many descent methods for multiobjective optimization problems have been proposed. In particular, the steepest descent and Newton methods have been studied for the unconstrained case. In both methods, the search directions are computed by solving convex subproblems, and the stepsizes are obtained by an Armijo-type line search. As a consequence, the … Read more
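
For the two-objective case, the convex subproblem defining the common steepest-descent direction has a closed form, which makes the scheme easy to sketch. The helper `mo_steepest_descent_step` and the monotone Armijo-type test requiring decrease in both objectives are illustrative assumptions; the paper studies nonmonotone relaxations of exactly this kind of test.

```python
import numpy as np

def mo_steepest_descent_step(grads, x, f, c=1e-4, tau=0.5):
    """One multiobjective steepest-descent step for two objectives: the direction
    is the negative minimum-norm element of conv{grad f1, grad f2} (closed form in
    the two-objective case), followed by monotone Armijo-type backtracking that
    demands sufficient decrease in both objectives. Illustrative sketch only."""
    g1, g2 = grads(x)
    diff = g2 - g1
    lam = 0.0 if diff @ diff == 0 else np.clip((g2 @ diff) / (diff @ diff), 0.0, 1.0)
    d = -(lam * g1 + (1 - lam) * g2)          # common descent direction
    t, f0 = 1.0, f(x)
    while not np.all(f(x + t * d) <= f0 + c * t * np.array([g1 @ d, g2 @ d])):
        t *= tau
        if t < 1e-12:
            break
    return x + t * d

# toy usage: two convex quadratics with different minimizers
f = lambda x: np.array([np.sum((x - 1) ** 2), np.sum((x + 1) ** 2)])
grads = lambda x: (2 * (x - 1), 2 * (x + 1))
x = np.array([3.0, -2.0])
for _ in range(30):
    x = mo_steepest_descent_step(grads, x, f)
print(x)   # settles at a Pareto-critical point between the two minimizers
```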

A Partial PPA block-wise ADMM for Multi-Block Constrained Separable Convex Optimization

The alternating direction method of multipliers (ADMM) has been proven effective for solving two-block separable convex optimization problems subject to linear constraints. However, it is not necessarily convergent when extended directly to the multi-block case. One remedy is to first regroup the multi-block variables into two groups and then apply the classic ADMM to the … Read more
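
For reference, the classic two-block ADMM the abstract starts from can be sketched on a standard example (lasso with the splitting x = z). The function `lasso_admm`, the penalty rho, and the toy data are illustrative assumptions, and this is the textbook two-block scheme rather than the paper's partial-PPA block-wise variant.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Two-block ADMM for min 0.5||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    chol = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))  # x-block minimization
        z = soft(x + u, lam / rho)                               # z-block minimization
        u = u + x - z                                            # multiplier update
    return z

# toy usage: sparse recovery
rng = np.random.default_rng(3)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(lasso_admm(A, b, lam=1.0), 2))
```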

Efficient global unconstrained black box optimization

For the unconstrained optimization of black box functions, this paper introduces a new randomized algorithm called VRBBO. In practice, VRBBO matches the quality of other state-of-the-art algorithms at finding, in small and large dimensions, a local minimizer with reasonable accuracy. Although our theory guarantees only local minimizers, our heuristic techniques turn VRBBO into an efficient … Read more
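
A bare-bones randomized black-box loop of the kind VRBBO belongs to might look as follows: sample a random direction, accept an improving trial point, and adapt the step size. The function `random_direction_search` and all constants are assumptions for illustration and do not reproduce VRBBO's techniques.

```python
import numpy as np

def random_direction_search(f, x0, delta0=1.0, iters=2000, seed=0):
    """Generic random-direction search for black-box minimization
    (a minimal sketch, not the VRBBO algorithm itself)."""
    rng = np.random.default_rng(seed)
    x, fx, delta = np.asarray(x0, dtype=float), f(x0), delta0
    for _ in range(iters):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        improved = False
        for step in (delta, -delta):          # probe both orientations
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        delta = delta * 1.5 if improved else delta * 0.7   # expand or shrink step
        delta = max(delta, 1e-10)
    return x, fx

# toy usage: Rosenbrock function treated as a black box
rosen = lambda x: np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)
x, fx = random_direction_search(rosen, np.zeros(4), iters=20000)
print(fx)
```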

Adaptive Cubic Regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization

We consider the Adaptive Regularization with Cubics (ARC) approach for solving nonconvex optimization problems and propose a new variant based on inexact Hessian information chosen dynamically. A theoretical analysis of the proposed procedure is given. The key property of the ARC framework, namely its optimal worst-case function/derivative evaluation bounds for reaching first- and second-order critical points, is guaranteed. … Read more
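
Below is a rough sketch of an ARC-type iteration with a subsampled Hessian on a finite sum, assuming the cubic model is only minimized along the negative gradient (a Cauchy-point step) rather than solved accurately. The function `arc_subsampled`, the fixed sampling rule, and the toy problem are illustrative and do not reflect the dynamic sample-size strategy analyzed in the paper.

```python
import numpy as np

def arc_subsampled(f, grad, hess_i, x0, n, batch, sigma0=1.0, eta=0.1,
                   gamma=2.0, tol=1e-4, max_iter=200, seed=0):
    """Cubic-regularization sketch on a finite sum with a subsampled Hessian;
    the cubic model is minimized only along -grad (illustrative only)."""
    rng = np.random.default_rng(seed)
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng <= tol:
            break
        S = rng.choice(n, size=batch, replace=False)
        B = np.mean([hess_i(i, x) for i in S], axis=0)   # subsampled Hessian
        # minimize g^T s + 0.5 s^T B s + (sigma/3)||s||^3 along s = -alpha*g
        a, b, c = sigma * ng ** 3, g @ B @ g, -ng ** 2
        alpha = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        s = -alpha * g
        model_dec = -(g @ s + 0.5 * s @ B @ s + sigma / 3 * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / model_dec
        if rho >= eta:
            x, sigma = x + s, max(sigma / gamma, 1e-8)   # successful step
        else:
            sigma *= gamma                               # increase regularization
    return x

# toy usage: least-squares finite sum, f(x) = 0.5/n * sum_i (a_i^T x - b_i)^2
rng = np.random.default_rng(4)
A, bvec = rng.standard_normal((300, 8)), rng.standard_normal(300)
f = lambda x: 0.5 * np.mean((A @ x - bvec) ** 2)
grad = lambda x: A.T @ (A @ x - bvec) / len(bvec)
hess_i = lambda i, x: np.outer(A[i], A[i])
print(np.linalg.norm(grad(arc_subsampled(f, grad, hess_i, np.zeros(8), 300, 32))))
```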

A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes … Read more
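
For orientation, a basic deterministic Levenberg-Marquardt loop is sketched below; the stochastic variant in the paper replaces the residual and Jacobian with random models that are accurate only with a given probability. The function `levenberg_marquardt`, the damping schedule, and the curve-fitting example are assumptions made for illustration.

```python
import numpy as np

def levenberg_marquardt(res, jac, x0, lam=1.0, iters=100, tol=1e-8):
    """Basic Levenberg-Marquardt for min 0.5||r(x)||^2: solve the damped
    Gauss-Newton system (J^T J + lam*I) d = -J^T r and adapt lam from the
    ratio of actual to predicted decrease (deterministic sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = res(x), jac(x)
        g = J.T @ r
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        pred = -(g @ d + 0.5 * d @ (J.T @ J) @ d)           # Gauss-Newton model decrease
        actual = 0.5 * (r @ r) - 0.5 * np.sum(res(x + d) ** 2)
        if pred > 0 and actual / pred >= 0.25:              # accept and relax damping
            x, lam = x + d, max(lam / 3.0, 1e-12)
        else:                                               # reject and increase damping
            lam *= 3.0
    return x

# toy usage: fit y = p0 * exp(p1 * t) to noisy data
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(50)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(res, jac, np.array([1.0, 0.0])))
```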

A limited-memory optimization method using the infinitely many times repeated BNS update and conjugate directions

To improve the performance of the limited-memory variable metric L-BFGS method for large-scale unconstrained optimization, repeating some BFGS updates was proposed in [1, 2]. However, the extra updates need to be selected carefully, since the repeating process can be time-consuming. We show that for the limited-memory variable metric BNS method, matrix … Read more
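
The limited-memory machinery being extended can be illustrated with the standard L-BFGS two-loop recursion, which applies the implicit inverse-Hessian approximation built from stored pairs (s_k, y_k) to the current gradient. The sketch below is generic and does not include the BNS update, the repeated updates, or the conjugate-direction corrections from the paper.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H*g, where H is the
    implicit inverse-Hessian approximation built from (s_k, y_k) pairs."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):    # first loop (newest to oldest)
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho))
    if s_list:                                          # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), (a, rho) in zip(zip(s_list, y_list), reversed(alphas)):  # second loop
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                           # search direction

# toy usage: quasi-Newton steps on a quadratic with Armijo backtracking
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x, m, S, Y = np.ones(3), 5, [], []
for _ in range(30):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    d = lbfgs_direction(g, S, Y)
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x_new = x + t * d
    S.append(x_new - x); Y.append(grad(x_new) - grad(x))
    S, Y = S[-m:], Y[-m:]                               # keep only the last m pairs
    x = x_new
print(np.linalg.norm(x))
```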