Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques

A trust-region algorithm is presented for finding approximate minimizers of smooth unconstrained functions whose values and derivatives are subject to random noise. It is shown that, under suitable probabilistic assumptions, the new method finds (in expectation) an epsilon-approximate minimizer of arbitrary order q > 0 in at most O(epsilon^{-(q+1)}) inexact evaluations of the function and … Read more
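
For orientation, a minimal first-order sketch of a trust-region iteration driven by noisy function and gradient oracles is given below. The oracle names, the noise model and the fixed radius-update constants are illustrative assumptions; this is not the order-q algorithm analysed in the paper.

    import numpy as np

    def noisy_trust_region(f_noisy, g_noisy, x0, delta0=1.0, eps=1e-3,
                           eta=0.1, max_iter=1000):
        # Basic trust-region loop with inexact (noisy) function and gradient
        # evaluations: Cauchy-type step on the linear model, then the usual
        # accept/shrink test. Illustrative sketch only.
        x, delta = np.asarray(x0, dtype=float), delta0
        for _ in range(max_iter):
            g = g_noisy(x)
            gnorm = np.linalg.norm(g)
            if gnorm <= eps:                      # approximate first-order point
                break
            s = -(delta / gnorm) * g              # step to the trust-region boundary
            pred = delta * gnorm                  # decrease predicted by the linear model
            ared = f_noisy(x) - f_noisy(x + s)    # actual (noisy) decrease
            if ared >= eta * pred:                # successful: accept and enlarge
                x, delta = x + s, 2.0 * delta
            else:                                 # unsuccessful: shrink the radius
                delta *= 0.5
        return x

    # Toy run on a noisy quadratic.
    rng = np.random.default_rng(0)
    f = lambda z: 0.5 * z @ z + 1e-4 * rng.standard_normal()
    g = lambda z: z + 1e-4 * rng.standard_normal(z.shape)
    x_approx = noisy_trust_region(f, g, np.ones(5))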

A stochastic first-order trust-region method with inexact restoration for finite-sum minimization

We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. At each iteration, the function and the gradient are approximated by sampling. The sample size in gradient approximations is smaller than the sample size in function approximations, and the latter is determined using a deterministic rule inspired … Read more
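
A small sketch of the sampling scheme described above, assuming unit-weight components f_i and uniform sampling without replacement; the helper names and the particular sample sizes are illustrative, not the paper's rule.

    import numpy as np

    def subsampled_estimates(f_i, g_i, x, n, n_f, n_g, rng):
        # Estimate f(x) = (1/n) * sum_i f_i(x) and its gradient from two
        # independent uniform subsamples, with a larger sample for the
        # function value (n_f) than for the gradient (n_g), mirroring the
        # size ordering described in the abstract. Illustrative only.
        idx_f = rng.choice(n, size=n_f, replace=False)
        idx_g = rng.choice(n, size=n_g, replace=False)
        f_hat = np.mean([f_i(i, x) for i in idx_f])
        g_hat = np.mean([g_i(i, x) for i in idx_g], axis=0)
        return f_hat, g_hat

    # Toy finite sum: f_i(x) = 0.5 * ||x - a_i||^2.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 10))
    f_i = lambda i, x: 0.5 * np.sum((x - A[i]) ** 2)
    g_i = lambda i, x: x - A[i]
    f_hat, g_hat = subsampled_estimates(f_i, g_i, np.zeros(10), n=1000, n_f=256, n_g=32, rng=rng)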

The Impact of Noise on Evaluation Complexity: The Deterministic Trust-Region Case

Intrinsic noise in evaluations of the objective function and its derivatives may cause premature termination of optimization algorithms. Evaluation complexity bounds taking this situation into account are presented in the framework of a deterministic trust-region method. The results show that the presence of intrinsic noise may dominate these bounds, in contrast with what is known for methods in … Read more

High-order Evaluation Complexity of a Stochastic Adaptive Regularization Algorithm for Nonconvex Optimization Using Inexact Function Evaluations and Randomly Perturbed Derivatives

A stochastic adaptive regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing strong approximate minimizers of any order for inexpensively constrained smooth optimization problems. For an objective function with a Lipschitz continuous p-th derivative in a convex neighbourhood of the feasible set and given an arbitrary optimality order q, it … Read more

A relaxed interior point method for low-rank semidefinite programming problems with applications to matrix completion

A new relaxed variant of the interior point method for low-rank semidefinite programming problems is proposed in this paper. The method is a step outside of the usual interior point framework. In anticipation of converging to a low-rank primal solution, a special nearly low-rank form of all primal iterates is imposed. To accommodate such a (restrictive) … Read more
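
To picture what a "nearly low-rank" primal iterate can look like, the snippet below assembles a rank-r factor plus a small multiple of the identity; this parameterization is an illustrative assumption for intuition only, not necessarily the exact form imposed in the paper.

    import numpy as np

    def nearly_low_rank_primal(U, mu):
        # X = U U^T + mu * I: a rank-r term plus a small full-rank shift that
        # keeps the iterate strictly inside the positive semidefinite cone.
        # Illustrative stand-in for a "nearly low-rank" primal iterate.
        n = U.shape[0]
        return U @ U.T + mu * np.eye(n)

    rng = np.random.default_rng(2)
    U = rng.standard_normal((50, 3))           # n = 50, target rank r = 3
    X = nearly_low_rank_primal(U, mu=1e-3)
    assert np.all(np.linalg.eigvalsh(X) > 0)   # strictly positive definite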

Inexact restoration with subsampled trust-region methods for finite-sum minimization

Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods in which objective functions, gradients and Hessians are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method which employs suitable approximations of the objective function, gradient and Hessian … Read more
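
The sketch below shows how such sampled approximations might be consumed by one generic trust-region step (Cauchy step on the quadratic model, ratio test, radius update); it is a plain trust-region iteration, not the inexact-restoration mechanism of the paper, and all constants are illustrative.

    import numpy as np

    def tr_step_with_approximations(f_hat, g_hat, B_hat, x, delta, eta=0.1):
        # One generic trust-region step built on sampled quantities:
        # f_hat is a callable objective estimate, g_hat a gradient estimate,
        # B_hat a Hessian estimate.
        gnorm = np.linalg.norm(g_hat)
        gBg = g_hat @ B_hat @ g_hat
        # Step length of the Cauchy point along -g_hat, clipped to the radius.
        t = delta / gnorm if gBg <= 0 else min(gnorm ** 2 / gBg, delta / gnorm)
        s = -t * g_hat
        pred = -(g_hat @ s + 0.5 * s @ B_hat @ s)   # model decrease (nonnegative)
        ared = f_hat(x) - f_hat(x + s)              # sampled actual decrease
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta:                              # successful step
            return x + s, 2.0 * delta
        return x, 0.5 * delta                       # unsuccessful step

In a subsampled setting, f_hat, g_hat and B_hat would be refreshed from new samples at every iteration, e.g. with estimators like those sketched earlier in this list.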

Adaptive regularization algorithms with inexact evaluations for nonconvex optimization

A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is beta-Hölder continuous. It features a … Read more
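
For orientation, the regularized model minimized at each iteration in this line of work typically takes the form

    m_k(s) = T_p(x_k, s) + \frac{\sigma_k}{p + \beta} \, \|s\|^{p + \beta},

where T_p(x_k, s) is the p-th order Taylor model of the (inexactly evaluated) objective at x_k and sigma_k is the adaptive regularization weight; the exponent p + beta reflects the beta-Hölder assumption on the highest derivative. This is a standard template stated here for context, not necessarily the paper's exact definition.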

Adaptive Cubic Regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization

We consider the Adaptive Regularization with Cubics approach for solving nonconvex optimization problems and propose a new variant based on inexact Hessian information chosen dynamically. The theoretical analysis of the proposed procedure is given. The key property of the ARC framework, namely optimal worst-case function/derivative evaluation bounds for first- and second-order critical points, is guaranteed. … Read more
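
As a sketch of the ingredients, the snippet below builds a cubic-regularized model around a subsampled Hessian and includes one heuristic rule for growing the Hessian sample size as steps get small; the helper names and the sampling rule are assumptions for illustration, not the paper's dynamic criterion.

    import numpy as np

    def subsampled_cubic_model(g, hess_i, idx, x, sigma):
        # Cubic-regularized model m(s) = g's + 0.5 s'B s + (sigma/3) ||s||^3
        # with B the average of the component Hessians indexed by idx.
        B = np.mean([hess_i(i, x) for i in idx], axis=0)
        def m(s):
            return g @ s + 0.5 * s @ B @ s + (sigma / 3.0) * np.linalg.norm(s) ** 3
        return m, B

    def heuristic_hessian_sample_size(n, last_step_norm, c=1.0, n_min=32):
        # Assumed rule, for illustration only: ask for a more accurate
        # (larger) Hessian sample when the previous step was small.
        return int(min(n, max(n_min, np.ceil(c / max(last_step_norm, 1e-12) ** 2))))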

Stable interior point method for convex quadratic programming with strict error bounds

We present a short-step interior point method for solving a class of nonlinear programming problems with quadratic objective function. Convex quadratic programming problems can be reformulated as problems in this class. The method is shown to have weakly polynomial time complexity. A complete proof of the numerical stability of the method is provided. No … Read more
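
For notation, a generic primal-dual short-step path-following iteration for min 0.5 x'Qx + c'x subject to Ax = b, x >= 0 is sketched below; it solves the unreduced Newton system directly, makes no attempt at the stability guarantees the paper is about, and its starting-point and step-size choices are illustrative assumptions.

    import numpy as np

    def qp_shortstep_ipm(Q, c, A, b, x, y, s, sigma=0.9, tol=1e-8, max_iter=200):
        # Primal-dual path-following sketch for
        #   min 0.5 x'Qx + c'x   s.t.  A x = b,  x >= 0,
        # taking damped Newton steps on the perturbed KKT system.
        # Requires x > 0 and s > 0 at the start; illustrative only.
        n, m = len(x), A.shape[0]
        for _ in range(max_iter):
            mu = x @ s / n
            if mu < tol:
                break
            rd = Q @ x + c - A.T @ y - s         # dual residual
            rp = A @ x - b                       # primal residual
            rc = x * s - sigma * mu              # centrality residual
            K = np.block([
                [Q,          -A.T,              -np.eye(n)],
                [A,           np.zeros((m, m)),  np.zeros((m, n))],
                [np.diag(s),  np.zeros((n, m)),  np.diag(x)],
            ])
            d = np.linalg.solve(K, -np.concatenate([rd, rp, rc]))
            dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
            # Conservative step length keeping x and s strictly positive.
            alpha = 1.0
            for v, dv in ((x, dx), (s, ds)):
                neg = dv < 0
                if np.any(neg):
                    alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
            x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
        return x, y, s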