Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models

Evaluation complexity for convexly constrained optimization is considered. It is first shown that the complexity bound of $O(\epsilon^{-3/2})$ proved by Cartis, Gould and Toint (IMAJNA 32(4), 2012, pp. 1662-1695) for computing an $\epsilon$-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are …
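
For reference, a standard first-order criticality measure in this convexly constrained setting (notation assumed here, with $\mathcal{F}$ the closed convex feasible set) is

$$\chi(x) \;=\; \Bigl|\, \min_{x+d \in \mathcal{F},\; \|d\| \le 1} \langle \nabla_x f(x), d \rangle \,\Bigr|,$$

and $x$ is an $\epsilon$-approximate first-order critical point when $\chi(x) \le \epsilon$. In the unconstrained case $\chi(x)$ reduces to $\|\nabla_x f(x)\|$.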

Improved Damped Quasi-Newton Methods for Unconstrained Optimization

Recently, Al-Baali (2014) extended the damped technique of Powell's (1978) modified BFGS method for Lagrangian functions in constrained optimization to the Broyden family of quasi-Newton methods for unconstrained optimization. Appropriate choices for the damped parameter, which maintain the global and superlinear convergence properties of these methods on convex functions and correct the Hessian approximations …
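
As a minimal sketch of the damping idea, here is Powell's (1978) rule written for a single BFGS update of a Hessian approximation; the variable names and the threshold phi = 0.2 are the conventional textbook choices, not necessarily the paper's:

    import numpy as np

    def damped_bfgs_update(B, s, y, phi=0.2):
        """One damped BFGS update of the Hessian approximation B.

        Powell's damping replaces y by a convex combination of y and B s
        so that the curvature condition s^T y_hat > 0 always holds,
        keeping B positive definite even when s^T y <= 0.
        """
        Bs = B @ s
        sBs = s @ Bs
        sy = s @ y
        # Damping parameter: theta = 1 leaves y (and hence plain BFGS) unchanged.
        theta = 1.0 if sy >= phi * sBs else (1.0 - phi) * sBs / (sBs - sy)
        y_hat = theta * y + (1.0 - theta) * Bs
        # Standard BFGS update, applied with the damped vector y_hat.
        return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)

The extension discussed in the abstract varies the damped parameter (and the Broyden family member) rather than fixing phi = 0.2.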

A Linear Scalarization Proximal Point Method for Quasiconvex Multiobjective Minimization

In this paper we propose a linear scalarization proximal point algorithm for solving arbitrary lower semicontinuous quasiconvex multiobjective minimization problems. Under some natural assumptions, and provided that the proximal parameters are bounded, we prove convergence of the sequence generated by the algorithm; when the objective functions are continuous, we further prove the …
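
A minimal sketch of one linear-scalarization proximal step, assuming smooth objective components and fixed positive weights (the paper treats the general lower semicontinuous quasiconvex case; all names below are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    def scalarized_prox_step(fs, weights, x_k, alpha):
        """One linear scalarization proximal point step: minimize
        sum_i w_i f_i(x) + (alpha/2) ||x - x_k||^2, i.e. a proximal
        regularization of the linearly scalarized objective."""
        def subproblem(x):
            scal = sum(w * f(x) for w, f in zip(weights, fs))
            return scal + 0.5 * alpha * np.dot(x - x_k, x - x_k)
        return minimize(subproblem, x_k).x

    # Toy bi-objective example (both components convex, hence quasiconvex):
    fs = [lambda x: (x[0] - 1.0) ** 2, lambda x: (x[0] + 1.0) ** 2]
    x = np.array([5.0])
    for _ in range(20):
        x = scalarized_prox_step(fs, [0.5, 0.5], x, alpha=1.0)
    print(x)  # approaches the Pareto-critical point at 0

Bounded proximal parameters correspond here to keeping alpha within a fixed interval across iterations.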

A New Trust Region Method with Simple Model for Large-Scale Optimization

In this paper, a new trust-region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. Using the generalized weak quasi-Newton equations, we derive several schemes for determining an appropriate scalar matrix as the Hessian approximation. Under some reasonable conditions, and within the trust-region framework, the global convergence …
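
The appeal of a scalar-matrix ("simple") model B_k = gamma_k * I is that the trust-region subproblem then has a closed-form solution. A sketch under that assumption, with a Barzilai-Borwein-type scalar standing in for the paper's choices derived from the generalized weak quasi-Newton equations:

    import numpy as np

    def simple_model_tr_step(g, gamma, delta):
        """Exact minimizer of g^T s + (gamma/2) ||s||^2 over ||s|| <= delta.

        If the unconstrained minimizer -g/gamma lies inside the trust
        region, take it; otherwise step to the boundary along -g."""
        gnorm = np.linalg.norm(g)
        if gamma > 0 and gnorm / gamma <= delta:
            return -g / gamma
        return -(delta / gnorm) * g

    def bb_scalar(s, y):
        """A Barzilai-Borwein-type scalar gamma, making B = gamma * I
        satisfy the secant condition B s = y in a least-squares sense."""
        return (y @ s) / (s @ s)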

Randomized Derivative-Free Optimization of Noisy Convex Functions

We propose STARS, a randomized derivative-free algorithm for unconstrained optimization when the function evaluations are contaminated with random noise. STARS takes dynamic, noise-adjusted smoothing step-sizes that minimize the least-squares error between the true directional derivative of a noisy function and its finite-difference approximation. We provide a convergence rate analysis of STARS for solving convex …
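
A crude sketch of one random-direction forward-difference step of this type; the noise-adjusted formulas for the smoothing step mu (the paper's contribution) are omitted, so mu and the step size h are plain inputs here:

    import numpy as np

    def fd_random_search_step(f, x, mu, h, rng):
        """Sample a Gaussian direction u, estimate the directional
        derivative of the noisy f by a forward difference with smoothing
        step mu, and move along -u scaled by that estimate."""
        u = rng.standard_normal(x.size)
        d = (f(x + mu * u) - f(x)) / mu  # noisy directional-derivative estimate
        return x - h * d * u

    # Toy run on a noisy convex quadratic:
    rng = np.random.default_rng(0)
    f = lambda x: x @ x + 1e-3 * rng.standard_normal()
    x = np.ones(10)
    for _ in range(2000):
        x = fd_random_search_step(f, x, mu=0.05, h=0.01, rng=rng)
    print(np.linalg.norm(x))  # drifts toward 0 despite the noise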

On the unimodality of METRIC Approximation subject to normally distributed demands

METRIC Approximation is a popular model in supply chain management. We prove that it has a unimodal objective function when the demands of the n retailers are normally distributed. This allows us to solve it with a convergent sequence, and the method leads to a closed-form equation of computational complexity O(n). Its solutions are …
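
Unimodality is what makes simple bracketing schemes safe. As a generic illustration (this is not the paper's closed-form method), a golden-section search provably converges to the minimizer of any unimodal function on an interval:

    import math

    def golden_section_min(f, a, b, tol=1e-8):
        """Minimize a unimodal f on [a, b]: comparing f at two interior
        points lets us discard part of the interval at every step
        without ever losing the minimizer."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - invphi * (b - a)
            else:
                a, c = c, d
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    print(golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0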

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most …
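
For orientation, the regularized model minimized at each iteration of such methods is, in one common convention (the regularization weight varies across papers; $\sigma_k/(p+1)!$ also appears),

$$m_k(s) \;=\; T_p(x_k,s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1}, \qquad T_p(x_k,s) \;=\; f(x_k) + \sum_{j=1}^{p} \frac{1}{j!}\,\nabla_x^j f(x_k)[s]^j,$$

where $\nabla_x^j f(x_k)[s]^j$ is the $j$-th derivative tensor of $f$ applied to $s$ in all $j$ arguments. The bound established in this framework is $O(\epsilon^{-(p+1)/p})$ evaluations, which recovers $O(\epsilon^{-2})$ for $p=1$ and the $O(\epsilon^{-3/2})$ of cubic regularization for $p=2$.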

On Solving L-SR1 Trust-Region Subproblems

In this article, we consider solvers for large-scale trust-region subproblems when the quadratic model is defined by a limited-memory symmetric rank-one (L-SR1) quasi-Newton matrix. We propose a solver that exploits the compact representation of L-SR1 matrices. Our approach makes use of both an orthonormal basis for the eigenspace of the L-SR1 matrix and the Sherman- …
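
The compact representation referred to here is, for SR1 (Byrd, Nocedal and Schnabel, 1994): with curvature pairs stacked as columns of S and Y and initial matrix B_0 = gamma * I, one has B = B_0 + Psi M Psi^T for a small dense M. A dense sketch that only illustrates the structure (a practical L-SR1 solver never forms B explicitly):

    import numpy as np

    def sr1_compact(S, Y, gamma):
        """Dense SR1 compact representation B = B0 + Psi M Psi^T with
        B0 = gamma * I, Psi = Y - gamma * S and
        M = (D + L + L^T - gamma * S^T S)^{-1},
        where S^T Y = L + D + (strictly upper part),
        L strictly lower triangular and D diagonal."""
        n, _ = S.shape
        StY = S.T @ Y
        D = np.diag(np.diag(StY))
        L = np.tril(StY, k=-1)
        Psi = Y - gamma * S
        # Assumes the middle matrix is nonsingular; in practice pairs
        # that make it (near-)singular are filtered out.
        M = np.linalg.inv(D + L + L.T - gamma * (S.T @ S))
        return gamma * np.eye(n) + Psi @ M @ Psi.T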

On the steepest descent algorithm for quadratic functions

The steepest descent algorithm with exact line searches (the Cauchy algorithm) is inefficient: it generates oscillating step lengths and a sequence of points converging to the span of the eigenvectors associated with the extreme eigenvalues. Performance improves markedly if a short step is taken once every (say) 10 iterations. We present a new method for …
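
A minimal sketch of the phenomenon and the fix on a quadratic f(x) = x^T A x / 2 - b^T x; the exact-line-search (Cauchy) step length is alpha_k = g^T g / g^T A g, and the occasional-short-step rule below (half the Cauchy step every 10th iteration) is illustrative, not the paper's rule:

    import numpy as np

    def steepest_descent_quadratic(A, b, x0, iters=200, period=10, frac=0.5):
        """Cauchy steepest descent with a periodically shortened step.

        Pure Cauchy steps zig-zag between the extreme eigendirections;
        shrinking the step once in a while breaks the oscillation."""
        x = x0.astype(float)
        for k in range(iters):
            g = A @ x - b
            alpha = (g @ g) / (g @ (A @ g))  # exact line-search step length
            if (k + 1) % period == 0:
                alpha *= frac  # occasional short step
            x = x - alpha * g
        return x

    # Ill-conditioned toy problem; the solution is A^{-1} b = (1, 0.01):
    A = np.diag([1.0, 100.0])
    b = np.array([1.0, 1.0])
    print(steepest_descent_quadratic(A, b, np.zeros(2)))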