First and second order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions such as min and max can be represented in the so-called abs-normal form. By an extension of algorithmic (or automatic) differentiation, one can then compute certain first- and second-order derivative vectors and matrices that represent a … Read more
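As a concrete instance of the rewriting that underlies this form, min and max reduce to a single absolute-value operation:

```latex
\max(u,v) = \tfrac{1}{2}\bigl(u+v+|u-v|\bigr),
\qquad
\min(u,v) = \tfrac{1}{2}\bigl(u+v-|u-v|\bigr),
```

so every piecewise linear elemental can be expressed through abs, which is the switching function the abs-normal form isolates.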

A note on robust descent in differentiable optimization

In this note, we recall two solutions to alleviate the catastrophic cancellations that occur when comparing function values in descent algorithms. The automatic finite-differencing approach of Dussault and Hamelin was shown to be useful for trust-region and line-search variants. The main original contribution is to successfully adapt the line-search strategy of Hager and Zhang for … Read more
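The cancellation problem itself is easy to reproduce. A minimal illustration (ours, not from the paper): when a sufficient-decrease test subtracts two nearly equal function values, the naive difference loses several significant digits compared with the same quantity expanded algebraically.

```python
# Catastrophic cancellation when comparing nearly equal function values,
# as arises in sufficient-decrease tests of descent algorithms.
def f(x):
    return 1.0 + x * x

x, h = 1.0, 1e-12
naive = f(x + h) - f(x)         # difference of two nearly equal numbers
expanded = 2.0 * x * h + h * h  # same quantity, expanded algebraically

# Relative discrepancy between the two ways of computing f(x+h) - f(x):
rel_err = abs(naive / expanded - 1.0)
print(rel_err)
```

On IEEE doubles the naive difference here carries a relative error several orders of magnitude above machine precision, which is exactly what derails descent tests near a minimizer.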

Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models

Evaluation complexity for convexly constrained optimization is considered, and it is first shown that the complexity bound of $O(\epsilon^{-3/2})$ proved by Cartis, Gould and Toint (IMAJNA 32(4), 2012, pp. 1662-1695) for computing an $\epsilon$-approximate first-order critical point can be obtained under significantly weaker assumptions. Moreover, the result is generalized to the case where high-order derivatives are … Read more

Improved Damped Quasi-Newton Methods for Unconstrained Optimization

Recently, Al-Baali (2014) has extended the damped technique in the modified BFGS method of Powell (1978) for Lagrangian functions in constrained optimization to the Broyden family of quasi-Newton methods for unconstrained optimization. Appropriate choices of the damped parameter, which maintain the global and superlinear convergence properties of these methods on convex functions and correct the Hessian approximations … Read more
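For reference, Powell's original damping replaces $y_k$ in the BFGS update by a convex combination with $B_k s_k$ (standard statement; the damped-parameter choices studied here generalize it):

```latex
\bar{y}_k = \theta_k y_k + (1-\theta_k) B_k s_k,
\qquad
\theta_k =
\begin{cases}
1, & s_k^T y_k \ge 0.2\, s_k^T B_k s_k,\\[4pt]
\dfrac{0.8\, s_k^T B_k s_k}{s_k^T B_k s_k - s_k^T y_k}, & \text{otherwise},
\end{cases}
```

which enforces $s_k^T \bar{y}_k \ge 0.2\, s_k^T B_k s_k > 0$ and hence keeps the updated Hessian approximation positive definite.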

A Linear Scalarization Proximal Point Method for Quasiconvex Multiobjective Minimization

In this paper we propose a linear scalarization proximal point algorithm for solving arbitrary lower semicontinuous quasiconvex multiobjective minimization problems. Under some natural assumptions, and using the condition that the proximal parameters are bounded, we prove the convergence of the sequence generated by the algorithm; when the objective functions are continuous, we prove the … Read more
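Schematically, a linear scalarization proximal point iteration for a vector objective $F=(f_1,\dots,f_m)$ takes the form (notation ours, to fix ideas; the paper's precise scheme may differ):

```latex
x^{k+1} \in \arg\min_{x}\;\Bigl\{\,\langle z_k, F(x)\rangle + \frac{\alpha_k}{2}\,\|x-x^k\|^2 \Bigr\},
\qquad z_k \in \mathbb{R}^m_+\setminus\{0\},
```

where the proximal parameters $\alpha_k$ are the quantities assumed bounded in the convergence analysis.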

A New Trust Region Method with Simple Model for Large-Scale Optimization

In this paper a new trust-region method with a simple model for solving large-scale unconstrained nonlinear optimization problems is proposed. By using the generalized weak quasi-Newton equations, we derive several schemes to determine an appropriate scalar matrix as the Hessian approximation. Under some reasonable conditions, and within the trust-region framework, the global convergence … Read more
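One appeal of a scalar-matrix model $B_k=\gamma_k I$ is that the trust-region subproblem has a closed-form solution. A minimal sketch (illustrative only; the paper derives $\gamma_k$ from generalized weak quasi-Newton equations, which we do not reproduce here):

```python
import numpy as np

def tr_step_scalar_model(g, gamma, delta):
    """Minimize m(s) = g @ s + 0.5 * gamma * ||s||^2 over ||s|| <= delta.

    With the Hessian approximated by the scalar matrix gamma * I, the
    minimizer lies along -g: the unconstrained step -g/gamma if it fits
    inside the trust region, otherwise the step to the boundary.
    """
    gnorm = np.linalg.norm(g)
    if gamma > 0 and gnorm / gamma <= delta:
        return -g / gamma              # interior (quasi-Newton) step
    return -(delta / gnorm) * g        # boundary (Cauchy-like) step

g = np.array([3.0, 4.0])                   # ||g|| = 5
print(tr_step_scalar_model(g, 2.0, 10.0))  # interior: -g/2
print(tr_step_scalar_model(g, 2.0, 1.0))   # boundary: unit step along -g
```

Because the subproblem is solved exactly in O(n) work, each iteration is cheap enough for large-scale problems, which is the motivation for the simple model.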

Randomized Derivative-Free Optimization of Noisy Convex Functions

We propose STARS, a randomized derivative-free algorithm for unconstrained optimization when the function evaluations are contaminated with random noise. STARS takes dynamic, noise-adjusted smoothing step-sizes that minimize the least-squares error between the true directional derivative of a noisy function and its finite difference approximation. We provide a convergence rate analysis of STARS for solving convex … Read more
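The core mechanism is easy to sketch: sample a random direction, estimate the directional derivative of the noisy function by a forward difference with a smoothing step-size, and descend along that direction. The following toy version is ours, not the authors' code; STARS additionally chooses the smoothing step-size from the noise level, whereas we simply fix it.

```python
import numpy as np

def noisy_f(x, sigma, rng):
    # Hypothetical noisy oracle: a smooth convex quadratic plus Gaussian noise.
    return 0.5 * np.dot(x, x) + sigma * rng.normal()

def rds_step(f, x, mu, lr, rng):
    """One randomized forward-difference descent step (illustrative sketch).

    Samples a Gaussian direction u, estimates the directional derivative of
    the noisy function f by a forward difference with smoothing step-size mu,
    and moves against that estimate.
    """
    u = rng.normal(size=x.shape)
    g = (f(x + mu * u) - f(x)) / mu   # noisy directional-derivative estimate
    return x - lr * g * u

rng = np.random.default_rng(0)
f = lambda z: noisy_f(z, sigma=1e-4, rng=rng)
x = np.ones(5)
for _ in range(500):
    x = rds_step(f, x, mu=1e-2, lr=0.05, rng=rng)
# x approaches the minimizer (the origin) up to noise-induced fluctuation
```

The trade-off STARS formalizes is visible here: a smaller `mu` reduces the finite-difference bias but amplifies the noise term `sigma / mu` in the derivative estimate.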

On the unimodality of METRIC Approximation subject to normally distributed demands

METRIC Approximation is a popular model for supply chain management. We prove that its objective function is unimodal when the demands of the n retailers are normally distributed. This allows us to solve it via a convergent sequence, which leads to a closed-form equation of computational complexity O(n). Its solutions are … Read more
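Unimodality is precisely what makes simple one-dimensional search provably convergent. For instance, golden-section search (a generic routine shown here for illustration, not the paper's closed-form method) needs nothing more than a unimodal objective:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search.

    Unimodality guarantees the bracket always contains the minimizer, so
    the method converges linearly: the bracket shrinks by ~0.618 per step.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                    # minimizer lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                    # minimizer lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Example: minimize the unimodal function (x - 2)^2 on [0, 5]
x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```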

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order $p$ (for $p\geq 1$) and to assume Lipschitz continuity of the $p$-th derivative, then an $\epsilon$-approximate first-order critical point can be computed in at most … Read more