An FPTAS for Optimizing a Class of Low-Rank Functions Over a Polytope

We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of nonlinear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme … Read more
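To make the construction concrete, here is a minimal sketch, not the paper's algorithm, for a rank-2 objective $f(x) = g(c_1^T x, c_2^T x)$ over $\{x : Ax \le b\}$, assuming $g$ is nondecreasing in each argument so that only (approximately) Pareto-optimal values of the two linear functions need to be inspected. The names g, c1, c2, A, b and the uniform grid are placeholders; the actual FPTAS uses a $(1+\epsilon)$-geometric grid to certify the approximation guarantee.

    import numpy as np
    from scipy.optimize import linprog

    def approx_min_rank2(g, c1, c2, A, b, num_levels=50):
        # Bracket the attainable range of c2.x (assumes a nonempty, bounded polytope).
        lo = linprog(c2, A_ub=A, b_ub=b, bounds=(None, None)).fun
        hi = -linprog(-c2, A_ub=A, b_ub=b, bounds=(None, None)).fun
        best_val, best_x = np.inf, None
        for t in np.linspace(lo, hi, num_levels):
            # Sweep an approximate Pareto front: minimize c1.x subject to c2.x <= t.
            res = linprog(c1, A_ub=np.vstack([A, c2]), b_ub=np.append(b, t),
                          bounds=(None, None))
            if res.success:
                val = g(c1 @ res.x, c2 @ res.x)
                if val < best_val:
                    best_val, best_x = val, res.x
        return best_val, best_x

Evaluating $g$ only along such a sweep is what keeps the number of LP subproblems small, rather than enumerating the polytope's vertices.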

A Note About The Complexity Of Minimizing Nesterov’s Smooth Chebyshev-Rosenbrock Function

This short note considers and resolves the apparent contradiction between known worst-case complexity results for first- and second-order methods for solving unconstrained smooth nonconvex optimization problems and a recent note by Jarre (2011) implying a very large lower bound on the number of iterations required to reach the solution’s neighbourhood for a specific problem with … Read more
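For readers who want to experiment, the sketch below implements one form of the smooth Chebyshev-Rosenbrock function that circulates in the literature; the exact variant, dimension and starting point analysed in the note should be taken from the note itself, so the formula here is an assumption used purely for illustration.

    import numpy as np

    def chebyshev_rosenbrock_smooth(x):
        # Assumed form: f(x) = (x_1 - 1)^2 / 4 + sum_{i=1}^{n-1} (x_{i+1} - 2 x_i^2 + 1)^2.
        # Its long, curved "valley" is what forces first- and second-order methods
        # to take enormous numbers of short steps.
        x = np.asarray(x, dtype=float)
        return 0.25 * (x[0] - 1.0) ** 2 + np.sum((x[1:] - 2.0 * x[:-1] ** 2 + 1.0) ** 2)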

A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations

This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447–3475, 2010), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to … Read more

Complexity bounds for second-order optimality in unconstrained optimization

This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis, Gould and Toint (2010) show that at most $O(\epsilon^{-3})$ iterations may have to be performed to find an iterate that is within $\epsilon$ of satisfying second-order optimality conditions. We first show … Read more
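As a reminder of the objects involved (standard background, not this paper's new results): at iterate $x_k$, the cubic regularization step approximately minimizes the model
\[
m_k(s) = f(x_k) + \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\,\|s\|^3,
\]
and the $O(\epsilon^{-3})$ bound counts the iterations needed before an iterate satisfies the approximate second-order conditions $\|\nabla f(x_k)\| \le \epsilon$ and $\lambda_{\min}(\nabla^2 f(x_k)) \ge -\epsilon$.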

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente … Read more
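The finite-difference ingredient in such derivative-free variants is elementary; the sketch below shows a plain forward-difference gradient estimate (the step size h and the way the estimate enters the regularized model are illustrative, not the paper's specific scheme), and it makes visible why each approximate gradient costs $n$ extra function evaluations.

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Forward-difference approximation of grad f(x); n extra evaluations of f per call.
        x = np.asarray(x, dtype=float)
        fx = f(x)
        g = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g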

A polynomial case of cardinality constrained quadratic optimization problem

In this paper we investigate a fixed-parameter polynomial algorithm for the cardinality constrained quadratic optimization problem, which is NP-hard in general. More specifically, we prove that, given a problem with $n$ decision variables and cardinality $s$, if, for some $0 … Read more

A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization

Penalty and interior-point methods for nonlinear optimization problems have enjoyed great success for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization … Read more

A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems with objective and constraint functions that may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for situations in which the objective and constraint functions are locally Lipschitz and continuously differentiable on … Read more

Identifying Active Manifolds in Regularization Problems

In this work we consider the problem $\min_x \{ f(x) + P(x) \}$, where $f$ is $\mathcal{C}^2$ and $P$ is nonsmooth but contains an underlying smooth substructure. Specifically, we assume the function $P$ is prox-regular and partly smooth with respect to an active manifold $\mathcal{M}$. Recent work by Tseng and Yun \cite{tseng-yun-2009} showed that such a … Read more
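For intuition only, take the simplest partly smooth regularizer, $P(x) = \lambda \|x\|_1$: its active manifold at a point is the set of vectors sharing that point's support, and a proximal-gradient step of the kind studied by Tseng and Yun identifies it by zeroing small coordinates. The sketch below is this generic special case, not the construction used in the paper; f, lam and the step size t are placeholders.

    import numpy as np

    def soft_threshold(z, tau):
        # Proximal operator of tau * ||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def prox_gradient_step(grad_f, x, lam, t):
        # One proximal-gradient step; the surviving support is the identified manifold.
        x_new = soft_threshold(x - t * grad_f(x), t * lam)
        active_support = x_new != 0
        return x_new, active_support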

Nonconvex Robust Optimization

We propose a novel robust optimization technique, which is applicable to nonconvex and simulation-based problems. Robust optimization finds decisions with the best worst-case performance under uncertainty. If constraints are present, decisions should also be feasible under perturbations. In the real world, many problems are nonconvex and involve computer-based simulations. In these applications, the relationship between decision … Read more
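The abstract does not state a formulation, but the generic robust counterpart it alludes to can be written, as illustrative shorthand only, as
\[
\min_{x} \; \max_{\Delta \in \mathcal{U}} \; f(x + \Delta),
\]
where $\mathcal{U}$ is the set of admissible implementation errors; when constraints $g_j$ are present, one additionally requires $g_j(x + \Delta) \le 0$ for all $\Delta \in \mathcal{U}$.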