Polynomial Root Radius Optimization with Affine Constraints

The root radius of a polynomial is the maximum of the moduli of its roots (zeros). We consider the following optimization problem: minimize the root radius over monic polynomials of degree $n$, with either real or complex coefficients, subject to $k$ consistent affine constraints on the coefficients. We show that there always exists an optimal … Read more
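Not the paper's optimization method, just a minimal sketch of the quantity being minimized: the root radius computed from the roots of a monic polynomial via numpy.roots. The coefficient ordering and the example cubic are illustrative assumptions.

    import numpy as np

    def root_radius(coeffs):
        # Maximum modulus over the roots of the polynomial whose
        # coefficients are given highest-degree first.
        return max(abs(z) for z in np.roots(coeffs))

    # Example: the monic cubic p(z) = z^3 - 2z + 1.
    print(root_radius([1.0, 0.0, -2.0, 1.0]))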

Inexact Proximal Point Methods for Quasiconvex Minimization on Hadamard Manifolds

In this paper we present two inexact proximal point algorithms to solve minimization problems for quasiconvex objective functions on Hadamard manifolds. We prove that under natural assumptions the sequences generated by the algorithms are well defined and converge to critical points of the problem. We also present an application of the method to demand theory … Read more
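As a point of reference (my assumption, not the paper's algorithm), here is the exact proximal point iteration in the simplest Hadamard manifold, R^n with the flat metric: x_{k+1} minimizes f(y) + ||y - x_k||^2 / (2*lam). The quasiconvex test function and the use of scipy for the inner subproblem are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def proximal_point(f, x0, lam=1.0, iters=30):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            # Inner subproblem, solved (inexactly) by a generic solver.
            x = minimize(lambda y: f(y) + np.sum((y - x) ** 2) / (2 * lam), x).x
        return x

    # Quasiconvex but nonconvex: a monotone transform of a convex norm.
    a = np.array([1.0, -2.0])
    f = lambda x: np.sqrt(np.linalg.norm(x - a))
    print(proximal_point(f, x0=np.zeros(2)))  # should approach a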

Nonsmooth Methods for Control Design with Integral Quadratic Constraints

We develop an optimization technique to compute local solutions to synthesis problems subject to integral quadratic constraints (IQCs). We use the fact that IQCs may be transformed into semi-infinite maximum eigenvalue constraints over the frequency axis and approach them via nonsmooth optimization methods. We develop a suitable spectral bundle method and prove its convergence in … Read more
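A small sketch of the constraint structure rather than the bundle method itself: a semi-infinite constraint lambda_max(F(omega)) <= 0 over the frequency axis, checked naively on a grid. The Hermitian-valued F below is a made-up example.

    import numpy as np

    def max_eig_over_grid(F, omegas):
        # np.linalg.eigvalsh returns eigenvalues in ascending order.
        return max(np.linalg.eigvalsh(F(w))[-1] for w in omegas)

    def F(w):
        # An example Hermitian matrix depending on the frequency w.
        return np.array([[-1.0, 1j * w],
                         [-1j * w, -2.0 + 0.1 * w ** 2]])

    omegas = np.linspace(-10.0, 10.0, 2001)
    # A nonpositive value means the eigenvalue constraint holds on the grid.
    print(max_eig_over_grid(F, omegas))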

An optimal subgradient algorithm for large-scale bound-constrained convex optimization

This paper shows that the OSGA algorithm — which uses first-order information to solve convex optimization problems with optimal complexity — can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. … Read more
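Not OSGA itself, whose rational subproblem is the subject of the paper, but a minimal projected-gradient baseline for the same problem class, min f(x) subject to l <= x <= u, to fix notation. The quadratic instance and step size are illustrative assumptions.

    import numpy as np

    def projected_gradient(grad_f, x0, l, u, step, iters=500):
        x = np.clip(np.asarray(x0, dtype=float), l, u)
        for _ in range(iters):
            # Gradient step followed by projection onto the box [l, u].
            x = np.clip(x - step * grad_f(x), l, u)
        return x

    # Example: min ||A x - b||^2 over the box [0, 1]^n.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    L = 2 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    print(projected_gradient(lambda x: 2 * A.T @ (A @ x - b),
                             np.zeros(5), 0.0, 1.0, step=1.0 / L))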

An optimal subgradient algorithm for large-scale convex optimization in simple domains

This paper shows that the optimal subgradient algorithm, OSGA, proposed in [NeuO] can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective with a … Read more
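A "simple domain" in this setting typically means a set or regularizer whose proximal mapping is cheap in closed form. One standard example (an assumption, not taken from the paper) is the l1 norm, whose prox is coordinatewise soft thresholding:

    import numpy as np

    def prox_l1(x, lam):
        # prox of lam*||.||_1: sign(x) * max(|x| - lam, 0), coordinatewise.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    print(prox_l1(np.array([3.0, -0.5, 1.2]), lam=1.0))  # [2., -0., 0.2]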

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. The sequence of iterates generated by the algorithm converges to a critical point of the objective function provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is … Read more
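A generic inertial forward-backward template of this kind (the exact update rule, step sizes, and the l1-regularized least-squares test instance are illustrative assumptions): extrapolate with the inertial term, take a gradient step on the smooth part, then a proximal step on the nonsmooth part.

    import numpy as np

    def prox_l1(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def inertial_fb(grad_f, prox_g, x0, alpha, beta=0.5, iters=300):
        x_prev = x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = x + beta * (x - x_prev)          # inertial extrapolation
            x_prev, x = x, prox_g(y - alpha * grad_f(y), alpha)
        return x

    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    alpha = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
    print(inertial_fb(lambda x: 2 * A.T @ (A @ x - b),
                      lambda v, t: prox_l1(v, 0.1 * t), np.zeros(10), alpha))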

Randomized First-order Methods for Saddle Point Optimization

In this paper, we present novel randomized algorithms for solving saddle point problems whose dual feasible region is a direct product of many convex sets. Our algorithms can achieve ${\cal O}(1/N)$ rate of convergence by solving only one dual subproblem at each iteration. Our algorithms can also achieve ${\cal O}(1/N^2)$ rate of convergence if a … Read more
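A sketch of the randomized ingredient (step sizes, the instance, and the exact update are illustrative assumptions, not the paper's algorithm): the dual variable lives in a product of blocks, and each iteration updates and projects only one randomly chosen block.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, d = 5, 4, 6                    # m dual blocks of size n, primal dim d
    K = rng.standard_normal((m, n, d))
    c = rng.standard_normal(d)
    x, y = np.zeros(d), np.zeros((m, n))
    tau = sigma = 0.1

    # Saddle function: 0.5*||x - c||^2 + sum_i <K_i x, y_i>, ||y_i|| <= 1.
    for _ in range(2000):
        i = rng.integers(m)                       # pick one dual block
        y[i] += sigma * K[i] @ x                  # ascent step on that block only
        y[i] /= max(1.0, np.linalg.norm(y[i]))    # project onto the unit ball
        x -= tau * ((x - c) + np.einsum('ijk,ij->k', K, y))  # primal descent
    print(x)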

An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems

We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. The convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is for instance fulfilled for semi-algebraic functions. … Read more
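A sketch of a Tseng-type step with inertia (step sizes and the test instance are illustrative assumptions): from the extrapolated point, take a forward step on the smooth part, a backward proximal step on the nonsmooth part, then the extra correcting forward step that distinguishes Tseng's scheme from plain forward-backward.

    import numpy as np

    def prox_l1(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def inertial_fbf(grad_f, prox_g, x0, gamma, beta=0.3, iters=300):
        x_prev = x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            w = x + beta * (x - x_prev)                  # inertial extrapolation
            y = prox_g(w - gamma * grad_f(w), gamma)     # forward-backward step
            x_prev, x = x, y - gamma * (grad_f(y) - grad_f(w))  # correction
        return x

    rng = np.random.default_rng(3)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    gamma = 0.5 / (2 * np.linalg.norm(A, 2) ** 2)
    print(inertial_fbf(lambda x: 2 * A.T @ (A @ x - b),
                       lambda v, t: prox_l1(v, 0.1 * t), np.zeros(10), gamma))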

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy … Read more
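A rough sketch of the gradient sampling ingredient (the sampling radius, the SLSQP solve of the small QP, and the test function are illustrative assumptions): sample gradients in a ball around x and take the minimum-norm vector in their convex hull as a stabilized descent direction.

    import numpy as np
    from scipy.optimize import minimize

    def gs_direction(grad_f, x, eps=0.1, m=10, seed=4):
        rng = np.random.default_rng(seed)
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
        G = np.vstack([grad_f(p) for p in pts] + [grad_f(x)])
        # Min-norm element of conv{g_1, ..., g_{m+1}}: a small QP over the simplex.
        Q = G @ G.T
        cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
        lam0 = np.full(len(G), 1.0 / len(G))
        lam = minimize(lambda lam: lam @ Q @ lam, lam0, method='SLSQP',
                       bounds=[(0.0, 1.0)] * len(G), constraints=cons).x
        return -G.T @ lam   # negative of the min-norm convex combination

    # Nonsmooth test: gradient of f(x) = ||x||_1 + 0.5*||x||^2 (a.e.).
    print(gs_direction(lambda x: np.sign(x) + x, np.array([1.0, -2.0, 0.5])))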

Forward-backward truncated Newton methods for convex composite optimization

This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the … Read more
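A sketch of the FBE itself for f smooth and g = lam*||.||_1 (the quadratic f and all parameters are illustrative assumptions): FBE_gamma(x) = f(x) - (gamma/2)*||grad f(x)||^2 + g^gamma(x - gamma*grad f(x)), where g^gamma is the Moreau envelope of g, evaluated via the prox.

    import numpy as np

    def prox_l1(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fbe(f, grad_f, x, gamma, lam):
        z = x - gamma * grad_f(x)          # forward (gradient) step
        p = prox_l1(z, gamma * lam)        # backward (proximal) step
        moreau = lam * np.abs(p).sum() + np.sum((p - z) ** 2) / (2 * gamma)
        return f(x) - gamma * np.sum(grad_f(x) ** 2) / 2 + moreau

    rng = np.random.default_rng(5)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    f = lambda x: np.sum((A @ x - b) ** 2)
    gf = lambda x: 2 * A.T @ (A @ x - b)
    gamma, lam, x = 0.5 / (2 * np.linalg.norm(A, 2) ** 2), 0.1, np.zeros(10)
    # FBE_gamma(x) <= f(x) + lam*||x||_1, with equality at fixed points.
    print(fbe(f, gf, x, gamma, lam), f(x) + lam * np.abs(x).sum())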