Inexact Proximal Point Methods for Quasiconvex Minimization on Hadamard Manifolds

In this paper we present two inexact proximal point algorithms for minimizing quasiconvex objective functions on Hadamard manifolds. We prove that, under natural assumptions, the sequences generated by the algorithms are well defined and converge to critical points of the problem. We also present an application of the method to demand theory … Read more
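For orientation, the exact proximal point iteration that such algorithms relax can be written with the Riemannian distance $d$ on the Hadamard manifold $M$ in place of the Euclidean distance (a schematic form only; the inexact variants described above allow each subproblem to be solved approximately):

```latex
x^{k+1} \in \operatorname*{arg\,min}_{x \in M}
\left\{ f(x) + \frac{\lambda_k}{2}\, d\!\left(x, x^k\right)^2 \right\}
```

Here $\lambda_k > 0$ is the regularization parameter of iteration $k$; on a Hadamard manifold the squared distance is geodesically convex, which keeps each subproblem tractable.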

Solving disjunctive optimization problems by generalized semi-infinite optimization techniques

We describe a new way to model disjunctive optimization problems as generalized semi-infinite programs. In contrast to existing methods, our approach requires neither a conjunctive nor a disjunctive normal form. Applying existing lower-level reformulations to the corresponding semi-infinite program, we derive conjunctive nonlinear problems without any logical expressions, which can be locally … Read more

Optimality and complexity for constrained optimization problems with nonconvex regularization

In this paper, we consider a class of constrained optimization problems where the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. Such regularizers include the widely used SCAD, MCP, logistic, fraction, hard thresholding and non-Lipschitz $L_p$ penalties as special cases. Using the theory of the generalized directional … Read more
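Among the penalties listed, the hard-thresholding ($L_0$-type) penalty admits a closed-form proximal map, which the following standalone sketch illustrates (this shows one regularizer from the family in isolation, not the paper's algorithm; `lam` denotes the penalty weight):

```python
import numpy as np

def hard_threshold(v, lam):
    """Proximal operator of the hard-thresholding penalty lam * ||x||_0.

    Minimizing lam * 1[x != 0] + (x - v)^2 / 2 componentwise keeps an
    entry v_i when v_i^2 / 2 < lam, i.e. when v_i^2 > 2 * lam, and zeros
    it otherwise.
    """
    v = np.asarray(v, dtype=float)
    return np.where(v**2 > 2.0 * lam, v, 0.0)
```

The discontinuity of this map at the threshold is one source of the nonconvexity and nonsmoothness the abstract refers to.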

Nonsmooth Methods for Control Design with Integral Quadratic Constraints

We develop an optimization technique to compute local solutions to synthesis problems subject to integral quadratic constraints (IQCs). We use the fact that IQCs may be transformed into semi-infinite maximum eigenvalue constraints over the frequency axis and approach them via nonsmooth optimization methods. We develop a suitable spectral bundle method and prove its convergence in … Read more
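Schematically, the transformation mentioned above turns each IQC into a semi-infinite maximum-eigenvalue constraint of the following shape, where $\kappa$ collects the controller variables and $F$ is a hypothetical Hermitian-matrix-valued function assembled from the plant data and the IQC multiplier (a sketch of the constraint's form, not the paper's exact formulation):

```latex
\lambda_{\max}\bigl( F(\kappa, j\omega) \bigr) \;\le\; 0
\quad \text{for all } \omega \in [0, \infty]
```

Nonsmoothness enters twice: through the maximum eigenvalue function and through the semi-infinite maximization over the frequency $\omega$, which is what the spectral bundle method is designed to handle.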

Metric subregularity of composition set-valued mappings with applications to fixed point theory

In this paper we underline the importance of the parametric subregularity property of set-valued mappings, defined with respect to fixed sets. We show that this property appears naturally for some very simple mappings which play an important role in the theory of metric regularity. We prove a result concerning the preservation of metric subregularity at … Read more

Trust-region methods without using derivatives: Worst case complexity and the non-smooth case

Trust-region methods are a broad class of methods for continuous optimization that have found application in a variety of problems and contexts. In particular, they have been studied and applied to problems for which derivatives are unavailable. The analysis of trust-region derivative-free methods has focused on global convergence, and they have been proved to generate a sequence of … Read more
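The trust-region mechanism referred to above can be sketched in a minimal Euclidean form: a model gradient is estimated by forward differences (standing in for the interpolation models that derivative-free trust-region methods actually build), and the radius is updated by the usual ratio test. This is an illustration of the mechanism, not the methods analyzed in the paper:

```python
import numpy as np

def dfo_trust_region(f, x0, delta=1.0, iters=200, eta=0.1, delta_min=1e-12):
    """Minimal derivative-free trust-region loop (illustrative sketch).

    The gradient of a linear model is estimated by forward differences
    with a stencil tied to the radius; the trial step is the model
    minimizer on the trust region, accepted or rejected by a ratio test.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while iters > 0 and delta > delta_min:
        iters -= 1
        h = 1e-3 * delta  # finite-difference step shrinks with the radius
        g = np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])
        gn = np.linalg.norm(g)
        if gn == 0.0:
            break
        s = -(delta / gn) * g           # minimizer of the linear model on the ball
        fs = f(x + s)
        rho = (fx - fs) / (delta * gn)  # actual vs. predicted reduction
        if rho >= eta:                  # successful step: accept, maybe expand
            x, fx = x + s, fs
            if rho > 0.75:
                delta *= 2.0
        else:                           # unsuccessful step: shrink the radius
            delta *= 0.5
    return x, fx
```

Tying the finite-difference step to the trust-region radius mirrors the way derivative-free methods keep their model accuracy proportional to the region in which the model is trusted.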

LP formulations for mixed-integer polynomial optimization problems

We present polynomial-time algorithms for constrained optimization problems where the intersection graph of the constraint set has bounded tree-width. In the case of binary variables we obtain exact, polynomial-size linear programming formulations for the problem. In the mixed-integer case with bounded variables we obtain polynomial-size linear programming representations that attain guaranteed optimality and feasibility bounds. … Read more

Regularity of collections of sets and convergence of inexact alternating projections

We study the use of regularity properties of collections of sets in the convergence analysis of alternating projection methods for solving feasibility problems. Several equivalent characterizations of these properties are provided. Two settings of inexact alternating projections are considered and the corresponding convergence estimates are established and discussed. … Read more
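As a standalone Euclidean illustration of the exact case (the paper treats inexact projections and regularity of general collections of sets), alternating projections between a ball and a halfspace with nonempty intersection converge to a common point:

```python
import numpy as np

def proj_ball(z, r=1.0):
    """Euclidean projection onto the ball ||z|| <= r."""
    n = np.linalg.norm(z)
    return z if n <= r else (r / n) * z

def proj_halfspace(z, a, b):
    """Euclidean projection onto the halfspace a . z >= b."""
    s = a @ z
    return z if s >= b else z + ((b - s) / (a @ a)) * a

def alternating_projections(z, a, b, iters=200):
    """Plain (exact) alternating projections between the two sets."""
    z = np.asarray(z, dtype=float)
    for _ in range(iters):
        z = proj_ball(proj_halfspace(z, a, b))
    return z

# Unit ball intersected with the halfspace x + y >= 1 (interior nonempty),
# so the regularity needed for linear convergence holds here.
a, b = np.array([1.0, 1.0]), 1.0
z = alternating_projections(np.array([3.0, -2.0]), a, b)
```

Because the two sets here intersect transversally, the iterates converge linearly; the regularity properties studied in the paper generalize exactly this kind of condition.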

An optimal subgradient algorithm for large-scale bound-constrained convex optimization

This paper shows that the OSGA algorithm — which uses first-order information to solve convex optimization problems with optimal complexity — can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. … Read more

An optimal subgradient algorithm for large-scale convex optimization in simple domains

This paper shows that the optimal subgradient algorithm, OSGA, proposed in \cite{NeuO} can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective with a … Read more