A general merit function-based globally convergent framework for nonlinear optimization

In this paper, we revisit the convergence theory of the inexact restoration paradigm for nonlinear optimization. We first identify the basic elements of a globally convergent method based on merit functions, and then introduce the inexact restoration method, which employs a two-phase iteration, as a special case. A specific implementation is presented that is … Read more
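
As a point of reference for what "merit function-based" typically means in this literature (an assumption on our part; the snippet does not state the paper's exact choice), the classical inexact-restoration merit function blends objective value and infeasibility:

\[
\Phi(x,\theta) \;=\; \theta\, f(x) \;+\; (1-\theta)\,\|c(x)\|, \qquad \theta \in (0,1),
\]

where \(f\) is the objective, \(c\) the constraint residual, and \(\theta\) a weight updated across iterations. The two-phase iteration first reduces \(\|c(x)\|\) (restoration phase) and then reduces \(\Phi\) from the approximately restored point (optimality phase).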

Mixed-Integer Bilevel Optimization with Nonconvex Quadratic Lower-Level Problems: Complexity and a Solution Method

We study bilevel problems with a convex quadratic mixed-integer upper-level problem and a nonconvex quadratic, purely continuous lower-level problem. We prove $\Sigma_2^p$-hardness of this class of problems, derive an iterative lower- and upper-bounding scheme, and show its finiteness and correctness in the sense that it computes globally optimal points or proves infeasibility of the instance. To … Read more

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of … Read more
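
The restart idea itself is easy to illustrate. Below is a minimal sketch of a textbook FISTA with a function-value restart in the style of O'Donoghue and Candès; it is not the paper's RPF-SFISTA (which additionally estimates the Lipschitz and strong convexity parameters on the fly), and the function names are ours:

```python
import numpy as np

def restarted_fista(grad, x0, L, f=None, max_iter=1000):
    """Textbook FISTA with a function-value restart (O'Donoghue & Candes style).

    grad: gradient of a smooth convex objective; L: Lipschitz constant of grad;
    f: objective value, used only for the restart test. Illustrative sketch only.
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    f_prev = np.inf
    for _ in range(max_iter):
        x_new = y - grad(y) / L                       # gradient step at extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2     # Nesterov momentum parameter
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation
        if f is not None:
            f_val = f(x_new)
            if f_val > f_prev:                        # objective went up: kill the momentum
                y, t_new = x_new.copy(), 1.0
            f_prev = f_val
        x, t = x_new, t_new
    return x
```

On a strongly convex problem, this restart heuristic empirically recovers linear convergence without the strong convexity parameter ever being supplied, which is the behavior parameter-free schemes of this kind aim to certify.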

Examples of slow convergence for adaptive regularization optimization methods are not isolated

The adaptive regularization algorithm for unconstrained nonconvex optimization was shown in Nesterov and Polyak (2006) and Cartis, Gould and Toint (2011) to require, under standard assumptions, at most \(O(\epsilon^{-3/(3-q)})\) evaluations of the objective function and its derivatives of degrees one and two to produce an \(\epsilon\)-approximate critical point of order \(q \in \{1,2\}\). This bound … Read more
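
Instantiating the exponent at the two admissible orders recovers the familiar special cases:

\[
q=1:\; O\big(\epsilon^{-3/(3-1)}\big) = O\big(\epsilon^{-3/2}\big),
\qquad
q=2:\; O\big(\epsilon^{-3/(3-2)}\big) = O\big(\epsilon^{-3}\big).
\]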

Horoballs and the subgradient method

To explore convex optimization on Hadamard spaces, we consider an iteration in the style of a subgradient algorithm. Traditionally, such methods assume that the underlying spaces are manifolds and that the objectives are geodesically convex: the methods are described using tangent spaces and exponential maps. By contrast, our iteration applies in a general Hadamard space, … Read more
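
For contrast with the Hadamard-space setting, here is the flat Euclidean special case of such an iteration, a plain subgradient method; in the paper's setting the straight-line step below would be replaced by a step defined via horoballs rather than tangent spaces and exponential maps. The step-size schedule and example objective are our own illustrative choices:

```python
import numpy as np

def subgradient_method(subgrad, x0, step=lambda k: 1.0 / np.sqrt(k + 1), num_iter=500):
    """Euclidean subgradient iteration x_{k+1} = x_k - alpha_k g_k.

    subgrad: returns some subgradient of a convex objective at x;
    step:    step-size schedule alpha_k (diminishing, non-summable).
    """
    x = x0.astype(float).copy()
    for k in range(num_iter):
        g = subgrad(x)
        if np.linalg.norm(g) == 0.0:   # zero subgradient: x is a minimizer
            break
        x = x - step(k) * g            # straight-line step against a subgradient
    return x

# Example: minimize f(x) = ||x||_1, whose subdifferential contains sign(x).
x_star = subgradient_method(np.sign, np.array([3.0, -2.0, 1.0]))
```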

The Balanced Facility Location Problem: Complexity and Heuristics

In a recent work, Schmitt and Singh propose a new quadratic facility location model to address ecological challenges faced by policymakers in Bavaria, Germany. Building on this previous work, we significantly extend our understanding of this new problem. We establish connections to traditional combinatorial optimization models and show that the problem is NP-hard. We then develop … Read more

Mind the \(\tilde{O}\): asymptotically better, but still impractical, quantum distributed algorithms

The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited. Messages of only \(O(\log n)\) bits of information each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate … Read more

On the Number of Pivots of Dantzig’s Simplex Methods for Linear and Convex Quadratic Programs

Refining and extending works by Ye and Kitahara-Mizuno, this paper presents new results on the number of pivots of simplex-type methods for solving linear programs of the Leontief kind, certain linear complementarity problems of the P kind, and nonnegative constrained convex quadratic programs. Our results contribute to the further understanding of the complexity and efficiency … Read more

The complexity of first-order optimization methods from a metric perspective

A central tool for understanding first-order optimization algorithms is the Kurdyka-Lojasiewicz (KL) inequality. Standard approaches to such methods rely crucially on this inequality to leverage sufficient decrease conditions involving gradients or subgradients. However, the KL property fundamentally concerns not subgradients but rather “slope”, a purely metric notion. By highlighting this view, and avoiding any use of … Read more
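
In its standard form (the paper's precise assumptions may differ), the slope of \(f\) at \(x\) and the KL inequality expressed through it read

\[
|\nabla f|(x) \;=\; \limsup_{y \to x,\ y \neq x} \frac{\big(f(x)-f(y)\big)^{+}}{d(x,y)},
\qquad
\varphi'\big(f(x)-f(\bar x)\big)\,|\nabla f|(x) \;\ge\; 1,
\]

for all \(x\) near \(\bar x\) with \(f(\bar x) < f(x) < f(\bar x) + \eta\), where \((\cdot)^{+} = \max\{\cdot, 0\}\) and \(\varphi\) is a concave desingularizing function with \(\varphi(0)=0\) and \(\varphi' > 0\). Note that \(|\nabla f|\) requires only a metric \(d\), not a linear or smooth structure.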

A New Inexact Proximal Linear Algorithm with Adaptive Stopping Criteria for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose a new inexact proximal linear algorithm in which the subproblem is solved only approximately; our main contributions are two adaptive stopping criteria for this subproblem. The convergence behavior of the proposed methods is analyzed. Through experiments on … Read more
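
For orientation, here is the textbook prox-linear subproblem for the robust phase retrieval loss \(f(x) = \frac{1}{m}\sum_{i}|\langle a_i,x\rangle^2 - b_i|\) (the paper's exact formulation and stopping criteria are in the full text): each iteration linearizes the smooth inner map \(\langle a_i,\cdot\rangle^2\) at the current point and solves, approximately,

\[
x_{k+1} \;\approx\; \operatorname*{arg\,min}_{x}\;
\frac{1}{m}\sum_{i=1}^{m}\Big|\,\langle a_i,x_k\rangle^2
+ 2\,\langle a_i,x_k\rangle\,\langle a_i,\,x-x_k\rangle - b_i\,\Big|
\;+\; \frac{1}{2t}\,\|x-x_k\|^2,
\]

and the "\(\approx\)" is precisely where adaptive stopping criteria for the inexact subproblem solves enter.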