Multilevel Optimization Methods: Convergence and Problem Structure

Building upon multigrid methods, the framework of multilevel optimization was developed to solve structured optimization problems, including problems in optimal control and image processing. In this paper, we give a broader view of the multilevel framework and establish connections between multilevel algorithms and other approaches. An interesting case of the so-called …
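As a rough illustration of the two-level idea behind such methods (a sketch under simplifying assumptions, not the paper's framework; all names are illustrative), the following Python snippet performs one coarse-correction step on a toy quadratic $f_h(x)=\tfrac12 x^\top A x - b^\top x$: the fine gradient is restricted, a first-order-coherent coarse model is minimized exactly, and the prolongated correction is accepted after backtracking.

```python
import numpy as np

def restriction(n_coarse, n_fine):
    # Full-weighting-style restriction: average pairs of fine variables.
    R = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        R[i, 2 * i] = R[i, 2 * i + 1] = 0.5
    return R

def mgopt_step(A, b, x, R):
    P = 2.0 * R.T                         # prolongation (scaled transpose)
    grad = A @ x - b                      # fine-level gradient
    A_H = R @ A @ P                       # Galerkin coarse Hessian
    xH0 = R @ x
    # First-order coherence: the coarse model's gradient at xH0 must
    # equal the restricted fine gradient R @ grad.
    v = R @ grad - (A_H @ xH0 - R @ b)
    xH = np.linalg.solve(A_H, R @ b - v)  # minimize corrected coarse model
    d = P @ (xH - xH0)                    # prolongated coarse correction
    f = lambda z: 0.5 * z @ A @ z - b @ z
    t = 1.0
    while f(x + t * d) > f(x) and t > 1e-8:
        t *= 0.5                          # simple backtracking safeguard
    return x + t * d

n = 16
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian
x = np.zeros(n)
for _ in range(5):
    x = mgopt_step(A, np.ones(n), x, restriction(n // 2, n))
```

The correction term `v` is what ties the levels together: it makes the prolongated coarse correction a descent direction for the fine objective whenever the restricted gradient is nonzero.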

A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

We develop a new notion of second-order complementarity with respect to the tangent subspace, related to second-order necessary optimality conditions, by introducing so-called tangent multipliers. We prove that, around a local minimizer, a second-order stationarity residual can be driven to zero while controlling the growth of the Lagrange multipliers and tangent multipliers, which gives …
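For context, the classical second-order necessary condition that such residuals measure reads, at a local minimizer $x^*$ with Lagrange multiplier $\lambda$,

$$\nabla_x L(x^*,\lambda) = 0, \qquad d^\top \nabla^2_{xx} L(x^*,\lambda)\, d \;\ge\; 0 \quad \text{for all } d \in C(x^*),$$

where $L$ is the Lagrangian and $C(x^*)$ is the critical cone; the tangent multipliers introduced in the paper refine this statement, and their exact role is spelled out in the full text.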

Quadratic regularization with cubic descent for unconstrained optimization

Cubic-regularization and trust-region methods with worst-case first-order complexity $O(\varepsilon^{-3/2})$ and worst-case second-order complexity $O(\varepsilon^{-3})$ have been developed in the last few years. In this paper, it is proved that the same complexities are achieved by means of a quadratic regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction-based descent. …
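A minimal sketch of the kind of step this describes (the parameter choices and the update rule for the regularization parameter $\sigma$ are illustrative, not the paper's): solve the quadratically regularized Newton system and accept the trial step only if it delivers cubic sufficient descent.

```python
import numpy as np

def quad_reg_step(f, grad, hess, x, sigma, alpha=1e-4):
    g, H = grad(x), hess(x)
    I = np.eye(len(x))
    while True:
        # Trial step: minimize g^T s + 0.5 s^T H s + 0.5 sigma ||s||^2,
        # i.e. solve the regularized Newton system (H + sigma I) s = -g.
        try:
            s = np.linalg.solve(H + sigma * I, -g)
        except np.linalg.LinAlgError:
            sigma *= 10.0
            continue
        # Cubic sufficient-descent test in place of predicted reduction:
        if f(x + s) <= f(x) - alpha * np.linalg.norm(s) ** 3:
            return x + s, max(sigma / 2.0, 1e-8)  # accept, relax sigma
        sigma *= 10.0                             # reject, regularize harder
```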

On the local convergence analysis of the Gradient Sampling method

The Gradient Sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using just first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This manuscript aims at determining under which circumstances one can expect the same local …
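A bare-bones sketch of one gradient sampling direction (the actual method adds a sampling-radius schedule, line search safeguards, and differentiability checks): sample gradients near the current point and take the minimum-norm element of their convex hull.

```python
import numpy as np
from scipy.optimize import minimize

def gs_direction(grad, x, eps=0.1, m=10, seed=0):
    rng = np.random.default_rng(seed)
    # Gradients at x and at m randomly sampled nearby points.
    pts = [x] + [x + eps * rng.standard_normal(x.size) for _ in range(m)]
    G = np.array([grad(p) for p in pts])
    # Minimum-norm element of conv{g_0, ..., g_m}:
    #   min ||G^T w||^2  s.t.  w >= 0,  sum(w) = 1.
    w0 = np.full(len(pts), 1.0 / len(pts))
    res = minimize(lambda w: (G.T @ w) @ (G.T @ w), w0,
                   bounds=[(0, 1)] * len(pts),
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1})
    return -(G.T @ res.x)   # approximate steepest descent direction
```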

Error bounds for nonlinear semidefinite optimization

In this paper, error bounds for the nonlinear semidefinite optimization problem are considered. We assume the second-order sufficient condition, the strict complementarity condition, and the MFCQ condition at the KKT point. The nondegeneracy condition is not assumed in this paper. Therefore, the Jacobian operator of the equality part of the KKT conditions is not assumed …
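Schematically, results of this type bound the distance to the KKT solution set by a computable residual (the notation here is generic, not the paper's): for all points near the KKT point,

$$\operatorname{dist}\bigl((x,\mu,\Omega),\,\mathcal{S}\bigr) \;\le\; \kappa\,\bigl\| r_{\mathrm{KKT}}(x,\mu,\Omega) \bigr\|,$$

where $\mathcal{S}$ is the set of KKT triples, $r_{\mathrm{KKT}}$ collects the stationarity, feasibility, and complementarity residuals, and $\kappa > 0$ is a constant depending on the problem data.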

Complete mixed integer linear programming formulations for modularity density based clustering

Modularity density maximization is a clustering method that addresses some shortcomings of the commonly used modularity maximization approach. Recently, some Mixed-Integer Linear Programming (MILP) reformulations have been proposed in the literature for the modularity density maximization problem, but they require as input the solution of a set of auxiliary binary Non-Linear Programs (NLPs). These can become …
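For reference, one common form of the objective being maximized (conventions vary slightly across the MILP papers) sums, over communities, internal edges minus cut edges, scaled by community size; a small Python check:

```python
import numpy as np

def modularity_density(adj, communities):
    # D = sum over communities c of (2*L_in(c) - L_out(c)) / |c|,
    # with L_in(c) the edges inside c and L_out(c) the edges leaving c.
    D = 0.0
    for c in communities:
        c = list(c)
        inside = adj[np.ix_(c, c)].sum() / 2.0    # internal edges
        outside = adj[c, :].sum() - 2.0 * inside  # edges leaving c
        D += (2.0 * inside - outside) / len(c)
    return D

# Two triangles joined by a single edge: D = 2 * (6 - 1) / 3 = 10/3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity_density(A, [{0, 1, 2}, {3, 4, 5}]))
```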

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate, give explicit expressions for step-size choices …
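A serial emulation of the aggregated-gradient idea (delays, the exact prox operator, and the paper's step-size rules are abstracted away; `grad_i` and `prox` are hypothetical callables): the server keeps the most recent gradient from every data block and steps with their, possibly stale, sum.

```python
import numpy as np

def piag(grad_i, prox, x0, n_blocks, step, iters, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    table = [grad_i(i, x) for i in range(n_blocks)]  # per-block gradient table
    agg = np.sum(table, axis=0)
    for _ in range(iters):
        i = rng.integers(n_blocks)        # block whose worker reports in
        g_new = grad_i(i, x)              # fresh gradient for block i
        agg += g_new - table[i]           # refresh the running aggregate
        table[i] = g_new
        x = prox(x - step * agg, step)    # proximal step for the regularizer
    return x
```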

R-Linear Convergence of Limited Memory Steepest Descent

The limited memory steepest descent (LMSD) method proposed by Fletcher is an extension of the Barzilai-Borwein “two-point step size” strategy for steepest descent methods in unconstrained optimization. It is known that the Barzilai-Borwein strategy yields a method with an R-linear rate of convergence when it is employed to minimize a strongly convex quadratic. …
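For concreteness, here is the Barzilai-Borwein step that LMSD builds on, in a minimal gradient method (a sketch; safeguards and the limited-memory machinery are omitted):

```python
import numpy as np

def bb_gradient_descent(grad, x, iters=100, alpha0=1e-3):
    # alpha_k = (s^T s)/(s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    g_prev, x_prev = grad(x), x.copy()
    x = x - alpha0 * g_prev               # plain gradient step to start
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y) if s @ y > 0 else alpha0
        x_prev, g_prev = x.copy(), g
        x = x - alpha * g
    return x
```

Fletcher's LMSD keeps a short window of past gradients and extracts several step sizes at once from a small eigenvalue problem; the memoryless case above corresponds to a window of length one.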

Optimized choice of parameters in interior-point methods for linear programming

In this work, we propose a predictor-corrector interior-point method for linear programming in a primal-dual context, where the next iterate is chosen by the minimization of a polynomial merit function of three variables: the first is the steplength, the second defines the central path, and the third models the weight of a corrector direction. …
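One schematic reading of this parametrization (illustrative notation, not necessarily the authors'): with affine-scaling, centering, and corrector directions $\Delta_{\mathrm{aff}}$, $\Delta_{\mathrm{cen}}$, $\Delta_{\mathrm{cor}}$, the iterate update is

$$w^+ = w + \alpha\left(\Delta_{\mathrm{aff}} + \sigma\,\Delta_{\mathrm{cen}} + \omega\,\Delta_{\mathrm{cor}}\right), \qquad (\alpha,\sigma,\omega) \in \operatorname*{arg\,min}\;\phi(\alpha,\sigma,\omega),$$

where $\alpha$ is the steplength, $\sigma$ sets the target point on the central path, $\omega$ weights the corrector, and $\phi$ is the polynomial merit function minimized at each iteration.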

Can linear superiorization be useful for linear optimization problems?

Linear superiorization considers linear programming problems, but instead of attempting to solve them with linear optimization methods, it employs perturbation-resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does linear superiorization provide a feasible point whose linear …
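A bare-bones linear superiorization loop for $\min\, c^\top x$ subject to $Ax \le b$ (a sketch: the perturbation schedule and the feasibility-seeking operator vary across the literature): cyclic half-space projections drive feasibility, while small steps along $-c$ steer the iterates toward lower objective values.

```python
import numpy as np

def linear_superiorize(A, b, c, x, sweeps=50, beta=1.0, decay=0.5):
    c_unit = c / np.linalg.norm(c)
    for _ in range(sweeps):
        x = x - beta * c_unit             # objective-reducing perturbation
        beta *= decay                     # summable perturbation sizes
        for a_i, b_i in zip(A, b):        # cyclic projections onto Ax <= b
            viol = a_i @ x - b_i
            if viol > 0.0:
                x = x - viol * a_i / (a_i @ a_i)
    return x
```

Because the perturbation sizes are summable, the feasibility-seeking iteration retains its convergence behavior (perturbation resilience), which is exactly what superiorization exploits.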