Three Enhancements for Optimization-Based Bound Tightening

Optimization-based bound tightening (OBBT) is one of the most effective procedures to reduce variable domains of nonconvex mixed-integer nonlinear programs (MINLPs). At the same time it is one of the most expensive bound tightening procedures, since it solves auxiliary linear programs (LPs), up to two for each variable. The main goal of this paper …
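As a rough illustration of the per-variable LPs the abstract alludes to, the sketch below minimizes and then maximizes each variable over an LP relaxation and uses the optimal values as new bounds, which is where the "up to two LPs per variable" cost comes from. The relaxation data, the function name obbt, and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's implementation.

```python
# A minimal OBBT sketch over an LP relaxation  A x <= b,  lb <= x <= ub,
# using scipy.optimize.linprog (the relaxation data here is illustrative).
import numpy as np
from scipy.optimize import linprog

def obbt(A, b, lb, ub):
    """Tighten each variable's bounds by minimizing and maximizing it
    over the relaxation, i.e. up to two LPs per variable."""
    n = A.shape[1]
    lb, ub = lb.copy(), ub.copy()
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        bounds = list(zip(lb, ub))
        # min x_i over the relaxation -> candidate new lower bound
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if res.success:
            lb[i] = max(lb[i], res.x[i])
        # max x_i (minimize -x_i) -> candidate new upper bound
        res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if res.success:
            ub[i] = min(ub[i], res.x[i])
    return lb, ub

# Tiny example: x0 + x1 <= 1 with box [0, 10]^2 tightens the upper bounds to [1, 1].
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(obbt(A, b, np.zeros(2), np.full(2, 10.0)))
```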

Coordinate Friendly Structures, Algorithms and Applications

This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex …
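As a minimal illustration of a coordinate update method, the sketch below applies exact coordinate-wise minimization to a least-squares problem; the problem choice and the helper name coordinate_descent_ls are illustrative, not from the paper.

```python
# A minimal coordinate-descent sketch for least squares  min ||A x - b||^2,
# updating one coordinate at a time with an exact single-variable minimization.
import numpy as np

def coordinate_descent_ls(A, b, n_sweeps=200):
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                      # residual kept up to date
    col_sq = (A ** 2).sum(axis=0)      # ||a_i||^2 for each column
    for _ in range(n_sweeps):
        for i in range(n):
            # exact minimizer in coordinate i, holding the others fixed
            step = A[:, i] @ r / col_sq[i]
            x[i] += step
            r -= step * A[:, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_cd = coordinate_descent_ls(A, b)
# distance to the least-squares solution should be small
print(np.linalg.norm(x_cd - np.linalg.lstsq(A, b, rcond=None)[0]))
```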

A robust Lagrangian-DNN method for a class of quadratic optimization problems

The Lagrangian-doubly nonnegative (DNN) relaxation was recently shown by Kim, Kojima and Toh (2016) to provide effective lower bounds for a large class of nonconvex quadratic optimization problems (QOPs) using the bisection method combined with first-order methods. While the bisection method has proven computationally efficient, determining the validity of a computed lower …
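To illustrate only the bisection layer mentioned in the abstract (not the Lagrangian-DNN relaxation itself), here is a generic bisection over candidate lower-bound values driven by a user-supplied validity oracle; the oracle and all names are assumptions made for the sketch.

```python
# A generic bisection sketch over candidate lower-bound values; the oracle
# `is_valid_bound` stands in for the first-order DNN subproblem solve and is
# purely illustrative (the actual Lagrangian-DNN machinery is not shown).
def bisection_lower_bound(is_valid_bound, lo, hi, tol=1e-6):
    """Return the largest value in [lo, hi] (up to tol) that the oracle accepts,
    assuming validity is monotone: valid at lo, invalid at hi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_valid_bound(mid):
            lo = mid          # mid is still a valid lower bound, move up
        else:
            hi = mid          # mid overshoots the optimal value, move down
    return lo

# Toy oracle: pretend the true optimal value is 3.7 (illustrative only).
print(bisection_lower_bound(lambda v: v <= 3.7, 0.0, 10.0))
```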

Approximations and Generalized Newton Methods

We study local convergence of generalized Newton methods for both equations and inclusions by using known and new approximations and regularity properties at the solution. Including Kantorovich-type settings, our goal is to obtain statements about all (not only some) Newton sequences with appropriate initial points. Our basic tools are results of Klatte-Kummer (2002) and Kummer (1988, 1995), …
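A minimal sketch of a generalized Newton iteration on a nonsmooth scalar equation, using one element of the generalized derivative at each iterate; the particular equation and the derivative selection are illustrative, not taken from the paper.

```python
# A minimal generalized (semismooth) Newton sketch for a nonsmooth scalar
# equation F(x) = 0, using one element of the generalized derivative at each
# iterate.  The particular F below is illustrative only.
def F(x):
    return x + abs(x) / 2.0 - 1.0        # nonsmooth at x = 0, root at x = 2/3

def dF(x):
    # an element of the generalized (Clarke) derivative of F
    return 1.0 + (0.5 if x >= 0 else -0.5)

def generalized_newton(x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / dF(x)                   # Newton step with the chosen element
    return x

print(generalized_newton(x0=-5.0))        # converges to 2/3 in a few steps
```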

Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse …
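For concreteness, a standard way to write the two problem classes the abstract refers to is given below; the exact formulation studied in the paper may differ in its details.

```latex
% Standard penalized and constrained forms of L0-sparse optimization;
% the precise formulation analyzed in the paper may differ.
\[
  \min_{x \in \mathbb{R}^n} \; f(x) + \lambda \,\|x\|_0
  \qquad\text{and}\qquad
  \min_{x \in \mathbb{R}^n} \; f(x) \quad \text{s.t.}\quad \|x\|_0 \le k,
\]
where $\|x\|_0$ counts the nonzero entries of $x$, $\lambda > 0$, and $k \in \mathbb{N}$.
```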

Iteration-complexity of a Rockafellar’s proximal method of multipliers for convex programming based on second-order approximations

This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar’s proximal method of multipliers (PMM) for solving smooth convex programming problems with inequality constraints. In each step, either a step of Rockafellar’s PMM for a second-order model of the problem is computed or a relaxed extragradient step is performed. The resulting algorithm …
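For background, the sketch below implements the classical proximal method of multipliers on a toy inequality-constrained problem; the second-order model and the relaxed extragradient steps of the paper's algorithm are not shown, and the data, penalty parameter, and inner solver are illustrative assumptions.

```python
# A minimal sketch of the classical proximal method of multipliers (PMM) for
#   min f(x)  s.t.  g(x) <= 0,
# with the inner subproblem solved numerically; the problem data is illustrative.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.sum((x - np.array([2.0, 2.0])) ** 2)   # strongly convex objective
g = lambda x: x[0] + x[1] - 1.0                          # single inequality constraint

def pmm(x0, rho=1.0, n_iter=30):
    x, lam = np.array(x0, dtype=float), 0.0
    for _ in range(n_iter):
        def aug(z):
            # augmented Lagrangian plus the proximal term of PMM
            return (f(z)
                    + (max(0.0, lam + rho * g(z)) ** 2 - lam ** 2) / (2 * rho)
                    + np.sum((z - x) ** 2) / (2 * rho))
        x = minimize(aug, x).x                           # inexact primal step
        lam = max(0.0, lam + rho * g(x))                 # multiplier update
    return x, lam

print(pmm([0.0, 0.0]))   # approaches the projection of (2, 2) onto x0 + x1 <= 1
```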

A Dual Gradient-Projection Method for Large-Scale Strictly Convex Quadratic Problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented. The method uses a gradient projection algorithm enhanced with subspace acceleration to solve the bound-constrained dual optimization problem. Such gradient projection methods are well known, but are typically employed to solve the primal problem when only …
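A minimal sketch of the dual gradient-projection idea on a tiny strictly convex QP: take projected gradient-ascent steps on the bound-constrained dual, then recover the primal minimizer. The data is illustrative and the paper's subspace acceleration is not shown.

```python
# Projected gradient on the bound-constrained dual of a small strictly convex QP
#   min 1/2 x'Hx + c'x   s.t.   Ax >= b      (illustrative data only).
import numpy as np

H = np.diag([2.0, 4.0]); c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]]); b = np.array([3.0])
Hinv = np.linalg.inv(H)

def dual_grad(y):
    # gradient of the concave dual  d(y) = -1/2 (A'y - c)' H^{-1} (A'y - c) + b'y
    return b - A @ Hinv @ (A.T @ y - c)

y = np.zeros(1); step = 0.1
for _ in range(500):
    y = np.maximum(0.0, y + step * dual_grad(y))   # ascent step, then project onto y >= 0

x = Hinv @ (A.T @ y - c)    # recover the primal minimizer from the dual solution
print(x, y)                 # approx. x = (5/3, 4/3), y = 4/3 for this data
```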

Numerical Solution of Linear-Quadratic Optimal Control Problems for Switching System

In this paper we present an approach to the optimal switching control problem with unknown switching points, which is described in references [1, 2]. In reference [1], the authors studied the Decomposition of Linear-Quadratic Optimal Control Problems for Two-Steps Systems. In [1], the authors assumed that the switching point t1 is fixed in the interval …

Gradient Descent only Converges to Minimizers

We show that gradient descent with random initialization converges to a local minimizer almost surely. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
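A toy illustration of the statement, assuming a simple strict-saddle function: gradient descent started at a small random point moves away from the saddle at the origin and settles at one of the minimizers. The function, step size, and seed are illustrative choices.

```python
# Gradient descent with random initialization on
#   f(x, y) = (x^2 - 1)^2 + y^2,
# which has a strict saddle at (0, 0) and minimizers at (+-1, 0).
import numpy as np

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x ** 2 - 1.0), 2.0 * y])

rng = np.random.default_rng(1)
p = rng.standard_normal(2) * 0.1        # random start near the saddle point
for _ in range(2000):
    p -= 0.01 * grad(p)                  # plain gradient descent, fixed step size

print(p)    # close to (+1, 0) or (-1, 0), not the saddle (0, 0)
```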

On the convergence of stochastic bi-level gradient methods

We analyze the convergence of stochastic gradient methods for bi-level optimization problems. We address two specific cases: first, when the outer objective function can be expressed as a finite sum of independent terms, and second, when both the outer and inner objective functions can be expressed as finite sums of independent terms. We assume Lipschitz …
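A toy sketch of a stochastic bi-level gradient scheme in the second setting (both objectives are finite sums): a few inner SGD steps approximate the inner solution, and the outer variable is updated with a sampled hypergradient. All data, step sizes, and the hypergradient formula below are illustrative assumptions, not the algorithm analyzed in the paper.

```python
# Toy bi-level problem with scalar variables x (outer) and y (inner):
#   inner:  g(x, y) = mean_j 0.5 (y - a_j x)^2   ->  y*(x) = mean(a) * x
#   outer:  f(x, y) = mean_i 0.5 (y - b_i)^2     ->  optimal x = mean(b) / mean(a)
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, size=50)
b = rng.uniform(3.0, 5.0, size=50)

x, y = 0.0, 0.0
for k in range(5000):
    for _ in range(5):                         # a few inner SGD steps on g(x, .)
        j = rng.integers(a.size)
        y -= 0.1 * (y - a[j] * x)
    i, j = rng.integers(b.size), rng.integers(a.size)
    hypergrad = (y - b[i]) * a[j]              # sampled estimate of d f(x, y*(x)) / d x
    x -= 0.01 / (1 + 0.01 * k) * hypergrad     # diminishing outer step size

print(x, np.mean(b) / np.mean(a))              # the two values should be close
```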