Acceleration of Primal-Dual Methods by Preconditioning and Simple Subproblem Procedures

Primal-Dual Hybrid Gradient (PDHG) and Alternating Direction Method of Multipliers (ADMM) are two widely used first-order optimization methods. They reduce a difficult problem to simple subproblems, so they are easy to implement and have many applications. As first-order methods, however, they are sensitive to problem conditioning and can struggle to reach the desired accuracy. To improve … Read more
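For orientation, here is a minimal sketch of the plain (unpreconditioned) PDHG iteration for a problem of the form min_x f(x) + g(Kx); the preconditioners and subproblem procedures the paper proposes are behind the "Read more". The prox operators prox_f and prox_gstar, and the step sizes tau and sigma, are assumptions introduced here for illustration.

```python
def pdhg(prox_f, prox_gstar, K, x0, y0, tau, sigma, iters=500):
    """Plain Chambolle-Pock PDHG (theta = 1) for min_x f(x) + g(Kx).

    prox_f(v, tau) and prox_gstar(v, sigma) are the proximal maps of f
    and of the convex conjugate g*; K, x0, y0 are NumPy-style arrays.
    Convergence requires tau * sigma * ||K||^2 <= 1.
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_gstar(y + sigma * (K @ x_bar), sigma)  # dual (ascent) step
        x_new = prox_f(x - tau * (K.T @ y), tau)        # primal (descent) step
        x_bar = 2.0 * x_new - x                         # extrapolation
        x = x_new
    return x, y
```

One common preconditioning strategy replaces the scalar step sizes tau and sigma with diagonal matrices built from the rows and columns of K, which is one way such methods become less sensitive to conditioning.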

Scoring positive semidefinite cutting planes for quadratic optimization via trained neural networks

Semidefinite programming relaxations complement polyhedral relaxations for quadratic optimization, but global optimization solvers built on polyhedral relaxations cannot fully exploit this advantage. This paper develops linear outer-approximations of semidefinite constraints that can be effectively integrated into global solvers. The difference from previous work is that our proposed cuts are (i) sparser with respect to the … Read more
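For context, a standard construction (not necessarily the scored cuts of the paper): a constraint X ⪰ 0 holds exactly when v^T X v ≥ 0 for every vector v, so each eigenvector of a relaxation solution with a negative eigenvalue yields a violated linear cut.

```latex
% Eigenvalue cut separating \hat{X} \not\succeq 0 from the PSD cone:
% take v with \hat{X} v = \lambda_{\min}(\hat{X})\, v and \lambda_{\min}(\hat{X}) < 0;
% the linear inequality below is valid for all X \succeq 0 and violated by \hat{X}.
\[
  \langle v v^{\mathsf{T}},\, X \rangle \;=\; v^{\mathsf{T}} X v \;\ge\; 0 .
\]
```

Cuts of this family can be made sparser by restricting the support of v, which is presumably the kind of trade-off the proposed scoring addresses.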

A Doubly Accelerated Inexact Proximal Point Method for Nonconvex Composite Optimization Problems

This paper describes and establishes the iteration-complexity of a doubly accelerated inexact proximal point (D-AIPP) method for solving the nonconvex composite minimization problem whose objective function is of the form f+h where f is a (possibly nonconvex) differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with bounded domain. D-AIPP … Read more
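The problem template, restated from the abstract, together with the classical proximal-point subproblem that inexact accelerated schemes of this kind solve only approximately at each outer iteration (the stepsize λ is notation assumed here for illustration):

```latex
\[
  \min_{x}\; f(x) + h(x),
  \qquad
  x_{k+1} \;\approx\; \operatorname*{argmin}_{x}
  \Big\{\, f(x) + h(x) + \tfrac{1}{2\lambda}\,\|x - x_k\|^{2} \,\Big\},
\]
% with \nabla f Lipschitz continuous (f possibly nonconvex) and h closed
% convex with bounded domain; for \lambda small enough the quadratic term
% dominates, so each subproblem is convex even though f is not.
```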

A random search method for finding ‘K ≥ 2’ ranked optimal solutions to an assignment problem

The need for an optimal solution of a given mathematical model is well known, and many solution approaches have been developed to efficiently identify an optimal solution in a given situation. For example, one such class of mathematical models with industrial applications has been classified as mathematical programming models (MPM). The main idea behind these … Read more
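A concrete MPM instance, and the one named in the title, is the linear assignment problem (costs c_ij, binary assignment variables x_ij); here the goal is the K ≥ 2 best solutions in ranked order rather than a single optimum:

```latex
\[
  \min \;\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_{ij}
  \qquad \text{s.t.} \qquad
  \sum_{j=1}^{n} x_{ij} = 1 \;\; (\forall i), \quad
  \sum_{i=1}^{n} x_{ij} = 1 \;\; (\forall j), \quad
  x_{ij} \in \{0,1\}.
\]
```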

A New Sequential Optimality Condition for Constrained Nonsmooth Optimization

We introduce a sequential optimality condition for locally Lipschitz constrained nonsmooth optimization that is verifiable using only derivative information and that holds even in the absence of any constraint qualification. The proposed sequential optimality condition is not only novel for nonsmooth problems but also brings new insights for the smooth case. We present a practical algorithm … Read more
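For orientation, the classical smooth counterpart (the approximate-KKT condition) for min f(x) subject to g_i(x) ≤ 0 reads as follows; the paper's condition plays the analogous role for nonsmooth problems, with suitable subdifferentials in place of gradients.

```latex
% AKKT: there exist x^k \to x^* and multipliers \lambda^k \ge 0 such that
\[
  \Big\| \nabla f(x^k) + \sum_{i} \lambda_i^{k}\, \nabla g_i(x^k) \Big\| \;\to\; 0,
  \qquad
  \min\!\big\{ -g_i(x^k),\; \lambda_i^{k} \big\} \;\to\; 0 \quad \text{for all } i .
\]
```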

A gradient type algorithm with backward inertial steps for a nonconvex minimization

We investigate an algorithm of gradient type with a backward inertial step in connection with the minimization of a nonconvex differentiable function. We show that the generated sequences converge to a critical point of the objective function, if a regularization of the objective function satisfies the Kurdyka-Łojasiewicz property. Further, we provide convergence rates for the … Read more
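The paper's exact backward inertial update is behind the "Read more"; below is a generic inertial gradient sketch for a differentiable (possibly nonconvex) f, with hypothetical stepsize alpha and inertial weight beta.

```python
def inertial_gradient(grad_f, x0, alpha=1e-2, beta=0.5, iters=1000):
    """Generic inertial gradient iteration (illustrative sketch only):
    extrapolate along the previous displacement, then take a gradient step."""
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + beta * (x - x_prev)            # inertial extrapolation
        x_prev, x = x, y - alpha * grad_f(y)   # gradient step at y
    return x

# Example on the nonconvex f(x) = x**4 - x**2, with grad f(x) = 4x**3 - 2x:
# inertial_gradient(lambda x: 4 * x**3 - 2 * x, 1.0) converges to a critical
# point (here x = 1/sqrt(2), a local minimizer).
```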

Over-Parameterized Deep Neural Networks Have No Strict Local Minima For Any Continuous Activations

In this paper, we study the loss surface of over-parameterized fully connected deep neural networks. We prove that for any continuous activation function, the loss function has no bad strict local minimum, both in the regular sense and in the sense of sets. This result holds for any convex and differentiable loss function, and … Read more
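For reference, the pointwise notion in play (the "sense of sets" variant in the abstract presumably applies the same strict inequality to a whole set of parameters attaining the value):

```latex
% x^* is a strict local minimum of the loss L if
\[
  \exists\, \varepsilon > 0 :\quad
  L(x) > L(x^{*}) \quad \text{for all } x \neq x^{*}
  \text{ with } \|x - x^{*}\| < \varepsilon .
\]
```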

A new splitting method for monotone inclusions of three operators

In this article, we consider monotone inclusions in real Hilbert spaces and propose a new splitting method. The inclusions involve the sum of a bounded linear monotone operator, an inverse strongly monotone operator, and a maximal monotone operator. The new method, at each iteration, first performs one forward-backward step as usual … Read more
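The inclusion in question and, for comparison, the classical forward-backward step the abstract refers to (the new method's remaining steps are behind the "Read more"):

```latex
% Find x \in H with
\[
  0 \;\in\; L x + B x + A x,
\]
% where L is bounded, linear, and monotone, B is inverse strongly monotone
% (cocoercive), and A is maximal monotone. The classical forward-backward
% step handles B explicitly and A through its resolvent:
\[
  x_{k+1} \;=\; J_{\lambda A}\big(x_k - \lambda B x_k\big)
  \;=\; (\mathrm{Id} + \lambda A)^{-1}\big(x_k - \lambda B x_k\big).
\]
```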

On local non-global minimizers of quadratic optimization problem with a single quadratic constraint

In this paper, we consider the nonconvex quadratic optimization problem with a single quadratic constraint. First, we give a theoretical characterization of the local non-global minimizers. Then, we extend the recent characterization of the global minimizer via a generalized eigenvalue problem to the local non-global minimizers. Finally, we use these results to derive an efficient … Read more
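The model problem, in notation assumed here for illustration:

```latex
\[
  \min_{x \in \mathbb{R}^{n}}\; x^{\mathsf{T}} A x + 2\, a^{\mathsf{T}} x
  \qquad \text{s.t.} \qquad
  x^{\mathsf{T}} B x + 2\, b^{\mathsf{T}} x + c \;\le\; 0,
\]
% with A, B symmetric and A not necessarily positive semidefinite; the
% classical trust-region subproblem is the special case B = I, b = 0,
% c = -\Delta^{2}.
```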

Strong IP Formulations Need Large Coefficients

The development of practically well-behaved integer programming formulations is an important aspect of solving linear optimization problems over a set $X \subseteq \{0,1\}^n$. In practice, one is often interested in strong integer formulations with additional properties, e.g., bounded coefficients to avoid numerical instabilities. This article presents a lower bound on the size of coefficients in … Read more
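The standard definitions behind the statement, recalled for illustration: a polyhedron P = {x : Ax ≤ b} is a formulation of X if it selects exactly the points of X among the 0/1 vectors, its size is the number of inequalities, and the quantity being lower-bounded is the magnitude of the entries of A.

```latex
\[
  P \cap \{0,1\}^{n} \;=\; X,
  \qquad
  \mathrm{size}(P) = \#\,\text{rows of } A,
  \qquad
  \text{coefficient bound} = \max_{i,j} |A_{ij}| .
\]
```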