A novel algorithm for a broad class of nonconvex optimization problems

In this paper, we propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, including concave minimization problems, difference-of-convex problems, and fractional optimization problems. Our approach exploits …
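Schematically, this problem class can be written as below (a sketch with assumed notation, not necessarily the paper's exact formulation); each factor is convex, but their product generally is not, which is the source of nonconvexity. For instance, a fractional program with convex numerator p and concave positive denominator q fits the class, since p(x)/q(x) = p(x) · (1/q(x)) and 1/q is convex under those assumptions.

```latex
\min_{x \in X} \; f_0(x) + \sum_{i=1}^{m} \prod_{j=1}^{k_i} f_{ij}(x),
\qquad f_0 \text{ and all } f_{ij} \text{ convex}.
```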

On Integrality in Semidefinite Programming for Discrete Optimization

It is well-known that by adding integrality constraints to the semidefinite programming (SDP) relaxation of the max-cut problem, the resulting integer semidefinite program is an exact formulation of the problem. In this paper, we show similar results for a wide variety of discrete optimization problems for which SDP relaxations have been derived. Based on a …
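As a reminder of the max-cut case the abstract alludes to, here is the standard SDP relaxation together with one common way to state the integrality constraint (a reference sketch, not necessarily the paper's general construction):

```latex
% L is the graph Laplacian, e the all-ones vector:
\max_{X} \; \tfrac{1}{4}\langle L, X\rangle
\quad \text{s.t.} \quad \operatorname{diag}(X) = e,\; X \succeq 0.
% Adding the integrality constraints X_{ij} \in \{-1, 1\} forces
% X = x x^\top with x \in \{-1, 1\}^n, so the integer SDP is exact.
```

The exactness follows because a positive semidefinite matrix with unit diagonal and ±1 entries is the Gram matrix of unit vectors whose pairwise inner products are ±1, hence rank one.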

Behavior of Newton-type methods near critical solutions of nonlinear equations with semismooth derivatives

Motivated by singular solutions of smooth reformulations of complementarity problems, which arise unavoidably when the solution in question violates strict complementarity, we study the behavior of Newton-type methods near singular solutions of nonlinear equations, assuming that the operator of the equation possesses a strongly semismooth derivative but is not necessarily twice differentiable. These smoothness restrictions …
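The degradation near singular solutions is easy to see on a toy example (a minimal numerical illustration, not the paper's method): plain Newton applied to F(x) = x², whose root x* = 0 is singular because F'(0) = 0.

```python
# Newton's method on F(x) = x^2: the iterates satisfy x_{k+1} = x_k / 2,
# i.e. only linear convergence (ratio 1/2) instead of the usual quadratic rate.
def newton(F, dF, x, iters=10):
    for k in range(iters):
        x = x - F(x) / dF(x)
        print(f"iter {k+1}: x = {x:.3e}")
    return x

newton(lambda x: x**2, lambda x: 2 * x, x=1.0)
```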

Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking

The backtracking line-search is an effective technique for automatically tuning the step-size in smooth optimization: it guarantees performance comparable to that of the theoretically optimal step-size. Many approaches have been developed to instead tune per-coordinate step-sizes, also known as diagonal preconditioners, but none of the existing methods are provably competitive with the optimal per-coordinate step-sizes. We …
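For reference, here is a minimal sketch of the classical scalar backtracking line search the abstract starts from; the paper's contribution is a multidimensional generalization to diagonal preconditioners, which is not reproduced here.

```python
import numpy as np

def backtracking_step(f, grad_f, x, alpha0=1.0, beta=0.5, c=1e-4):
    """Shrink the step-size until the Armijo sufficient-decrease test holds."""
    g = grad_f(x)
    alpha = alpha0
    while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
        alpha *= beta
    return x - alpha * g, alpha

# Example: one gradient step on a simple quadratic.
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x
x_new, alpha = backtracking_step(f, grad_f, np.array([3.0, -2.0]))
```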

Unboundedness and Infeasibility in Linear Bilevel Optimization: How to Overcome Unbounded Relaxations

Bilevel optimization problems are known to be challenging to solve in practice. In particular, the feasible set of a bilevel problem is, in general, nonconvex, even for linear bilevel problems. In this work, we aim to develop a better understanding of the feasible set of linear bilevel problems. Specifically, we develop means by which to …
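For reference, a generic linear bilevel problem can be written as follows (standard textbook form with assumed notation, not necessarily the paper's). Dropping the lower-level optimality requirement on y yields the so-called high-point relaxation, a single-level LP that can be unbounded even when the bilevel problem is not; that is one reading of the unbounded relaxations in the title.

```latex
\min_{x,\, y} \; c^\top x + d^\top y
\quad \text{s.t.} \quad A x + B y \le a,\quad
y \in \operatorname*{arg\,min}_{\bar{y}} \left\{ f^\top \bar{y} \;:\; C x + D \bar{y} \le b \right\}.
```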

Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these challenges, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the …
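The importance-sampling ingredient can be sketched as follows: estimating an intractable normalizer Z(θ) = ∫ exp(−U(z; θ)) dz from samples drawn from a proposal q. The names U and q are illustrative assumptions; this is not the paper's adaptive scheme, which also updates the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_Z_estimate(U, theta, n=10_000):
    z = rng.standard_normal(n)                     # proposal q = N(0, 1)
    log_q = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # log-density of q
    log_w = -U(z, theta) - log_q                   # log importance weights
    # log-mean-exp with max-shift for numerical stability
    return np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Example: U(z; theta) = theta * z^2 has Z = sqrt(pi / theta).
print(log_Z_estimate(lambda z, t: t * z**2, theta=1.0))  # ~ log(sqrt(pi)) = 0.572
```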

Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems

Many machine learning applications and tasks rely on the stochastic gradient descent (SGD) algorithm and its variants. Effective step length selection is crucial for the success of these algorithms, which has motivated the development of methods such as Adam and AdaGrad. In this paper, we propose a novel algorithm for adaptive step length selection in …
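As context, here is a minimal AdaGrad-style update, included only to illustrate the adaptive per-coordinate step lengths the abstract mentions; the paper's stochastic ratios tracking algorithm is a different scheme and is not shown here.

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    accum += grad**2  # running sum of squared gradients per coordinate
    return x - lr * grad / (np.sqrt(accum) + eps), accum

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
x, accum = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    x, accum = adagrad_step(x, x, accum)
```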

On the Number of Pivots of Dantzig’s Simplex Methods for Linear and Convex Quadratic Programs

Refining and extending work by Ye and by Kitahara and Mizuno, this paper presents new results on the number of pivots of simplex-type methods for solving linear programs of the Leontief kind, certain linear complementarity problems of the P kind, and nonnegatively constrained convex quadratic programs. Our results contribute to the further understanding of the complexity and efficiency …

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible, ϵ-stationary point at a convergence rate of O(ϵ⁻⁴) without relying on compactness or a constraint qualification (CQ). When a CQ holds, this convergence is measured by …
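Schematically, the problem class is the standard functional constrained form below (notation assumed, not the paper's), with each fᵢ possibly nonsmooth and nonconvex; the O(ϵ⁻⁴) rate says, roughly, that on the order of ϵ⁻⁴ (sub)gradient evaluations suffice to reach a feasible ϵ-stationary point.

```latex
\min_{x \in \mathbb{R}^n} \; f_0(x)
\quad \text{s.t.} \quad f_i(x) \le 0, \quad i = 1, \dots, m.
```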