A momentum-based linearized augmented Lagrangian method for nonconvex constrained stochastic optimization

Nonconvex constrained stochastic optimization has emerged in many important application areas. It minimizes the sum of an expectation function and a convex nonsmooth regularizer subject to general functional constraints. The main challenges arise from the stochasticity of the random integrand and from the possibly nonconvex functional constraints. To cope with these issues we propose a …
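
As an illustrative sketch only (the paper's actual algorithm, step sizes, and momentum estimator may differ), the following toy numpy code combines a momentum (moving-average) stochastic gradient with a linearized augmented Lagrangian step and a proximal update for an assumed $\ell_1$ regularizer, on a hypothetical equality constraint $c(x) = \|x\|^2 - 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f_stoch(x):
    # stochastic gradient of the toy objective E[0.5 * ||x - xi||^2], xi ~ N(1, I)
    xi = np.ones_like(x) + rng.normal(size=x.shape)
    return x - xi

def c(x):            # assumed nonconvex equality constraint c(x) = ||x||^2 - 1
    return np.array([x @ x - 1.0])

def jac_c(x):        # Jacobian of c
    return 2.0 * x[None, :]

def prox_l1(v, t):   # proximal map of t * ||.||_1 (the nonsmooth regularizer)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def momentum_lal(x, n_iter=500, eta=0.05, beta=0.9, rho=10.0, tau=0.1, reg=0.01):
    lam = np.zeros(1)            # multiplier estimate
    d = np.zeros_like(x)         # momentum (moving average) of stochastic gradients
    for _ in range(n_iter):
        d = beta * d + (1.0 - beta) * grad_f_stoch(x)     # momentum update
        g_al = (jac_c(x).T @ (lam + rho * c(x))).ravel()  # gradient of the AL penalty part
        x = prox_l1(x - eta * (d + g_al), eta * reg)      # linearized proximal step
        lam = lam + tau * c(x)                            # dual ascent step
    return x, lam

x, lam = momentum_lal(np.zeros(5))
print("x =", np.round(x, 3), " c(x) =", np.round(c(x), 4))
```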

Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution

This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape. We consider a robust and over-parameterized setting, where a subset of the measurements is grossly corrupted with noise and the true linear model is captured via an $N$-layer linear neural …
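
To make the setting concrete, here is a minimal numpy sketch of the robust, over-parameterized depth-$N$ linear model the abstract describes; the toy sizes, corruption rate, and the $\ell_1$ choice of robust loss are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, N = 200, 10, 3                     # assumed toy sizes: samples, dimension, depth

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true
mask = rng.random(n) < 0.1               # a subset of grossly corrupted measurements
y[mask] += 50.0 * rng.normal(size=mask.sum())

# depth-N factorization of the regression vector: w_eff = W_N ... W_2 @ w_1
w1 = 0.1 * rng.normal(size=d)
Ws = [np.eye(d) + 0.01 * rng.normal(size=(d, d)) for _ in range(N - 1)]

def w_eff():
    v = w1
    for W in Ws:                         # the end-to-end map collapses to one vector,
        v = W @ v                        # but the landscape is parameterized by the factors
    return v

def robust_loss():                       # l1 loss as one natural robust choice (assumed)
    return np.abs(X @ w_eff() - y).mean()

print(f"depth {N}, initial robust loss: {robust_loss():.3f}")
```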

Hidden convexity in a class of optimization problems with bilinear terms

In this paper we identify a new class of nonconvex optimization problems that can be equivalently reformulated as convex ones. These nonconvex problems are characterized by convex functions with bilinear arguments. We describe several important applications that have this structure. A reformulation technique is presented which converts the problems in this class …
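
As a toy illustration of the substitution idea (our own example, not one taken from the paper): a bilinear term $xy$ appearing inside an otherwise convex problem can sometimes be eliminated by the change of variables $z = xy$ with $x > 0$:

```latex
% Toy example (ours): a bilinear term x*y removed by the substitution z = x y.
\[
  \min_{x > 0,\; y}\ \frac{c}{x} + h\,xy
  \quad \text{s.t.}\ xy \ge d
  \qquad \longrightarrow \qquad
  \min_{x > 0,\; z}\ \frac{c}{x} + h\,z
  \quad \text{s.t.}\ z \ge d
\]
% The right-hand problem is jointly convex in (x, z) for x > 0, and the original
% variable is recovered as y = z / x.
```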

Detecting negative eigenvalues of exact and approximate Hessian matrices in optimization

Nonconvex minimization algorithms often benefit from the use of second-order information as represented by the Hessian matrix. When the Hessian at a critical point possesses negative eigenvalues, the corresponding eigenvectors can be used to search for further improvement in the objective function value. Computing such eigenpairs can be expensive, particularly if the Hessian matrix …
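
For intuition, a standard matrix-free approach (not necessarily the one analyzed in the paper) is to run Lanczos on Hessian-vector products to approximate the leftmost eigenpair, as in this scipy sketch:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 500
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n))
H = (A + A.T) / 2 - 0.5 * np.eye(n)     # symmetric toy "Hessian" with negative spectrum

# Matrix-free access: only Hessian-vector products are needed, matching large-scale
# settings where the Hessian is never formed explicitly.
Hop = LinearOperator((n, n), matvec=lambda v: H @ v)

# Lanczos for the smallest-algebraic eigenpair; a negative value certifies a
# direction of negative curvature.
w, V = eigsh(Hop, k=1, which='SA')
print("leftmost eigenvalue:", w[0])
if w[0] < 0:
    d = V[:, 0]                          # direction of negative curvature
    print("curvature along d:", d @ (H @ d))
```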

Preconditioned Gradient Descent for Overparameterized Nonconvex Burer–Monteiro Factorization with Global Optimality Certification

We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^{T})$ over an $n\times r$ factor matrix $X$, in which $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices. While only a second-order stationary point $X$ can be provably found in reasonable time, if $X$ is additionally \emph{rank deficient}, then its …
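
A minimal numpy sketch of a preconditioned gradient step of this flavor, with $\phi(M) = \tfrac{1}{2}\|M - M^{\ast}\|_F^2$ and an illustrative damping choice; the paper's exact preconditioner and schedule may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r_true, r = 50, 2, 5                       # over-parameterized: search rank r > r_true

B = rng.normal(size=(n, r_true))
M_star = B @ B.T                              # ground-truth low-rank matrix
grad_phi = lambda M: M - M_star               # phi(M) = 0.5 * ||M - M*||_F^2

X = 0.1 * rng.normal(size=(n, r))
alpha = 0.25
for _ in range(300):
    R = grad_phi(X @ X.T)
    G = 2.0 * R @ X                           # gradient of f(X) = phi(X X^T)
    eta = np.linalg.norm(R)                   # illustrative damping choice (assumed)
    P = np.linalg.inv(X.T @ X + eta * np.eye(r))
    X = X - alpha * G @ P                     # right-preconditioned gradient step

print("final error ||XX^T - M*||_F =", np.linalg.norm(X @ X.T - M_star))
```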

Multi-fidelity robust controller design with gradient sampling

Robust controllers that stabilize dynamical systems even under disturbances and noise are often formulated as solutions of nonsmooth, nonconvex optimization problems. While methods such as gradient sampling can handle the nonconvexity and nonsmoothness, the costs of evaluating the objective function may be substantial, making robust control challenging for dynamical systems with high-dimensional state spaces. In …
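
For reference, one plain (single-fidelity) gradient sampling step computes the minimum-norm element of the convex hull of gradients sampled near the current point and moves against it; a toy sketch, with assumed sample counts and step sizes, not the paper's multi-fidelity variant:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

f = lambda x: np.abs(x[0]) + 2.0 * np.abs(x[1])            # toy nonsmooth objective
grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

def gradient_sampling_step(x, m=10, eps=0.1, t=0.1):
    # sample gradients at randomly perturbed nearby points
    G = np.array([grad(x + eps * rng.uniform(-1, 1, size=x.shape)) for _ in range(m)])
    # minimum-norm element of the convex hull of sampled gradients (a small QP)
    obj = lambda lam: float(np.sum((lam @ G) ** 2))
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m, constraints=cons)
    return x - t * (res.x @ G)

x = np.array([1.0, -1.5])
for k in range(60):
    x = gradient_sampling_step(x, t=0.2 * 0.9 ** k)        # shrinking steps (assumed)
print("x ≈", np.round(x, 3), " f(x) ≈", round(f(x), 4))
```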

Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization

An algorithm for solving nonconvex smooth optimization problems is proposed, analyzed, and tested. The algorithm is an extension of the Trust Region Algorithm with Contractions and Expansions (TRACE) [Math. Prog. 162(1):1-32, 2017]. In particular, the extension allows the algorithm to use inexact solutions of the arising subproblems, which is an important feature for solving large-scale …
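
As background, a generic trust-region loop with contraction and expansion of the radius driven by the actual-versus-predicted reduction ratio looks as follows; TRACE's specific contraction/expansion rules and inexactness conditions are more involved than this sketch:

```python
import numpy as np

def f(x):  return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2     # Rosenbrock test function
def g(x):  return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                            200*(x[1] - x[0]**2)])
def H(x):  return np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                            [-400*x[0], 200.0]])

x, delta = np.array([-1.2, 1.0]), 1.0
for _ in range(200):
    gk, Hk = g(x), H(x)
    # inexact subproblem solution: a Newton-type step truncated to the region
    p = np.linalg.solve(Hk + 1e-8 * np.eye(2), -gk)
    if np.linalg.norm(p) > delta:
        p *= delta / np.linalg.norm(p)
    pred = -(gk @ p + 0.5 * p @ Hk @ p)                  # model-predicted decrease
    rho = (f(x) - f(x + p)) / max(pred, 1e-16)           # actual / predicted ratio
    if rho > 0.1:
        x = x + p                                        # accept the step
        if rho > 0.75:
            delta *= 2.0                                 # expansion
    else:
        delta *= 0.25                                    # contraction
print("x ≈", np.round(x, 4), " f(x) ≈", f(x))
```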

Global convergence and acceleration of fixed point iterations of union upper semicontinuous operators: proximal algorithms, alternating and averaged nonconvex projections, and linear complementarity problems

We propose a unified framework to analyze fixed point iterations of a set-valued operator that is the union of a finite number of upper semicontinuous maps, each with a nonempty closed domain and compact values. We discuss global convergence, local linear convergence under a calmness condition, and component identification, and further propose acceleration strategies that …
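
One concrete instance of such an operator is the projection onto an $s$-sparse set, which is a union of finitely many subspace projections; a toy sketch of alternating (nonconvex) projections for sparse recovery, with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 50, 20, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
b = A @ x_true

A_pinv = np.linalg.pinv(A)
def proj_affine(x):                    # projection onto the affine set {x : A x = b}
    return x - A_pinv @ (A @ x - b)

def proj_sparse(x):                    # projection onto the s-sparse set: a union of
    y = np.zeros_like(x)               # (n choose s) subspaces, hence a set-valued,
    idx = np.argsort(-np.abs(x))[:s]   # upper semicontinuous "union" operator
    y[idx] = x[idx]
    return y

x = np.zeros(n)
for _ in range(500):
    x = proj_sparse(proj_affine(x))    # alternating nonconvex projections
print("recovery error:", np.linalg.norm(x - x_true))
```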

Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

In this work, we study the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with $\ell_1$-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, a subset of which may be grossly corrupted with noise. We study a scenario where the …
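
A minimal numpy sketch of SubGM on a toy instance; the dimensions, corruption rate, and the geometrically decaying step size are our own assumptions, and the paper's schedule may differ:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r_true, r, m = 30, 2, 5, 600          # over-parameterized factor: r > r_true

B = rng.normal(size=(n, r_true))
M_star = B @ B.T
A = rng.normal(size=(m, n, n))
y = np.einsum('kij,ij->k', A, M_star)    # linear measurements of the true matrix
mask = rng.random(m) < 0.2               # grossly corrupted subset of measurements
y[mask] += 100.0 * rng.normal(size=mask.sum())

U = 1e-3 * rng.normal(size=(n, r))       # small initialization
for t in range(1000):
    res = np.einsum('kij,ij->k', A, U @ U.T) - y
    # subgradient of the l1 loss: signs of residuals pulled back through A
    G = np.einsum('k,kij->ij', np.sign(res), A) / m
    U = U - (0.01 * 0.995 ** t) * (G + G.T) @ U   # geometrically decaying step (assumed)

err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
print("relative recovery error:", err)
```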

A deterministic solver for multiobjective mixed-integer convex and nonconvex optimization

This paper proposes a general framework for solving multiobjective nonconvex optimization problems, i.e., optimization problems in which multiple objective functions have to be optimized simultaneously. The nonconvexity may stem from the objective or constraint functions, or from integrality conditions on some of the variables. In particular, multiobjective mixed-integer convex and nonconvex optimization problems are …
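
For intuition about nondominance in the multiobjective setting, here is plain enumeration on a toy purely-integer biobjective instance of our own; the paper's deterministic solver handles continuous variables and nonconvexity with far more sophisticated machinery:

```python
from itertools import product

# toy biobjective integer problem: x in {0,...,5}^2, minimize f1 and f2 simultaneously
f1 = lambda x: (x[0] - 1)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 3)**2

pts = [(f1(x), f2(x), x) for x in product(range(6), repeat=2)]
# Pareto filter: keep points not dominated by any other feasible point
pareto = [p for p in pts
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2] for q in pts)]
for a, b, x in sorted(pareto):
    print(f"x = {x}, (f1, f2) = ({a}, {b})")
```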