Globally Convergent Levenberg-Marquardt Method For Phase Retrieval

In this paper, we consider a nonlinear least squares model for the phase retrieval problem. Since the Hessian matrix may not be positive definite and the Gauss-Newton (GN) matrix is singular at any optimal solution, we propose a modified Levenberg-Marquardt (LM) method in which the Hessian is replaced by the sum of the GN matrix and …
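To make the setup concrete, here is a minimal LM sketch for the real-valued model $f(x) = \frac{1}{2}\sum_i ((a_i^\top x)^2 - b_i)^2$. Since the abstract is truncated before naming the paper's regularization term, the damping choice $\mu_k = \mu_0 \|r_k\|$ below is an assumed, Yamashita-Fukushima-style choice rather than the authors' actual scheme.

```python
import numpy as np

def lm_phase_retrieval(A, b, x0, mu0=1.0, iters=200, tol=1e-10):
    # Minimize f(x) = 0.5 * sum_i ((a_i^T x)^2 - b_i)^2 (real-valued model).
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        Ax = A @ x
        r = Ax**2 - b                    # residual vector
        J = 2.0 * Ax[:, None] * A        # Jacobian of the residuals
        g = J.T @ r                      # gradient J^T r
        if np.linalg.norm(g) < tol:
            break
        # The GN matrix J^T J is singular at any exact solution, so a
        # damping term keeps the step well-defined (assumed choice of mu).
        mu = mu0 * np.linalg.norm(r)
        x += np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
    return x
```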

A Multi-step Inertial Forward–Backward Splitting Method for Non-convex Optimization

In this paper, we propose a multi-step inertial forward-backward splitting algorithm for minimizing the sum of two not necessarily convex functions, one of which is proper and lower semicontinuous while the other is differentiable with a Lipschitz continuous gradient. We first prove global convergence of the scheme with the help of the Kurdyka-Łojasiewicz property. Then, when the …
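The iteration can be sketched as below, using a few past differences as inertial terms and the $\ell_1$ prox purely for concreteness. The coefficients and the exact placement of the inertial terms follow one common variant and may differ from the paper's precise scheme.

```python
import numpy as np
from collections import deque

def soft_threshold(v, gamma):
    # Prox of gamma * ||.||_1 (used here only as a concrete example of prox_g).
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def multi_step_inertial_fbs(grad_f, prox_g, x0, gamma, a=(0.3, 0.1), iters=500):
    # y_k     = x_k + sum_i a_i (x_{k-i} - x_{k-i-1})
    # x_{k+1} = prox_{gamma g}(y_k - gamma * grad_f(y_k))
    # gamma should satisfy the usual step-size condition, e.g. gamma < 1/L.
    hist = deque([np.asarray(x0, dtype=float)] * (len(a) + 1), maxlen=len(a) + 1)
    for _ in range(iters):
        diffs = [hist[-1 - i] - hist[-2 - i] for i in range(len(a))]
        y = hist[-1] + sum(ai * di for ai, di in zip(a, diffs))
        hist.append(prox_g(y - gamma * grad_f(y), gamma))
    return hist[-1]
```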

A fresh CP look at mixed-binary QPs: New formulations and relaxations

Triggered by Burer’s seminal characterization from 2009, many copositive (CP) reformulations of mixed-binary QPs have been discussed by now. Most of them can be used as proper relaxations if the intractable copositive and completely positive cones are replaced by tractable approximations. While the widely used approximation hierarchies have the disadvantage of using positive-semidefinite (psd) matrices of orders …
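For illustration, a common tractable weakening replaces the completely positive cone by the doubly nonnegative (DNN) cone, i.e. psd plus entrywise nonnegative. The cvxpy sketch below encodes a Burer-style lifting for a generic mixed-binary QP; the variable names and the DNN choice are illustrative assumptions, not the new formulations the paper proposes.

```python
import cvxpy as cp

def dnn_relaxation(Q, c, n_bin):
    # Relaxation of min x^T Q x + c^T x with the first n_bin coordinates
    # binary, using the lifted matrix Y ~ [1 x^T; x X].
    n = Q.shape[0]
    Y = cp.Variable((n + 1, n + 1), PSD=True)
    x, X = Y[0, 1:], Y[1:, 1:]
    cons = [Y >= 0, Y[0, 0] == 1]                     # DNN cone + normalization
    cons += [X[i, i] == x[i] for i in range(n_bin)]   # binarity: x_i^2 = x_i
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), cons)
    prob.solve()
    return prob.value   # a lower bound on the mixed-binary QP value
```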

Pessimistic bilevel linear optimization

In this paper, we investigate the pessimistic bilevel linear optimization problem (PBLOP). Based on the lower-level optimal value function and duality, the PBLOP can be transformed into a single-level, yet nonconvex and nonsmooth, optimization problem. Using linear optimization duality, we obtain a tractable and equivalent transformation and propose algorithms for computing global …
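The transformation the abstract refers to can be written generically as follows (the data $A, B, b, c, d, f$ are illustrative, not the paper's notation). Substituting the description of $S(x)$ turns the PBLOP into a single-level problem that is nonconvex and nonsmooth because the value function $\varphi$ is only piecewise linear.

```latex
% Pessimistic bilevel LP (generic template):
%   \min_x \max_y  c^\top x + d^\top y   s.t.  y \in S(x),
%   S(x) = \operatorname*{argmin}_{y'} \{ f^\top y' : B y' \le b - A x \}.
% Value function and its LP dual:
\varphi(x) = \min_{y}\{\, f^\top y : B y \le b - A x \,\}
           = \max_{\lambda \ge 0}\{\, (A x - b)^\top \lambda : B^\top \lambda = -f \,\},
\qquad
S(x) = \{\, y : B y \le b - A x,\ f^\top y \le \varphi(x) \,\}.
```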

Gradient Descent only Converges to Minimizers

We show that gradient descent with random initialization converges to a local minimizer almost surely. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
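A quick numerical illustration, not taken from the paper: for $f(x,y) = x^2/2 + y^4/4 - y^2/2$, which has a strict saddle at the origin and minimizers at $(0, \pm 1)$, randomly initialized gradient descent should essentially never get stuck at the saddle.

```python
import numpy as np

def gd_avoids_saddle(trials=1000, steps=2000, lr=0.1, seed=0):
    # f(x, y) = x^2/2 + y^4/4 - y^2/2: strict saddle at (0, 0),
    # minimizers at (0, +1) and (0, -1).
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.uniform(-2, 2, size=2)
        for _ in range(steps):
            x -= lr * x                 # df/dx = x
            y -= lr * (y**3 - y)        # df/dy = y^3 - y
        hits += (abs(abs(y) - 1.0) < 1e-6) and (abs(x) < 1e-6)
    return hits / trials

print(gd_avoids_saddle())   # expected fraction of trials at a minimizer: 1.0
```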

On the computation of convex envelopes for bivariate functions through KKT conditions

In this paper we exploit a slight variant of a result previously proved in [11] to define a procedure which delivers the convex envelope of some bivariate functions over polytopes. The procedure is based on the solution of a KKT system and simplifies the derivation of the convex envelope relative to previously proposed techniques. …
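For contrast with the KKT-based derivation, the convex envelope can always be approximated numerically: discretize the domain and compute, at each query point, the cheapest convex combination of sampled function values by linear programming. The scipy sketch below is purely such a baseline, not the paper's procedure; restricting to grid points yields an upper approximation of the envelope that tightens as the grid is refined.

```python
import numpy as np
from scipy.optimize import linprog

def envelope_at(f, verts, p, grid=25):
    # min sum_i lam_i f(s_i)  s.t.  sum_i lam_i s_i = p, sum_i lam_i = 1,
    # lam >= 0, over grid samples s_i of the bounding box of `verts`.
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    S = np.array([(x, y) for x in xs for y in ys])   # samples s_i
    c = np.array([f(s) for s in S])                  # objective f(s_i)
    A_eq = np.vstack([S.T, np.ones(len(S))])         # combination constraints
    b_eq = np.append(p, 1.0)
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun

# Sanity check on the bilinear term x*y over the unit square: its convex
# envelope is the McCormick bound max(0, x + y - 1), which is 0 at (0.5, 0.5).
verts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
print(envelope_at(lambda s: s[0] * s[1], verts, np.array([0.5, 0.5])))
```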

Conditional gradient type methods for composite nonlinear and stochastic optimization

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a (strongly) convex regularization term. While including a strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its rate of …
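For reference, the classical CG template that the paper builds on looks as follows; the $\ell_1$-ball oracle is just one example of a feasible set with a cheap linear minimization oracle. The CGT variant described above would additionally keep the (strongly) convex regularizer inside the subproblem, which requires a richer oracle than the plain LMO shown here.

```python
import numpy as np

def conditional_gradient(grad_f, lmo, x0, iters=200):
    # Classical CG / Frank-Wolfe:
    #   v_k     = argmin_{v in X} <grad_f(x_k), v>   (linear minimization oracle)
    #   x_{k+1} = x_k + alpha_k (v_k - x_k),  alpha_k = 2 / (k + 2).
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        v = lmo(grad_f(x))
        x += 2.0 / (k + 2) * (v - x)
    return x

def l1_ball_lmo(g, tau=1.0):
    # Minimizer of <g, v> over the l1 ball of radius tau: a signed vertex.
    v = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    v[i] = -tau * np.sign(g[i])
    return v
```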

Global Convergence of ADMM in Nonconvex Nonsmooth Optimization

In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, $\phi(x_1,\ldots,x_p,y)$, subject to linear equality constraints that couple $x_1,\ldots,x_p,y$, where $p\ge 1$ is an integer. Our ADMM sequentially updates the primal variables in the order $x_1,\ldots,x_p,y$, followed by updating the dual …
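A minimal sketch of the update order for the special case $p = 1$ with the consensus constraint $x - y = 0$; prox_f and prox_g stand for the two subproblem solvers, and f, g may be nonconvex as long as these maps are computable. The paper's analysis covers general linear coupling, which this toy loop does not.

```python
def admm(prox_f, prox_g, z0, iters=300):
    # Two-block ADMM for  min f(x) + g(y)  s.t.  x - y = 0,
    # in scaled dual form. prox_f(v) = argmin_x f(x) + (rho/2)||x - v||^2,
    # and analogously for prox_g, with rho fixed inside the prox oracles.
    x = y = u = z0.copy()
    for _ in range(iters):
        x = prox_f(y - u)    # primal update, first block
        y = prox_g(x + u)    # primal update, last block
        u = u + x - y        # scaled dual update
    return x, y
```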

Blessing of Massive Scale: Spatial Graphical Model Estimation with a Total Cardinality Constraint

We consider the problem of estimating high dimensional spatial graphical models with a total cardinality constraint (i.e., the $\ell_0$-constraint). Though this problem is highly nonconvex, we show that its primal-dual gap diminishes linearly with the dimensionality and provide a convex geometry justification of this ‘blessing of massive scale’ phenomenon. Motivated by this result, we propose …
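The basic primitive behind $\ell_0$-constrained estimation is the Euclidean projection onto the cardinality ball, i.e. hard thresholding to the $k$ largest-magnitude entries. The snippet below shows that step only; the paper's actual estimator for spatial graphical models is considerably more elaborate.

```python
import numpy as np

def project_l0(theta, k):
    # Euclidean projection onto {theta : ||theta||_0 <= k}:
    # keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(theta)
    idx = np.argpartition(np.abs(theta.ravel()), -k)[-k:]
    out.ravel()[idx] = theta.ravel()[idx]
    return out
```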

Generalized Uniformly Optimal Methods for Nonlinear Programming

In this paper, we present a generic framework to extend existing uniformly optimal convex programming algorithms to solve more general nonlinear, possibly nonconvex, optimization problems. The basic idea is to incorporate a local search step (a gradient descent or quasi-Newton iteration) into these uniformly optimal convex programming methods and then enforce a monotone decreasing property of …
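One way to read the "local search plus monotone decrease" idea is the toy combination below, assumed for illustration: run an accelerated step (uniformly optimal in the smooth convex case) alongside a plain gradient step and keep the better point, so the objective never increases. The paper's framework is far more general than this simple safeguard.

```python
def monotone_accelerated_gd(f, grad, x0, L, iters=500):
    # Accelerated gradient candidate plus a gradient-descent local search;
    # the GD step with stepsize 1/L guarantees descent for L-smooth f, so
    # taking the better of the two candidates enforces monotone decrease.
    x = x_prev = x0.copy()
    for k in range(iters):
        y = x + k / (k + 3.0) * (x - x_prev)     # extrapolation
        x_acc = y - grad(y) / L                  # accelerated candidate
        x_gd = x - grad(x) / L                   # local search candidate
        x_prev = x
        x = x_acc if f(x_acc) <= f(x_gd) else x_gd
    return x
```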