A second order dynamical approach with variable damping to nonconvex smooth minimization

We investigate a second order dynamical system with variable damping in connection with the minimization of a nonconvex differentiable function. The dynamical system is formulated in the spirit of the differential equation which models Nesterov’s accelerated convex gradient method. We show that the generated trajectory converges to a critical point, if a regularization of the … Read more
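
For orientation, the differential equation most commonly associated with Nesterov's accelerated gradient method, and hence the kind of variable damping referred to above, is the following standard formulation (quoted here as background; the paper's exact system may differ):

$$ \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0, \qquad \alpha \ge 3, $$

where the damping coefficient $\alpha/t$ vanishes as $t \to \infty$.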

Gradient Sampling Methods for Nonsmooth Optimization

This paper reviews the gradient sampling methodology for solving nonsmooth, nonconvex optimization problems. An intuitively straightforward gradient sampling algorithm is stated and its convergence properties are summarized. Throughout this discussion, we emphasize the simplicity of gradient sampling as an extension of the steepest descent method for minimizing smooth objectives. We then provide overviews of various … Read more
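
Below is a minimal Python sketch of one gradient sampling iteration, assuming user-supplied callables f and grad_f for the objective and a gradient that exists almost everywhere (these names, and the parameters eps, m, t0, beta, are illustrative, not the paper's notation); the stabilizing safeguards and the sampling-radius reduction of the full method are omitted.

    import numpy as np
    from scipy.optimize import minimize

    def gradient_sampling_step(f, grad_f, x, eps=1e-1, m=None, t0=1.0, beta=0.5, rng=None):
        """One simplified gradient sampling iteration: sample gradients in an
        eps-ball around x, take the minimum-norm element of their convex hull
        as the (negated) search direction, then backtrack along it."""
        rng = np.random.default_rng() if rng is None else rng
        n = x.size
        m = 2 * n if m is None else m
        # Gradients at x and at m points drawn from the eps-ball around x.
        pts = [x] + [x + eps * rng.uniform(-1.0, 1.0, n) for _ in range(m)]
        G = np.array([grad_f(p) for p in pts])                     # shape (m + 1, n)
        k = G.shape[0]
        # Minimum-norm convex combination: min ||G^T lam||^2 s.t. lam >= 0, sum(lam) = 1.
        res = minimize(lambda lam: 0.5 * np.dot(lam @ G, lam @ G),
                       np.full(k, 1.0 / k),
                       jac=lambda lam: G @ (G.T @ lam),
                       bounds=[(0.0, None)] * k,
                       constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
                       method="SLSQP")
        g = res.x @ G                       # approximate steepest-descent generator
        d = -g
        # Plain backtracking (Armijo) line search along d.
        t, fx = t0, f(x)
        while f(x + t * d) > fx - 1e-4 * t * np.dot(g, g) and t > 1e-12:
            t *= beta
        return x + t * d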

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed from two previous iterates as an approximation of the local Lipschitz constant, without running a linesearch. Thus, each iteration of the method requires only one evaluation of the monotone operator $F$ and one evaluation of the proximal mapping of $g$. … Read more
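
A hedged sketch of the fixed-step variant of the golden ratio algorithm is given below; the adaptive stepsize described in the abstract, which estimates the local Lipschitz constant from two previous iterates, is replaced here by a constant step for a known Lipschitz constant L. The callables F and prox_g (with prox_g(v, lam) returning the proximal point of lam*g at v) are assumed interfaces for illustration.

    import numpy as np

    def graal_fixed_step(F, prox_g, x0, L, n_iter=1000):
        """Fixed-step Golden Ratio Algorithm (simplified sketch): the adaptive
        stepsize of the paper, estimated from two previous iterates, is replaced
        by a constant step lam <= phi / (2 L) for a known Lipschitz constant L."""
        phi = (1.0 + np.sqrt(5.0)) / 2.0                     # the golden ratio
        lam = phi / (2.0 * L)
        x, x_bar = x0.copy(), x0.copy()
        for _ in range(n_iter):
            x_bar = ((phi - 1.0) * x + x_bar) / phi          # convex averaging step
            x = prox_g(x_bar - lam * F(x), lam)              # one F evaluation and one prox per iteration
        return x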

Fast Multilevel Algorithms for Compressive Principal Component Pursuit

Recovering a low-rank matrix from highly corrupted measurements arises in compressed sensing of structured high-dimensional signals (e.g., videos and hyperspectral images among others). Robust principal component analysis (RPCA), solved via principal component pursuit (PCP), recovers a low-rank matrix from sparse corruptions that are of unknown value and support by decomposing the observation matrix into two … Read more
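
As background (not a statement of the multilevel algorithm itself), the principal component pursuit problem underlying this work is the convex decomposition

$$ \min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = M, $$

where $M$ is the observation matrix, $\|\cdot\|_*$ the nuclear norm and $\|\cdot\|_1$ the entrywise $\ell_1$ norm; in the compressive setting the equality constraint is enforced only on the available measurements of $M$.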

Block Coordinate Proximal Gradient Method for Nonconvex Optimization Problems: Convergence Analysis

We propose a block coordinate proximal gradient method for a composite minimization problem with two nonconvex function components in the objective, only one of which is assumed to be differentiable. Under per-block Lipschitz-like conditions based on a Bregman distance, but without global Lipschitz continuity of the gradient of the differentiable function, we prove … Read more
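
A simplified sketch of a cyclic block coordinate proximal gradient loop is shown below, written with the ordinary Euclidean proximal map; the paper's updates are instead stated through a Bregman distance under the per-block conditions above. The callables grad_blocks and prox_blocks and the stepsizes are assumed user-supplied interfaces.

    def block_prox_grad(grad_blocks, prox_blocks, x_blocks, steps, n_epochs=100):
        """Cyclic block coordinate proximal gradient (simplified sketch using the
        Euclidean proximal map; the paper's updates use a Bregman distance under
        per-block Lipschitz-like conditions instead).

        grad_blocks[i](x_blocks): partial gradient of the smooth term w.r.t. block i
        prox_blocks[i](v, t):     proximal map of the block-i nonsmooth term with parameter t
        steps[i]:                 stepsize for block i (e.g. 1 / L_i)
        """
        for _ in range(n_epochs):
            for i in range(len(x_blocks)):
                g = grad_blocks[i](x_blocks)
                x_blocks[i] = prox_blocks[i](x_blocks[i] - steps[i] * g, steps[i])
        return x_blocks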

Derivative-Free Superiorization With Component-Wise Perturbations

Superiorization reduces, not necessarily minimizes, the value of a target function while seeking constraints-compatibility. This is done by taking a solely feasibility-seeking algorithm, analyzing its perturbation resilience, and proactively perturbing its iterates accordingly to steer them toward a feasible point with a reduced value of the target function. When the perturbation steps are computationally efficient, this … Read more
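
The toy sketch below illustrates the general superiorization pattern with derivative-free, component-wise perturbations: a feasibility-seeking sweep is interlaced with perturbations that are accepted only when they reduce the target function. The interfaces feasibility_op and phi (and a NumPy-vector iterate x0) are assumptions for illustration, and the bounded-perturbation bookkeeping required for the formal resilience guarantees is omitted.

    def superiorize(feasibility_op, phi, x0, n_iter=50, delta0=1.0, shrink=0.5):
        """Toy superiorization loop: interlace a feasibility-seeking sweep with
        derivative-free, component-wise perturbations that are accepted only when
        they reduce the target function phi."""
        x = x0.copy()
        delta = delta0
        for _ in range(n_iter):
            # Derivative-free, component-wise perturbation of the current iterate.
            for j in range(x.size):
                for step in (delta, -delta):
                    trial = x.copy()
                    trial[j] += step
                    if phi(trial) < phi(x):
                        x = trial
                        break
            delta *= shrink                  # keep the perturbations summable
            x = feasibility_op(x)            # one sweep of the feasibility-seeking algorithm
        return x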

A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization

In this work, we present a globalized stochastic semismooth Newton method for solving stochastic optimization problems involving smooth nonconvex and nonsmooth convex terms in the objective function. We assume that only noisy gradient and Hessian information of the smooth part of the objective function is available, obtained by calling stochastic first- and second-order oracles. The … Read more
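
As background (not the paper's exact construction), semismooth Newton methods of this type are usually built on the natural-residual reformulation of the optimality condition: for a smooth part $f$ and a nonsmooth convex part $\varphi$,

$$ F_\lambda(x) = x - \operatorname{prox}_{\lambda \varphi}\!\big(x - \lambda \nabla f(x)\big) = 0, \qquad \lambda > 0, $$

and each Newton step solves $M_k d^k = -F_\lambda(x^k)$ for an element $M_k$ of a suitable generalized Jacobian of $F_\lambda$. In the stochastic setting, $\nabla f$ and the Hessian information entering $M_k$ are replaced by the outputs of the first- and second-order oracles.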

Inexact Successive Quadratic Approximation for Regularized Optimization

Successive quadratic approximations, or second-order proximal methods, are useful for minimizing functions that are a sum of a smooth part and a convex, possibly nonsmooth, part that promotes regularization. Most analyses of iteration complexity focus on the special case of the proximal gradient method, or accelerated variants thereof. There have been only a few studies of … Read more
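
One such successive quadratic approximation (proximal Newton) step, with the subproblem solved inexactly by a few proximal gradient iterations, can be sketched as follows; grad_f, hess_f and prox_psi (with prox_psi(v, t) returning the proximal point of t*psi at v) are assumed callables, and the acceptance test or line search used to globalize the step is omitted.

    import numpy as np

    def prox_newton_step(grad_f, hess_f, prox_psi, x, inner_iters=10):
        """One successive-quadratic-approximation (proximal Newton) step with an
        inexact subproblem solve: the model
            min_d  grad_f(x)^T d + 0.5 d^T H d + psi(x + d)
        is approximately minimized by a few proximal gradient iterations."""
        g, H = grad_f(x), hess_f(x)
        t = 1.0 / max(np.linalg.norm(H, 2), 1e-12)   # inner stepsize from the model's curvature
        d = np.zeros_like(x)
        for _ in range(inner_iters):                 # inexact inner solve
            model_grad = g + H @ d                   # gradient of the quadratic model at x + d
            d = prox_psi(x + d - t * model_grad, t) - x
        return x + d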

A Simple Nearly-Optimal Restart Scheme For Speeding-Up First-Order Methods

We present a simple scheme for restarting first-order methods for convex optimization problems. Restarts are made based only on achieving specified decreases in objective values, the specified amounts being the same for all optimization problems. Unlike existing restart schemes, the scheme makes no attempt to learn parameter values characterizing the structure of an optimization problem, … Read more
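
As a rough illustration of the idea only (not the authors' nearly-optimal scheme, which combines several such runs with different target decreases), a decrease-triggered restart wrapper might look as follows; fom_step_factory is an assumed interface that returns a step() callable advancing the underlying first-order method from a given warm start.

    def restart_on_decrease(fom_step_factory, f, x0, decrease, max_iters=10000):
        """Illustrative decrease-triggered restart wrapper: advance a first-order
        method and restart it from the current point whenever the objective has
        dropped by the pre-specified amount `decrease`."""
        x = x0
        step = fom_step_factory(x)            # e.g. a closure advancing accelerated gradient
        f_anchor = f(x)
        for _ in range(max_iters):
            x = step()
            if f(x) <= f_anchor - decrease:
                step = fom_step_factory(x)    # restart: resets momentum / internal state
                f_anchor = f(x)
        return x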

ADMM for Multiaffine Constrained Optimization

We propose an expansion of the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain easily verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the … Read more
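
For reference, the template being extended is the augmented Lagrangian with alternating block updates. Writing a two-block problem with a multiaffine constraint $C(x,y)=0$ (affine in each block when the other is fixed), a sketch of the setting, not the paper's precise assumptions, reads

$$ \mathcal{L}_\rho(x,y,w) = f(x) + g(y) + \langle w,\, C(x,y) \rangle + \frac{\rho}{2}\,\|C(x,y)\|^2, $$

$$ x^{k+1} \in \arg\min_x \mathcal{L}_\rho(x, y^k, w^k), \qquad y^{k+1} \in \arg\min_y \mathcal{L}_\rho(x^{k+1}, y, w^k), \qquad w^{k+1} = w^k + \rho\, C(x^{k+1}, y^{k+1}), $$

with $\rho$ the penalty parameter whose sufficiently large choice drives the convergence result stated above.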