Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization

Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general nonsmooth, nonconvex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these …
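
As a point of reference, the following is a minimal sketch (not the paper's method) of the synchronous stochastic subgradient iteration that such asynchronous schemes build on, written in Python for a toy nonsmooth problem; the data, the subgradient oracle, and the step-size rule are all illustrative choices.

```python
import numpy as np

# Toy piecewise-linear (nonsmooth) losses f_i(x) = |a_i^T x - b_i|.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)

def subgradient(x, i):
    """A subgradient of f_i at x (sign convention arbitrary at the kink)."""
    r = A[i] @ x - b[i]
    return np.sign(r) * A[i]

x = np.zeros(5)
for k in range(1000):
    i = rng.integers(0, 100)                              # sample one term
    x -= (1.0 / np.sqrt(k + 1)) * subgradient(x, i)       # diminishing step size
```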

Pathfollowing for Parametric Mathematical Programs with Complementarity Constraints

In this paper we study procedures for pathfollowing parametric mathematical programs with complementarity constraints. We present two procedures, one based on the penalty approach to solving standalone MPCCs, and one based on tracing active set bifurcations arising from doubly-active complementarity constraints. We demonstrate the performance of these approaches on a variety of examples …
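
For context, one common penalty reformulation used when solving a standalone MPCC (the first of the two approaches mentioned) can be sketched as follows; the notation is illustrative, and the paper's parametric formulation may differ.

```latex
% Original MPCC: complementarity constraint 0 <= G(x), 0 <= H(x), G(x)^T H(x) = 0.
% Penalty reformulation: move the complementarity product into the objective.
\min_{x} \; f(x) + \rho \, G(x)^{\top} H(x)
\quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0, \; G(x) \ge 0, \; H(x) \ge 0
```

Here the penalty parameter rho > 0 is typically increased until the complementarity residual G(x)^T H(x) is driven to zero.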

A Comparison of Nonsmooth, Nonconvex, Constrained Optimization Solvers for the Design of Time-Delay Compensators

We present a detailed set of performance comparisons of two state-of-the-art solvers for the application of designing time-delay compensators, an important problem in the field of robust control. Formulating such robust control tasks as constrained optimization problems often involves objective and constraint functions that are both nonconvex and nonsmooth, both of which present significant challenges …

A Subsampling Line-Search Method with Second-Order Results

In many contemporary optimization problems such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization such as a trust …
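
A minimal sketch, assuming a toy least-squares objective, of how a subsampled gradient step can be paired with a backtracking (Armijo) line search evaluated on the same sample; the batch size, constants, and function names are illustrative and not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.standard_normal((500, 10)), rng.standard_normal(500)   # toy data

def loss_and_grad(x, idx):
    """Loss and gradient of 0.5*||A_S x - b_S||^2 / |S| on a subsample S = idx."""
    r = A[idx] @ x - b[idx]
    return 0.5 * r @ r / len(idx), A[idx].T @ r / len(idx)

def subsampled_linesearch_step(x, batch=64, alpha0=1.0, c=1e-4, tau=0.5):
    idx = rng.choice(len(b), size=batch, replace=False)   # draw a subsample
    f, g = loss_and_grad(x, idx)
    alpha = alpha0
    # Backtrack until the Armijo decrease condition holds on the sampled model.
    while loss_and_grad(x - alpha * g, idx)[0] > f - c * alpha * g @ g:
        alpha *= tau
    return x - alpha * g

x = np.zeros(10)
for _ in range(200):
    x = subsampled_linesearch_step(x)
```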

Second-order Guarantees of Distributed Gradient Algorithms

We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class …
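
For reference, the DGD iteration discussed in the abstract combines neighbor averaging with a local gradient step; below is a minimal synchronous sketch on a toy quadratic problem, with the mixing matrix, step size, and local objectives chosen purely for illustration.

```python
import numpy as np

n_agents, dim, alpha = 4, 3, 0.05
rng = np.random.default_rng(2)

# Doubly stochastic mixing matrix of a 4-node ring graph (illustrative choice).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

targets = rng.standard_normal((n_agents, dim))       # each agent's local data
grad = lambda i, x: x - targets[i]                   # gradient of f_i(x) = 0.5*||x - t_i||^2

X = rng.standard_normal((n_agents, dim))             # one row of X per agent
for _ in range(500):
    # DGD: consensus averaging with neighbors, followed by a local gradient step.
    X = W @ X - alpha * np.array([grad(i, X[i]) for i in range(n_agents)])
```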

A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes …
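
As a reminder of the underlying scheme, a single deterministic Levenberg-Marquardt step solves the damped Gauss-Newton normal equations; the sketch below uses a toy exponential-fitting problem and a simple damping update, whereas the paper replaces the exact residual and Jacobian with random models.

```python
import numpy as np

def lm_step(residual, jacobian, x, lam):
    """Solve (J^T J + lam*I) d = -J^T r and return the trial point x + d."""
    r, J = residual(x), jacobian(x)
    d = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
    return x + d

# Toy nonlinear least squares: fit y = exp(a*t) at a few sample times.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * t)
residual = lambda x: np.exp(x[0] * t) - y
jacobian = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)

x, lam = np.array([0.0]), 1.0
for _ in range(20):
    x_new = lm_step(residual, jacobian, x, lam)
    # Standard damping update: accept and decrease lam if the residual improved.
    if np.linalg.norm(residual(x_new)) < np.linalg.norm(residual(x)):
        x, lam = x_new, lam / 2
    else:
        lam *= 2
```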

A Shifted Primal-Dual Interior Method for Nonlinear Optimization

Interior methods provide an effective approach for the treatment of inequality constraints in nonlinearly constrained optimization. A new primal-dual interior method is proposed based on minimizing a sequence of shifted primal-dual penalty-barrier functions. Certain global convergence properties are established. In particular, it is shown that every limit point is either an infeasible stationary point, or …
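
To fix ideas, the classical (unshifted) log-barrier function for inequality constraints c(x) >= 0 is shown below; the paper's shifted primal-dual penalty-barrier function augments this construction with penalty terms and shifts of the constraints and multipliers, so this is only an illustration of the general idea.

```latex
% Classical log-barrier merit function for minimizing f subject to c(x) >= 0.
B_{\mu}(x) \;=\; f(x) \;-\; \mu \sum_{i=1}^{m} \ln c_i(x), \qquad \mu > 0
```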

Convergence and Complexity Analysis of a Levenberg-Marquardt Algorithm for Inverse Problems

The Levenberg-Marquardt algorithm is one of the most popular algorithms for finding the solution of nonlinear least squares problems. Across various modifications of the basic procedure, the algorithm enjoys global convergence, a competitive worst case iteration complexity rate, and a guaranteed rate of local convergence for both zero and nonzero small residual problems, under …

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part I: Model and Convergence

We propose a novel asynchronous parallel algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed framework hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of modern computational architectures and asynchronous …
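
A minimal synchronous sketch of one successive-convex-approximation block update for a composite objective F(x) + G(x), with F smooth nonconvex and G convex nonsmooth; the proximal-linear surrogate, block selection, and step size below are illustrative choices, and the asynchronous probabilistic model of the paper is not captured here.

```python
import numpy as np

rng = np.random.default_rng(3)
A, b, lam = rng.standard_normal((50, 20)), rng.standard_normal(50), 0.1

grad_F = lambda x: A.T @ (A @ x - b)                    # smooth part: 0.5*||Ax - b||^2
prox_G = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)   # G = lam*||x||_1

def sca_block_update(x, block):
    """Minimize a strongly convex (proximal-linear) surrogate of F + G on one
    block, then take a step toward the surrogate minimizer."""
    L = np.linalg.norm(A, 2) ** 2                       # surrogate curvature constant
    x_hat = x.copy()
    x_hat[block] = prox_G(x[block] - grad_F(x)[block] / L, 1.0 / L)
    gamma = 0.9                                         # step size toward the block solution
    return x + gamma * (x_hat - x)

x = np.zeros(20)
for k in range(300):
    blk = rng.integers(0, 4)                            # pick one of 4 blocks uniformly
    x = sca_block_update(x, slice(5 * blk, 5 * (blk + 1)))
```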

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part II: Complexity and Numerical Results

We present complexity and numerical results for a new asynchronous parallel algorithmic method for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed method hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of …