Iteration-complexity of a Jacobi-type non-Euclidean ADMM for multi-block linearly constrained nonconvex programs

This paper establishes the iteration-complexity of a Jacobi-type non-Euclidean proximal alternating direction method of multipliers (ADMM) for solving multi-block linearly constrained nonconvex programs. The subproblems of this ADMM variant can be solved in parallel, and hence the method has great potential for solving large-scale multi-block linearly constrained nonconvex programs. Moreover, our analysis allows the …
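
As a point of reference, the "Jacobi" structure means every block is updated from the same previous iterate, so the block updates are mutually independent. Below is a minimal Python sketch of one such sweep for $\min \sum_i f_i(x_i)$ subject to $\sum_i A_i x_i = b$; it only illustrates the parallel-update pattern, with simple linearized steps standing in for the paper's non-Euclidean proximal subproblems, and all names and parameters are our own.

    import numpy as np

    # Illustrative sketch only: simple linearized steps stand in for the
    # paper's non-Euclidean proximal subproblems; all names are our own.
    def jacobi_admm_step(x, lam, A, b, grads, rho=1.0, tau=10.0):
        """One Jacobi sweep: every block reads the SAME previous iterate."""
        r = sum(A[i] @ x[i] for i in range(len(x))) - b       # constraint residual
        x_new = []
        for i in range(len(x)):                               # embarrassingly parallel
            # gradient of the augmented Lagrangian in block i at the old point
            g = grads[i](x[i]) + A[i].T @ (lam + rho * r)
            x_new.append(x[i] - g / tau)                      # linearized proximal step
        r_new = sum(A[i] @ x_new[i] for i in range(len(x))) - b
        return x_new, lam + rho * r_new                       # primal blocks, dual update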

Iteration-Complexity of a Linearized Proximal Multiblock ADMM Class for Linearly Constrained Nonconvex Optimization Problems

This paper analyzes the iteration-complexity of a class of linearized proximal multiblock alternating direction methods of multipliers (ADMM) for solving linearly constrained nonconvex optimization problems. The subproblems of the linearized ADMM are obtained by partially or fully linearizing the augmented Lagrangian with respect to the corresponding minimizing block variable. The derived complexity bounds do not …
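
For orientation, fully linearizing the augmented Lagrangian $\mathcal{L}_\beta$ at the current point turns the $i$-th block subproblem into a single explicit step (a generic form in our notation; the paper's partial-linearization variants differ in which terms stay exact):

$$ x_i^{k+1} = \operatorname*{argmin}_{x_i} \Big\{ \big\langle \nabla_{x_i}\mathcal{L}_\beta(x^k,\lambda^k),\, x_i - x_i^k \big\rangle + \frac{\tau_i}{2}\,\|x_i - x_i^k\|^2 \Big\} = x_i^k - \frac{1}{\tau_i}\,\nabla_{x_i}\mathcal{L}_\beta(x^k,\lambda^k), $$

where $\tau_i > 0$ is the proximal stepsize parameter for block $i$.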

A Novel Approach for Solving Convex Problems with Cardinality Constraints

In this paper we consider the problem of minimizing a convex differentiable function subject to sparsity constraints. Such constraints are non-convex and the resulting optimization problem is known to be hard to solve. We propose a novel generalization of this problem and demonstrate that it is equivalent to the original sparsity-constrained problem if a certain …
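
To make the constraint concrete: a cardinality (sparsity) constraint $\|x\|_0 \le s$ is nonconvex because the set of $s$-sparse vectors is a union of subspaces. The standard Euclidean projection onto it is hard thresholding, sketched below in Python (a textbook building block for such problems, not the paper's proposed reformulation):

    import numpy as np

    # Standard hard thresholding; shown for context, not the paper's method.
    def project_cardinality(x, s):
        """Keep the s largest-magnitude entries of x; zero out the rest."""
        z = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]    # indices of the s largest |x_i|
        z[idx] = x[idx]
        return z

    print(project_cardinality(np.array([0.5, -3.0, 1.2, 0.1]), 2))
    # -> [ 0.  -3.   1.2  0. ]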

Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary

In this paper we consider the minimization of a continuous function that is potentially not differentiable or not twice differentiable on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is …
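
A toy instance of the setting (our illustration, not taken from the paper): minimizing $f(x) = x^{3/2} - x$ over $x \ge 0$ is smooth on the interior, but $f''(x) = \tfrac{3}{4}x^{-1/2}$ is unbounded as $x \to 0^+$, so $f$ fails to be twice differentiable at the boundary point $x = 0$ and classical second-order conditions cannot be applied there directly.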

Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems

This paper establishes convergence rate bounds for a variant of the proximal alternating direction method of multipliers (ADMM) for solving nonconvex linearly constrained optimization problems. The variant of the proximal ADMM allows the inclusion of an over-relaxation stepsize parameter belonging to the interval (0,2). To the best of our knowledge, all related papers in the …
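
For context, one common way such a parameter enters (a generic form; the paper's exact scheme may place it differently) is in the multiplier update of ADMM with penalty $\beta > 0$:

$$ \lambda^{k+1} = \lambda^k + \theta\,\beta\,(A x^{k+1} - b), \qquad \theta \in (0,2), $$

where $\theta = 1$ recovers the standard dual step and $\theta > 1$ over-relaxes it.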

Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization

A method is proposed for solving equality constrained nonlinear optimization problems involving twice continuously differentiable functions. The method employs a trust funnel approach consisting of two phases: a first phase to locate an $\epsilon$-feasible point and a second phase to seek optimality while maintaining at least $\epsilon$-feasibility. A two-phase approach of this kind based on …
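
The two-phase control flow can be caricatured in a few lines of Python (our schematic with hypothetical callables f, grad_f, c, jac_c; the actual trust-funnel machinery, with its funnel radius and trust regions, is far more sophisticated):

    import numpy as np

    # Schematic only: phase 1 drives ||c(x)|| below eps, phase 2 decreases
    # f while rejecting steps that would lose eps-feasibility.
    def two_phase(f, grad_f, c, jac_c, x, eps=1e-4, alpha=1e-2, iters=1000):
        for _ in range(iters):                    # phase 1: reach eps-feasibility
            v = c(x)
            if np.linalg.norm(v) <= eps:
                break
            x = x - alpha * jac_c(x).T @ v        # descent step on ||c(x)||^2 / 2
        for _ in range(iters):                    # phase 2: seek optimality
            trial = x - alpha * grad_f(x)
            if np.linalg.norm(c(trial)) <= eps:   # keep at least eps-feasibility
                x = trial                         # (a real method would restore
        return x                                  #  feasibility rather than reject)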

Global Convergence of ADMM in Nonconvex Nonsmooth Optimization

In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, $\phi(x_0,\ldots,x_p,y)$, subject to coupled linear equality constraints. Our ADMM updates each of the primal variables $x_0,\ldots,x_p,y$, followed by updating the dual variable. We separate the variable $y$ from $x_i$’s as it …
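
Writing the constraint as $\sum_i A_i x_i + B y = b$ with multiplier $w$ and augmented Lagrangian $\mathcal{L}_\rho$, the sweep described above has the generic Gauss-Seidel form (our transcription of the stated update order, with all structural details omitted):

$$ x_i^{k+1} = \operatorname*{argmin}_{x_i}\, \mathcal{L}_\rho\big(x_0^{k+1},\ldots,x_{i-1}^{k+1}, x_i, x_{i+1}^k,\ldots,x_p^k, y^k; w^k\big), \quad i = 0,\ldots,p, $$

$$ y^{k+1} = \operatorname*{argmin}_{y}\, \mathcal{L}_\rho\big(x_0^{k+1},\ldots,x_p^{k+1}, y; w^k\big), \qquad w^{k+1} = w^k + \rho\Big(\sum_i A_i x_i^{k+1} + B y^{k+1} - b\Big). $$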

A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning

Machine learning theory typically assumes that training data is unbiased and not adversarially generated. When real training data deviates from these assumptions, trained models make erroneous predictions, sometimes with disastrous effects. Robust losses, such as the Huber norm, are designed to mitigate the effects of such contaminated data, but they are limited to the regression …
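
For reference, the Huber loss (a standard robust loss, shown for context rather than as the paper's contribution) is quadratic for small residuals and linear for large ones, so outliers contribute only linearly:

    import numpy as np

    # Standard Huber loss; illustrative, not the paper's algorithm.
    def huber(r, delta=1.0):
        """0.5*r^2 if |r| <= delta, else delta*(|r| - 0.5*delta)."""
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

    print(huber(np.array([0.1, 5.0])))   # -> [0.005 4.5]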

Algorithm XXX: SC-SR1: MATLAB Software for Solving Shape-Changing L-SR1 Trust-Region Subproblems

We present a MATLAB implementation of the shape-changing symmetric rank-one (SC-SR1) method that solves trust-region subproblems when a limited-memory symmetric rank-one (L-SR1) matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms [4, 3] to decompose the trust-region subproblem into two separate problems. Using one of …
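
For reference, the subproblem being solved at each trust-region iteration has the generic form

$$ \min_{s \in \mathbb{R}^n} \; g^{\top} s + \tfrac{1}{2}\, s^{\top} B s \quad \text{subject to} \quad \|s\| \le \Delta, $$

where $B$ is the L-SR1 approximation to the Hessian, $\Delta$ is the trust-region radius, and the shape-changing norms of [4, 3] replace the usual choice of $\|\cdot\|$ to enable the decomposition mentioned above.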

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ($\mathcal{SFO}$). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case …
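
To fix ideas, a bare-bones stochastic quasi-Newton loop under the SFO model might look as follows (our sketch with hypothetical names, not the paper's framework; a standard BFGS inverse-Hessian update preconditions steps computed from noisy oracle gradients):

    import numpy as np

    # Illustrative sketch: sfo(x) is a hypothetical oracle returning a
    # noisy gradient of the objective at x.
    def sqn(sfo, x, alpha=1e-2, iters=100):
        H = np.eye(x.size)                           # inverse-Hessian approximation
        g = sfo(x)                                   # noisy gradient from the oracle
        for _ in range(iters):
            x_new = x - alpha * H @ g
            g_new = sfo(x_new)
            s, y = x_new - x, g_new - g
            if s @ y > 1e-10:                        # curvature check; skip bad pairs
                rho = 1.0 / (s @ y)
                V = np.eye(x.size) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS update of H
            x, g = x_new, g_new
        return x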