A proximal-perturbed Bregman ADMM for solving nonsmooth and nonconvex optimization problems

In this paper, we focus on a linearly constrained composite minimization problem whose objective function is possibly nonsmooth and nonconvex. Unlike the traditional construction of the augmented Lagrangian function, we introduce a proximal-perturbed augmented Lagrangian and then develop a new Bregman Alternating Direction Method of Multipliers (ADMM). Under mild assumptions, we show that the novel augmented … Read more
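
For orientation, the traditional augmented Lagrangian that the proximal-perturbed construction departs from takes, for a linearly constrained composite problem $\min f(x) + g(y)$ subject to $Ax + By = b$, the standard form

$$L_\rho(x, y, \lambda) = f(x) + g(y) + \langle \lambda, Ax + By - b \rangle + \frac{\rho}{2} \|Ax + By - b\|^2,$$

and ADMM alternates minimization over $x$ and $y$ with a dual update on $\lambda$; a Bregman variant replaces the quadratic proximal term with a Bregman distance $D_\phi$. This is only the generic baseline; the paper's specific perturbation is not reproduced here.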

An adaptive relaxation-refinement scheme for multi-objective mixed-integer nonconvex optimization

In this work, we present an algorithm for computing an enclosure for multi-objective mixed-integer nonconvex optimization problems. In contrast to existing solvers for this type of problem, this algorithm is not based on a branch-and-bound scheme but rather relies on a relax-and-refine approach. While this is an established technique in single-objective optimization, several adaptations to … Read more
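
As a single-objective toy of the relax-and-refine idea (a minimal sketch, not the authors' multi-objective mixed-integer scheme; the test function and the Lipschitz constant $L = 11$ are assumptions specific to this illustration):

import math

def f(x):
    # Nonconvex test function on [0, 10]; |f'(x)| <= 1 + 10, so L = 11 works.
    return x * math.sin(x)

def relax_and_refine(a=0.0, b=10.0, L=11.0, tol=1e-3):
    # Each cell [l, r] carries the Lipschitz lower bound
    # (f(l) + f(r)) / 2 - L * (r - l) / 2, a piecewise-linear relaxation.
    cells = [(a, b)]
    best_upper = min(f(a), f(b))          # feasible points give upper bounds
    while True:
        bound, l, r = min(((f(l) + f(r)) / 2 - L * (r - l) / 2, l, r)
                          for (l, r) in cells)
        if best_upper - bound <= tol:     # enclosure of the optimal value is tight
            return bound, best_upper
        m = (l + r) / 2                   # refine: bisect the weakest cell
        best_upper = min(best_upper, f(m))
        cells.remove((l, r))
        cells += [(l, m), (m, r)]

print(relax_and_refine())

The relaxation (the piecewise-linear lower bound) is tightened adaptively only where it is weakest, which is the relax-and-refine pattern the abstract contrasts with branch and bound.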

Unifying nonlinearly constrained nonconvex optimization

Derivative-based iterative methods for nonlinearly constrained nonconvex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework built from four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, … Read more
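
Loosely following the abstract's observation that such methods share a descent-direction strategy and a globalization mechanism, the sketch below wires two such interchangeable ingredients into one workflow on an unconstrained toy problem (all names are illustrative; this is not Uno's actual interface):

# Two interchangeable "ingredients": a descent-direction strategy and a
# globalization mechanism (Armijo backtracking). All names are illustrative.
def steepest_descent_direction(g):
    return [-gi for gi in g]

def armijo_backtracking(f, x, fx, g, d, beta=0.5, c=1e-4):
    # Shrink the step until sufficient decrease holds; terminates because
    # d is a descent direction (slope < 0).
    t, slope = 1.0, sum(gi * di for gi, di in zip(g, d))
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + c * t * slope:
        t *= beta
    return t

def solve(f, grad, x, direction, globalization, tol=1e-3, max_iter=50000):
    # Abstract workflow: any (direction, globalization) pair plugs in here.
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) <= tol:
            break
        d = direction(g)
        t = globalization(f, x, f(x), g, d)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# Rosenbrock toy (steepest descent converges, though slowly).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                        200 * (x[1] - x[0]**2)]
print(solve(rosen, rosen_grad, [-1.2, 1.0], steepest_descent_direction, armijo_backtracking))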

Distributionally Robust Optimization with Decision-Dependent Polyhedral Ambiguity

We consider a two-stage stochastic program with continuous recourse, where the distribution of the random parameters depends on the decisions. Assuming a finite sample space, we study a distributionally robust approach to this problem, where the decision-dependent distributional ambiguity is modeled with a polyhedral ambiguity set. We consider cases where the recourse function and the … Read more
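
A generic form of such a model (the paper's exact formulation may differ) is

$$\min_{x \in X} \; c^\top x + \max_{P \in \mathcal{P}(x)} \mathbb{E}_P[Q(x, \xi)],$$

where $Q(x, \xi)$ is the continuous-recourse second-stage value and, over a finite sample space, $\mathcal{P}(x) = \{ p \ge 0 : \mathbf{1}^\top p = 1, \; W(x) p \le h(x) \}$ is a polyhedral ambiguity set whose description depends on the first-stage decision $x$ (the maps $W$ and $h$ here are illustrative placeholders).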

A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex optimization is presented that never computes the objective-function value, yet achieves the optimal $\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding first-order critical points. The method is noise-tolerant, and the inexactness conditions required for convergence depend on the history of past steps. Applications to cases where derivative evaluation … Read more
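
For context, a second-order adaptive-regularization step solves the cubic model

$$s_k \in \arg\min_s \; g_k^\top s + \tfrac{1}{2} s^\top H_k s + \tfrac{\sigma_k}{3} \|s\|^3,$$

where $g_k$ and $H_k$ are gradient and Hessian information (here, stochastic estimates) and $\sigma_k$ is the adaptive regularization weight. In the objective-function-free setting, $\sigma_k$ must be adapted from step-history information rather than from observed function-value decrease; the specific update rule is the paper's contribution and is not reproduced here.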

A four-operator splitting algorithm for nonconvex and nonsmooth optimization

In this work, we address a class of nonconvex nonsmooth optimization problems where the objective function is the sum of two smooth functions (one of which is proximable) and two nonsmooth functions (one proper, closed and proximable, and the other continuous and weakly concave). We introduce a new splitting algorithm that extends the Davis-Yin splitting … Read more
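
For reference, the Davis-Yin splitting being extended applies to $\min_x f(x) + g(x) + h(x)$ with $h$ smooth and iterates

$$x_k^g = \operatorname{prox}_{\gamma g}(z_k), \qquad x_k^f = \operatorname{prox}_{\gamma f}\big(2 x_k^g - z_k - \gamma \nabla h(x_k^g)\big), \qquad z_{k+1} = z_k + \lambda_k (x_k^f - x_k^g).$$

How the fourth (continuous, weakly concave) term enters the new four-operator scheme is the paper's contribution and is not reproduced here.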

Understanding the Douglas-Rachford splitting method through the lenses of Moreau-type envelopes

We analyze the Douglas-Rachford splitting method for weakly convex optimization problems by means of the Douglas-Rachford envelope, a merit function akin to the Moreau envelope. First, we use epi-convergence techniques to show that this envelope approximates the original objective function via epigraphs. Second, we show how global convergence and local linear convergence rates for … Read more
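
Recall the prototype of such envelopes, the Moreau envelope

$$e_\gamma f(x) = \inf_y \Big\{ f(y) + \tfrac{1}{2\gamma} \|x - y\|^2 \Big\},$$

which is well defined and smooths $f$ for suitably small $\gamma$ in the weakly convex setting; the Douglas-Rachford envelope plays the analogous merit-function role for the splitting iterates.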

A Proximal-Gradient Method for Constrained Optimization

We present a new algorithm for solving optimization problems whose objective functions are the sum of a smooth function and a (potentially) nonsmooth regularization function, subject to nonlinear equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method, which is applicable when constraints are not present. To account for nonlinear … Read more
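
The baseline step being extended is the classical proximal-gradient update for $\min_x f(x) + g(x)$,

$$x_{k+1} = \operatorname{prox}_{\alpha_k g}\big(x_k - \alpha_k \nabla f(x_k)\big), \qquad \operatorname{prox}_{\alpha g}(v) = \arg\min_x \Big\{ g(x) + \tfrac{1}{2\alpha} \|x - v\|^2 \Big\},$$

which by itself has no mechanism for the nonlinear equality constraints $c(x) = 0$ that the paper's algorithm accounts for.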

Using Filter Methods to Guide Convergence for ADMM, with Applications to Nonnegative Matrix Factorization Problems

Nonconvex, nonlinear cost functions arise naturally in physical inverse problems and machine learning. The alternating direction method of multipliers (ADMM) has seen extensive use in these applications, despite exhibiting uncertain convergence behavior in many practical nonconvex settings, and struggling with general nonlinear constraints. In contrast, filter methods have proved effective in enforcing convergence for sequential … Read more
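
A minimal sketch of the classical filter acceptance test (generic Fletcher-Leyffer-style mechanics, not the paper's coupling with ADMM; the margin parameters are illustrative):

def acceptable(filter_entries, v, f, gamma_v=1e-5, gamma_f=1e-5):
    # A trial point with constraint violation v and objective f is acceptable
    # if, against every filter entry (v_i, f_i), it sufficiently reduces
    # either the violation or the objective (the sloping-envelope condition).
    return all(v <= (1 - gamma_v) * v_i or f <= f_i - gamma_f * v_i
               for v_i, f_i in filter_entries)

def add_to_filter(filter_entries, v, f):
    # Drop entries dominated by the new pair, then insert it.
    kept = [(v_i, f_i) for v_i, f_i in filter_entries if v_i < v or f_i < f]
    kept.append((v, f))
    return kept

# Example: the pair (0.5, 1.0) dominates (0.6, 2.0) but not (0.1, 3.0).
entries = add_to_filter([(0.6, 2.0), (0.1, 3.0)], 0.5, 1.0)
print(entries, acceptable(entries, 0.4, 0.9))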

A novel algorithm for a broad class of nonconvex optimization problems

In this paper, we propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, such as concave minimization problems, difference of convex problems, and fractional optimization problems. Our approach exploits … Read more
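
Concretely, a plausible reading of that problem class (the paper's precise format may differ) is

$$\min_{x \in X} \; f_0(x) + \sum_{i=1}^m \prod_{j=1}^{n_i} f_{ij}(x),$$

with $f_0$ and every factor $f_{ij}$ convex, even though each product, and hence the objective, is in general nonconvex.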