Black-box Optimization Algorithms for Regularized Least-squares Problems

We consider the problem of optimizing the sum of a smooth, nonconvex function for which derivatives are unavailable, and a convex, nonsmooth function with an easy-to-evaluate proximal operator. Of particular focus is the case where the smooth part has a nonlinear least-squares structure. We adapt two existing approaches for derivative-free optimization of nonsmooth compositions of smooth …
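
In symbols (our notation, since the excerpt is truncated), the problem class appears to be

\[
\min_{x \in \mathbb{R}^n} \; f(x) + h(x), \qquad f(x) = \tfrac{1}{2}\,\lVert r(x)\rVert_2^2,
\]

where \(r\) is a smooth residual map whose derivatives are unavailable, and \(h\) is convex and nonsmooth with a cheap proximal operator \(\operatorname{prox}_{\gamma h}(y) = \arg\min_z \{\, h(z) + \tfrac{1}{2\gamma}\lVert z - y\rVert^2 \,\}\).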

A progressive decoupling algorithm for minimizing the difference of convex and weakly convex functions over a linear subspace

Commonly, decomposition and splitting techniques for optimization problems depend strongly on convexity. Implementable splitting methods for nonconvex and nonsmooth optimization problems are scarce and often lack convergence guarantees. Among the few exceptions is the Progressive Decoupling Algorithm (PDA), which converges locally provided that convexity is elicitable. In this work, we furnish PDA with a descent …
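
To fix ideas (notation ours), the problem has the form

\[
\min_{x \in S} \; g(x) - h(x),
\]

where \(g\) is convex, \(h\) is weakly convex (i.e., \(h + \tfrac{\rho}{2}\lVert\cdot\rVert^2\) is convex for some \(\rho \ge 0\)), and \(S\) is a linear subspace.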

A four-operator splitting algorithm for nonconvex and nonsmooth optimization

In this work, we address a class of nonconvex nonsmooth optimization problems where the objective function is the sum of two smooth functions (one of which is proximable) and two nonsmooth functions (one proper, closed and proximable, and the other continuous and weakly concave). We introduce a new splitting algorithm that extends the Davis-Yin …
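
In symbols (our notation), the problem class described above is

\[
\min_x \; f_1(x) + f_2(x) + g_1(x) + g_2(x),
\]

with \(f_1, f_2\) smooth and \(f_2\) proximable, \(g_1\) proper, closed and proximable, and \(g_2\) continuous and weakly concave.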

Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization

This paper considers the problem of minimizing the sum of a smooth function and the Schatten-\(p\) norm of a matrix. We propose accelerated iteratively reweighted nuclear norm methods for solving this nonconvex low-rank minimization problem. Two major novelties characterize our approach. First, the proposed method possesses a rank identification property, enabling …
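
For orientation, the model and the generic reweighting device read as follows (our notation; the paper's exact updates may differ). The problem is

\[
\min_{X} \; f(X) + \lambda \lVert X\rVert_{S_p}^p, \qquad \lVert X\rVert_{S_p}^p = \sum_i \sigma_i(X)^p, \quad 0 < p < 1,
\]

and, since \(t \mapsto t^p\) is concave on \((0,\infty)\), a standard iteratively reweighted step majorizes the regularizer at the iterate \(X^k\) by the weighted nuclear norm \(\sum_i w_i^k\, \sigma_i(X)\) with weights \(w_i^k = p\,(\sigma_i(X^k) + \epsilon)^{p-1}\), yielding a convex subproblem per iteration.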

Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models in which the gradient of the differentiable term is only locally Lipschitz. We study the classical proximal gradient method with our novel enhanced adaptive stepsize selection. To establish convergence of the proposed algorithm, we derive a sufficient-decrease-type inequality associated with our new …
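
As a point of reference, here is a minimal sketch of the classical proximal gradient method with a standard sufficient-decrease backtracking rule, on the model \(\min_x \tfrac{1}{2}\lVert Ax-b\rVert^2 + \lambda\lVert x\rVert_1\). The backtracking rule stands in for the paper's enhanced adaptive stepsize, whose exact form the excerpt does not give; all names below are ours.

import numpy as np

def soft_threshold(y, t):
    # Proximal operator of t*||.||_1.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def proximal_gradient(A, b, lam, x0, step=1.0, shrink=0.5, iters=200):
    x = x0.copy()
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    grad = lambda z: A.T @ (A @ z - b)
    for _ in range(iters):
        g = grad(x)
        t = step
        while True:
            x_new = soft_threshold(x - t * g, t * lam)
            d = x_new - x
            # Sufficient decrease: the quadratic upper model must majorize f.
            if f(x_new) <= f(x) + g @ d + (1.0 / (2 * t)) * (d @ d):
                break
            t *= shrink
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
x_hat = proximal_gradient(A, b, lam=0.1, x0=np.zeros(100))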

Lipschitz minimization and the Goldstein modulus

Goldstein’s 1977 idealized iteration for minimizing a Lipschitz objective fixes a distance – the step size – and relies on a certain approximate subgradient. That “Goldstein subgradient” is the shortest convex combination of objective gradients at points within that distance of the current iterate. A recent implementable Goldstein-style algorithm allows a remarkable complexity analysis (Zhang …
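
In symbols (our notation), for a step size \(\delta > 0\) the Goldstein subgradient at \(x\) is the minimal-norm element

\[
g_\delta(x) \in \operatorname{argmin}\big\{\, \lVert g\rVert \;:\; g \in \operatorname{conv}\{\nabla f(y) : \lVert y - x\rVert \le \delta\} \,\big\},
\]

and the idealized iteration steps a distance \(\delta\) along \(-g_\delta(x)/\lVert g_\delta(x)\rVert\), which classically decreases \(f\) by at least \(\delta\,\lVert g_\delta(x)\rVert\).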

Subgradient Convergence Implies Subdifferential Convergence on Weakly Convex Functions: With Uniform Rates Guarantees

In nonsmooth, nonconvex stochastic optimization, understanding the uniform convergence of subdifferential mappings is crucial for analyzing stationary points of sample average approximations of the risk as they approach the population risk. Yet, characterizing this convergence remains a fundamental challenge. This work introduces a novel perspective by connecting the uniform convergence of subdifferential mappings to that of subgradient …
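
Concretely (our notation), with population risk \(F(x) = \mathbb{E}_\xi[f(x,\xi)]\) and sample average approximation \(F_n(x) = \tfrac{1}{n}\sum_{i=1}^n f(x,\xi_i)\), the question is when

\[
\sup_{x \in K} \; \mathbb{D}\big(\partial F_n(x),\, \partial F(x)\big) \longrightarrow 0
\]

over compact sets \(K\), where \(\mathbb{D}\) is a suitable distance between subdifferential sets, so that stationary points of \(F_n\) certify approximate stationarity for \(F\); the precise set distance used in the paper is not visible in this excerpt.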

Approaches to iterative algorithms for solving nonlinear equations with an application in tomographic absorption spectroscopy

In this paper we propose an approach for solving systems of nonlinear equations without computing function derivatives. Motivated by the application area of tomographic absorption spectroscopy, a highly nonlinear problem with coupled variables, we consider a situation where a straightforward translation to a fixed point problem is not possible because the operators that represent the …
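
For context (our notation), the underlying task is to solve \(F(x) = 0\) for a map \(F:\mathbb{R}^n \to \mathbb{R}^n\); the straightforward translation the abstract refers to would recast this as the fixed point problem

\[
x = T(x), \qquad T(x) := x - \lambda F(x), \quad \lambda > 0,
\]

which is precisely the reformulation that is unavailable in the setting considered here.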

Understanding the Douglas-Rachford splitting method through the lenses of Moreau-type envelopes

We analyze the Douglas-Rachford splitting method for weakly convex optimization problems via the Douglas-Rachford envelope, a merit function akin to the Moreau envelope. First, we use epi-convergence techniques to show that this envelope approximates the original objective function in the sense of epigraphs. Second, we show how global convergence and local linear convergence rates for …
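
As a concrete reference point, below is a minimal sketch of the Douglas-Rachford iteration on the toy convex model \(\min_x \tfrac{1}{2}\lVert Ax-b\rVert^2 + \lambda\lVert x\rVert_1\) (the paper treats the weakly convex case; all names here are ours). Recall the Moreau envelope \(e_\gamma f(x) = \min_z \{\, f(z) + \tfrac{1}{2\gamma}\lVert z-x\rVert^2 \,\}\), of which the Douglas-Rachford envelope is the splitting-adapted analogue.

import numpy as np

def douglas_rachford(A, b, lam, gamma=1.0, iters=300):
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    M = np.eye(n) + gamma * AtA   # prox of gamma*f solves M z = y + gamma*Atb
    s = np.zeros(n)               # governing sequence
    for _ in range(iters):
        x = np.linalg.solve(M, s + gamma * Atb)     # prox of gamma*f at s
        r = 2 * x - s                               # reflection
        z = np.sign(r) * np.maximum(np.abs(r) - gamma * lam, 0.0)  # prox of gamma*g
        s = s + z - x                               # Douglas-Rachford update
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
b = A @ (rng.standard_normal(80) * (rng.random(80) < 0.1))
x_hat = douglas_rachford(A, b, lam=0.1)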

Second-Order Strong Optimality and Second-Order Duality for Nonsmooth Constrained Multiobjective Fractional Programming Problems

This paper investigates a constrained nonsmooth multiobjective fractional programming problem (NMFP) in real Banach spaces. It derives a quotient calculus rule for computing first- and second-order Clarke derivatives of fractional functions involving locally Lipschitz functions. A novel second-order Abadie-type regularity condition is presented, defined with the help of the Clarke directional derivative and the …
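
In generic form (our notation), NMFP can be written as

\[
\min_{x \in C} \; \left( \frac{f_1(x)}{g_1(x)}, \, \dots, \, \frac{f_r(x)}{g_r(x)} \right) \quad \text{subject to} \quad h_j(x) \le 0, \;\; j = 1, \dots, m,
\]

with locally Lipschitz data \(f_i, g_i, h_j\) on a real Banach space and \(g_i > 0\) on the feasible set.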