The cost of nonconvexity in deterministic nonsmooth optimization

We study the impact of nonconvexity on the complexity of nonsmooth optimization, emphasizing objectives such as piecewise linear functions, which may not be weakly convex. We focus on a dimension-independent analysis, slightly modifying a black-box algorithm of Zhang et al. that approximates an $\epsilon$-stationary point of any directionally differentiable Lipschitz objective using $O(\epsilon^{-4})$ calls … Read more
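
A note on the stationarity notion at play: for nonsmooth nonconvex $f$, classical $\epsilon$-stationarity can be intractable to verify or reach in finite time, so this line of work (Zhang et al. and follow-ups) typically targets Goldstein $(\delta,\epsilon)$-stationarity, i.e. a point $\bar x$ with

\[
\min\{\|g\| : g \in \operatorname{conv} \partial f(B_\delta(\bar x))\} \le \epsilon,
\]

where $\partial f$ denotes the Clarke subdifferential and $B_\delta(\bar x)$ the ball of radius $\delta$ around $\bar x$.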

The projective exact penalty method for general constrained optimization

A new projective exact penalty function method is proposed for the equivalent reduction of constrained optimization problems to nonsmooth unconstrained ones. In the method, the original objective function is extended to infeasible points by summing its value at the projection of the infeasible point onto the feasible set with the distance to that projection. The … Read more
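
A minimal sketch of the construction under NumPy, with `f` and a user-supplied projection `project` onto the feasible set as hypothetical placeholders (not names from the paper):

```python
import numpy as np

def extended_objective(f, project, x):
    # Extend f to an infeasible x: evaluate f at the projection p of x
    # onto the feasible set and add the distance ||x - p|| as the penalty.
    p = project(x)
    return f(p) + np.linalg.norm(x - p)

# Toy usage: the constrained problem min f(x) over the unit ball becomes
# unconstrained minimization of the extension.
f = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
print(extended_objective(f, project, np.array([3.0, 0.0])))  # f([1, 0]) + 2 = 3.0
```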

AN-SPS: Adaptive Sample Size Nonmonotone Line Search Spectral Projected Subgradient Method for Convex Constrained Optimization Problems


Finite convergence of the inexact proximal gradient method to sharp minima

Attractive properties of subgradient methods, such as robust stability and linear convergence, have been emphasized when they are used to solve nonsmooth optimization problems with sharp minima [12, 13]. In this letter we extend the robustness results to composite convex models and show that the basic proximal gradient algorithm in the presence of a … Read more
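
For reference, a minimal (exact) proximal gradient step for the composite model $\min_x f(x) + \lambda \|x\|_1$, with the $\ell_1$ term chosen purely for illustration; the letter's inexactness and sharp-minima analysis is not reproduced here:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(grad_f, x, step, lam):
    # Forward gradient step on the smooth part f, then the proximal
    # step on the nonsmooth part lam * ||.||_1.
    return soft_threshold(x - step * grad_f(x), step * lam)
```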

On the first order optimization methods in Deep Image Prior

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness is mostly related to the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework which eliminates the dependency on the training set by considering the structure of a neural network as a handcrafted prior … Read more
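
To make the framework concrete, a hedged sketch of the DIP energy in PyTorch; `net` (an untrained image-generating network), the fixed random input `z`, and the corrupted image `x0` are assumed placeholders:

```python
import torch

def dip_energy(net, z, x0):
    # Deep Image Prior energy: fit the output f_theta(z) of an untrained
    # network, with z held fixed, to the corrupted observation x0; the
    # architecture itself plays the role of the prior.
    return torch.sum((net(z) - x0) ** 2)

# A first-order optimizer (e.g. Adam) then minimizes the energy over the
# network parameters theta:
#   opt = torch.optim.Adam(net.parameters(), lr=1e-2)
#   for _ in range(n_iters):
#       opt.zero_grad(); dip_energy(net, z, x0).backward(); opt.step()
```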

On Optimal Universal First-Order Methods for Minimizing Heterogeneous Sums

This work considers minimizing a convex sum of functions, each with potentially different structure ranging from nonsmooth to smooth, Lipschitz to non-Lipschitz. Nesterov’s universal fast gradient method is an optimal black-box first-order method for minimizing a single such function, taking advantage of whatever continuity structure is present without requiring prior knowledge of it. In this paper, we show … Read more
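
A hedged formalization of the heterogeneity in question: in the universal-methods literature, each summand $f_i$ is typically assumed convex with Hölder-continuous (sub)gradient,

\[
\|\nabla f_i(x) - \nabla f_i(y)\| \le L_i \|x - y\|^{\nu_i}, \qquad \nu_i \in [0, 1],
\]

so that $\nu_i = 1$ recovers $L_i$-smoothness while $\nu_i = 0$ covers nonsmooth Lipschitz-type behavior, with different exponents allowed for different summands.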

An Improved Unconstrained Approach for Bilevel Optimization

In this paper, we focus on the nonconvex-strongly-convex bilevel optimization problem (BLO). In this BLO, the objective function of the upper-level problem is nonconvex and possibly nonsmooth, and the lower-level problem is smooth and strongly convex with respect to the underlying variable $y$. We show that the feasible region of BLO is a Riemannian manifold. … Read more
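
For reference, the standard form of such a problem (notation assumed here, not taken from the paper) is

\[
\min_{x} \; F(x, y^*(x)) \quad \text{s.t.} \quad y^*(x) = \operatorname*{argmin}_{y} \; g(x, y),
\]

where the upper-level objective $F$ is nonconvex and possibly nonsmooth, and the lower-level objective $g(x, \cdot)$ is smooth and strongly convex, so the minimizer $y^*(x)$ is unique and well defined.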

An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems

This work presents an adaptive superfast proximal augmented Lagrangian (AS-PAL) method for solving linearly-constrained smooth nonconvex composite optimization problems. Each iteration of AS-PAL inexactly solves a possibly nonconvex proximal augmented Lagrangian (AL) subproblem, obtained by an aggressive/adaptive choice of prox stepsize aimed at substantially improving computational performance, followed by a full Lagrangian … Read more
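
As a hedged reminder of the general template (notation assumed): for a linearly constrained problem $\min_x f(x) + h(x)$ subject to $Ax = b$, a proximal AL subproblem at iterate $x_{k-1}$ takes the form

\[
\min_x \; \lambda \Big[ f(x) + h(x) + \langle p, Ax - b \rangle + \tfrac{c}{2}\|Ax - b\|^2 \Big] + \tfrac{1}{2}\|x - x_{k-1}\|^2,
\]

where $p$ is the current multiplier, $c$ the penalty parameter, and $\lambda$ the prox stepsize that AS-PAL chooses aggressively and adaptively.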

Convergence to a second-order critical point of composite nonsmooth problems by a trust region method

An algorithm for finding a first- and second-order critical point of composite nonsmooth problems is proposed in this paper. For smooth problems, algorithms for finding such a point usually utilize so-called negative-curvature directions. Here, the method recently proposed for nonlinear semidefinite problems by the same author is extended for solving general … Read more
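
To illustrate the notion invoked for smooth problems, a minimal NumPy sketch of extracting a negative-curvature direction from a Hessian via its smallest eigenpair (practical trust-region codes typically use cheaper iterative schemes):

```python
import numpy as np

def negative_curvature_direction(H, tol=1e-8):
    # A direction d with d^T H d < 0 certifies failure of the second-order
    # optimality condition; the eigenvector for the smallest eigenvalue
    # gives the most negative curvature available.
    w, V = np.linalg.eigh(H)   # eigenvalues in ascending order
    if w[0] < -tol:
        return V[:, 0]         # d with d^T H d = w[0] < 0
    return None                # H is (numerically) positive semidefinite
```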

Multi-fidelity robust controller design with gradient sampling

Robust controllers that stabilize dynamical systems even under disturbances and noise are often formulated as solutions of nonsmooth, nonconvex optimization problems. While methods such as gradient sampling can handle the nonconvexity and nonsmoothness, the costs of evaluating the objective function may be substantial, making robust control challenging for dynamical systems with high-dimensional state spaces. In … Read more
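
A minimal sketch of the core gradient sampling step referenced here, assuming NumPy/SciPy and an objective differentiable almost everywhere; the paper's multi-fidelity machinery is not shown:

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_direction(grad, x, radius=1e-2, m=10, rng=None):
    # Sample gradients at nearby points (here, on a sphere of the sampling
    # radius, for simplicity), then return the negative of the minimum-norm
    # element of their convex hull, found by a small QP over the simplex.
    rng = rng or np.random.default_rng(0)
    pts = [x]
    for _ in range(m):
        u = rng.normal(size=x.shape)
        pts.append(x + radius * u / np.linalg.norm(u))
    G = np.array([grad(p) for p in pts])
    n = len(pts)
    res = minimize(lambda lam: np.dot(G.T @ lam, G.T @ lam),
                   np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda lam: lam.sum() - 1.0})
    return -(G.T @ res.x)  # approximate steepest-descent direction
```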