Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at …
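For context, the iteration in question is the classical subgradient step \(x_{k+1} = x_k - h_k g_k\) with \(g_k \in \partial f(x_k)\). Below is a minimal sketch in Python; the objective \(f(x) = \|x\|_1\), the step-size schedule, and the function names are illustrative choices, not taken from the paper.

```python
import numpy as np

def subgradient_method(subgrad, x0, step, num_iters):
    """Basic subgradient iteration x_{k+1} = x_k - h_k * g_k.
    Returns the last iterate, whose convergence the paper analyzes."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = subgrad(x)                  # any subgradient at x (bounded by assumption)
        x = x - step(k) * g
    return x

# Illustrative nonsmooth convex objective f(x) = ||x||_1, subgradient sign(x).
x_last = subgradient_method(
    subgrad=np.sign,
    x0=[3.0, -2.0],
    step=lambda k: 1.0 / np.sqrt(k + 1),   # classical O(1/sqrt(k)) schedule
    num_iters=1000,
)
print(x_last)
```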

Inexact Direct-Search Methods for Bilevel Optimization Problems

In this work, we introduce new direct search schemes for the solution of bilevel optimization (BO) problems. Our methods rely on a fixed accuracy black box oracle for the lower-level problem, and deal both with smooth and potentially nonsmooth true objectives. We thus analyze for the first time in the literature direct search schemes in …
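Concretely, direct-search methods poll trial points around the incumbent without using derivatives. The sketch below is a generic coordinate direct search driven by a fixed-accuracy (noisy) oracle standing in for the lower-level black box; it is a simplified illustration, not the bilevel schemes introduced in the paper, and `f_approx` is a hypothetical stand-in.

```python
import numpy as np

def direct_search(f_approx, x0, delta=1.0, delta_min=1e-6, max_iters=500):
    """Generic coordinate direct search: poll x +/- delta * e_i, accept the
    first improving point, otherwise shrink the mesh size delta."""
    x = np.asarray(x0, dtype=float)
    fx = f_approx(x)
    dirs = np.vstack([np.eye(x.size), -np.eye(x.size)])
    for _ in range(max_iters):
        for d in dirs:
            y = x + delta * d
            fy = f_approx(y)
            if fy < fx:                 # simple decrease test
                x, fx = y, fy
                break
        else:
            delta *= 0.5                # unsuccessful poll: refine the mesh
            if delta < delta_min:
                break
    return x

# Hypothetical fixed-accuracy oracle: true objective plus bounded error.
rng = np.random.default_rng(0)
f_approx = lambda x: float(np.sum(np.abs(x))) + 1e-4 * rng.uniform(-1.0, 1.0)
print(direct_search(f_approx, x0=[2.0, -1.5]))
```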

On the Computation of Restricted Normal Cones

Restricted normal cones are of interest, for instance, in the theory of local error bounds, where they have recently been used to characterize the existence of a constrained Lipschitzian error bound. In this paper, we establish relations between two concepts for restricted normals. The first of these concepts was introduced in the late …
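As background (the restricted variants studied in the paper refine this), the classical normal cone to a convex set \(C\) at \(\bar x \in C\) is
\[
N_C(\bar x) \;=\; \{\, v \;:\; \langle v,\, x - \bar x \rangle \le 0 \ \text{ for all } x \in C \,\},
\]
and restricted normal cones, roughly speaking, retain only those normals generated relative to a second, restricting set; the precise definitions compared in the paper are elided in the excerpt above.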

Sharpness and well-conditioning of nonsmooth convex formulations in statistical signal recovery

We study a sample complexity vs. conditioning tradeoff in modern signal recovery problems where convex optimization problems are built from sampled observations. We begin by introducing a set of condition numbers related to sharpness in \(\ell_p\) or Schatten-\(p\) norms (\(p\in[1,2]\)) based on nonsmooth reformulations of a class of convex optimization problems, including sparse recovery, …
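For orientation, sharpness of the kind referenced here is typically formalized as a linear lower bound on suboptimality in terms of distance to the solution set \(X^*\) (the exact constants and norms used in the paper may differ):
\[
f(x) - \min_{z} f(z) \;\ge\; \gamma \, \operatorname{dist}_p(x, X^*) \qquad \text{for all } x,
\]
with the sharpness constant \(\gamma > 0\) entering the associated condition numbers.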

Variational Theory and Algorithms for a Class of Asymptotically Approachable Nonconvex Problems

We investigate a class of composite nonconvex functions, where the outer function is the sum of univariate extended-real-valued convex functions and the inner function is the limit of difference-of-convex functions. A notable feature of this class is that the inner function may fail to be locally Lipschitz continuous. It covers a range of important yet …
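In symbols, the class described above consists of compositions of the form (notation ours, chosen to match the verbal description):
\[
F(x) \;=\; \sum_{i=1}^{m} \varphi_i\big(c_i(x)\big), \qquad c_i(x) \;=\; \lim_{k\to\infty} \big(g_{i,k}(x) - h_{i,k}(x)\big),
\]
where each \(\varphi_i\) is univariate, convex, and extended-real-valued, each \(g_{i,k}\) and \(h_{i,k}\) is convex, and the inner limits \(c_i\) need not be locally Lipschitz continuous.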

A new single-layer inverse-free fixed-time dynamical system for absolute value equations

In this technical note, a novel single-layer inverse-free fixed-time dynamical system (SIFDS) is proposed to address absolute value equations. The proposed SIFDS directly employs the coefficient matrix and the absolute value equation function, aiming to circumvent matrix inversion and achieve fixed-time convergence. The equilibrium of the proposed SIFDS is proved to be the unique solution …
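Absolute value equations are usually written as \(Ax - |x| = b\), with \(|x|\) taken componentwise. The sketch below integrates a generic inverse-free residual flow \(\dot x = -\rho\,(Ax - |x| - b)\) by forward Euler; this is an illustrative dynamical-system approach, not the paper's SIFDS, and it carries no fixed-time guarantee.

```python
import numpy as np

def ave_residual(A, b, x):
    """Residual of the absolute value equation Ax - |x| = b."""
    return A @ x - np.abs(x) - b

def residual_flow(A, b, x0, rho=1.0, dt=1e-3, num_steps=20000):
    """Forward-Euler integration of dx/dt = -rho * (Ax - |x| - b).
    Inverse-free (only matrix-vector products), but merely asymptotic:
    the paper's SIFDS is designed to converge in *fixed* time instead."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        x = x - dt * rho * ave_residual(A, b, x)
    return x

# Small instance; singular values of A exceeding 1 guarantee a unique solution.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
print(residual_flow(A, b, x0=np.zeros(2)))   # approaches x_true
```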

Behavior of Newton-type methods near critical solutions of nonlinear equations with semismooth derivatives

With singular solutions of smooth reformulations of complementarity problems in mind, which arise unavoidably whenever the solution in question violates strict complementarity, we study the behavior of Newton-type methods near singular solutions of nonlinear equations, assuming that the operator of the equation possesses a strongly semismooth derivative but is not necessarily twice differentiable. These smoothness restrictions …
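A textbook illustration of the slowdown at singular roots (here for a smooth scalar equation, outside the semismooth setting of the paper): for \(F(x) = x^2\), the root \(x^* = 0\) has \(F'(x^*) = 0\), and Newton's update degenerates to \(x_{k+1} = x_k/2\), giving linear convergence with rate \(1/2\) instead of the usual quadratic rate.

```python
def newton(F, dF, x0, num_iters=8):
    """Plain Newton iteration x_{k+1} = x_k - F(x_k) / F'(x_k)."""
    x = x0
    for k in range(num_iters):
        x = x - F(x) / dF(x)
        print(k, x)
    return x

# Singular root: F(x) = x^2 has F'(0) = 0.  Iterates halve each step:
# 0.5, 0.25, 0.125, ... -- linear, not quadratic, convergence.
newton(F=lambda x: x**2, dF=lambda x: 2 * x, x0=1.0)
```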

Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the …
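As a primitive, importance sampling lets one estimate expectations under a density known only up to its normalizer, which is exactly the intractable-normalizer obstacle mentioned above. Below is a self-normalized importance sampling sketch; the Gaussian target/proposal pair is a toy choice, and the adaptive surrogation machinery of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def snis(h, log_target, sample_proposal, log_proposal, n=100_000):
    """Self-normalized importance sampling estimate of E_target[h(X)]
    when log_target is known only up to an additive constant (i.e. the
    target's normalizer is intractable)."""
    xs = sample_proposal(n)
    log_w = log_target(xs) - log_proposal(xs)  # unnormalized log-weights
    w = np.exp(log_w - log_w.max())            # stabilize before normalizing
    w /= w.sum()
    return float(np.sum(w * h(xs)))

# Toy check: unnormalized N(0,1) target, N(0,4) proposal; E[X^2] should be ~1.
est = snis(
    h=lambda x: x**2,
    log_target=lambda x: -0.5 * x**2,              # normalizer dropped
    sample_proposal=lambda n: rng.normal(0.0, 2.0, n),
    log_proposal=lambda x: -0.5 * (x / 2.0) ** 2,  # constants cancel in SNIS
)
print(est)
```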

The complexity of first-order optimization methods from a metric perspective

A central tool for understanding first-order optimization algorithms is the Kurdyka-Łojasiewicz (KL) inequality. Standard approaches to such methods rely crucially on this inequality to leverage sufficient decrease conditions involving gradients or subgradients. However, the KL property fundamentally concerns not subgradients but rather “slope”, a purely metric notion. By highlighting this view, and avoiding any use of …
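The metric notion in question is the (descent) slope
\[
|\nabla f|(x) \;=\; \limsup_{y \to x,\; y \neq x} \frac{\big(f(x) - f(y)\big)^{+}}{d(x, y)},
\]
which needs no linear structure; in this language the KL property asks for a desingularizing \(\varphi\) such that \(|\nabla(\varphi \circ (f - \inf f))|(x) \ge 1\) whenever \(0 < f(x) - \inf f < r\). (This is the standard metric formulation; the paper's exact conventions may differ.)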

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible \(\epsilon\)-stationary point at a convergence rate of \(O(\epsilon^{-4})\) without relying on compactness or a constraint qualification (CQ). When CQ holds, this convergence is measured by …
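For reference, one common formalization (the paper's own accuracy measure is elided in the excerpt above) calls \(x\) \(\epsilon\)-stationary for minimizing \(f\) over a feasible region \(\Omega\) when
\[
\operatorname{dist}\big(0,\; \partial f(x) + N_{\Omega}(x)\big) \;\le\; \epsilon,
\]
i.e. some generalized subgradient of the objective, corrected by a normal to \(\Omega\), has norm at most \(\epsilon\); relaxed “near-stationarity” variants are typical when \(f\) is nonsmooth and nonconvex.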