Stochastic trust-region and direct-search methods: A weak tail bound condition and reduced sample sizing

Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization which leads to a reduction in the number of samples and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially non-smooth function whose values can only be estimated via stochastic …
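As a concrete illustration of the kind of method described, here is a minimal sketch of a stochastic direct-search loop in which function values are estimated by averaging noisy samples and a trial point is accepted only under sufficient decrease. The sample size, forcing function, and all names below are illustrative assumptions, not the paper's reduced sample sizing or its tail-bound condition.

```python
# A minimal stochastic direct-search sketch, assuming noisy function
# evaluations that are averaged into estimates (all parameters are
# illustrative, not the paper's).
import numpy as np

def estimate(f_noisy, x, n_samples, rng):
    """Average n_samples noisy evaluations to estimate f(x)."""
    return np.mean([f_noisy(x, rng) for _ in range(n_samples)])

def stochastic_direct_search(f_noisy, x0, alpha0=1.0, n_samples=30,
                             max_iter=100, theta=0.5, gamma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x0, dtype=float), alpha0
    d = len(x)
    # Coordinate directions and their negatives form a positive spanning set.
    dirs = np.vstack([np.eye(d), -np.eye(d)])
    for _ in range(max_iter):
        fx = estimate(f_noisy, x, n_samples, rng)
        improved = False
        for v in dirs:
            trial = x + alpha * v
            # Accept only under sufficient decrease, with forcing term alpha**2.
            if estimate(f_noisy, trial, n_samples, rng) < fx - alpha**2:
                x, improved = trial, True
                break
        alpha = gamma * alpha if improved else theta * alpha
    return x

# Example: minimize E[ ||x||^2 + noise ].
f = lambda x, rng: np.dot(x, x) + 0.01 * rng.standard_normal()
print(stochastic_direct_search(f, [2.0, -1.5]))
```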

Retraction-Based Direct Search Methods for Derivative-Free Riemannian Optimization

Direct search methods represent a robust and reliable class of algorithms for solving black-box optimization problems. In this paper, we explore the application of these strategies to Riemannian optimization, wherein minimization is to be performed with respect to variables restricted to lie on a manifold. More specifically, we consider classic and line search extrapolated variants …
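A minimal sketch of a retraction-based direct-search poll step on the unit sphere, assuming the metric projection x/||x|| as the retraction; the tangent-basis construction, sufficient-decrease rule, and parameters are illustrative choices, not the paper's variants.

```python
# Direct search on the sphere: poll along a positive spanning set of
# the tangent space, map trial steps back with a retraction.
import numpy as np

def retract(x, v):
    """Retraction on the sphere: move in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def tangent_basis(x):
    """Orthonormal basis of the tangent space at x (orthogonal complement of x)."""
    q, _ = np.linalg.qr(np.column_stack([x, np.eye(len(x))]))
    return q[:, 1:len(x)]

def riemannian_direct_search(f, x0, alpha=1.0, theta=0.5, max_iter=200):
    x = np.asarray(x0, float); x /= np.linalg.norm(x)
    for _ in range(max_iter):
        B = tangent_basis(x)
        dirs = np.hstack([B, -B]).T   # positive spanning set of the tangent space
        best = min((retract(x, alpha * v) for v in dirs), key=f)
        if f(best) < f(x) - alpha**2:  # sufficient decrease
            x = best
        else:
            alpha *= theta             # shrink the step size
    return x

# Example: Rayleigh quotient on the sphere; the minimizer is the
# eigenvector for the smallest eigenvalue of A.
A = np.diag([3.0, 1.0, 2.0])
print(np.round(riemannian_direct_search(lambda x: x @ A @ x, [1.0, 1.0, 1.0]), 3))
```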

Global convergence and acceleration of projection methods for feasibility problems involving union convex sets

We prove global convergence of classical projection algorithms for feasibility problems involving union convex sets, which refer to sets expressible as the union of a finite number of closed convex sets. We present a unified strategy for analyzing global convergence by means of studying fixed-point iterations of a set-valued operator that is the union of …
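To make the setting concrete, below is a minimal sketch of alternating projections where one set is a union of convex sets: the projection onto the union is computed by projecting onto every convex piece and keeping the nearest result. The specific sets (a union of two intervals versus a box) are invented for illustration.

```python
# Alternating projections with a union convex set.
import numpy as np

def project_union(x, piece_projections):
    """Nearest-point projection onto a union: best projection over the pieces."""
    candidates = [P(x) for P in piece_projections]
    return min(candidates, key=lambda y: np.linalg.norm(y - x))

def alternating_projections(x, proj_A, proj_B, max_iter=100, tol=1e-10):
    for _ in range(max_iter):
        x_new = proj_B(proj_A(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# A = [-3,-1] U [1,3] (union of two intervals), B = [0.5, 5].
proj_A = lambda x: project_union(
    x, [lambda z: np.clip(z, -3, -1), lambda z: np.clip(z, 1, 3)])
proj_B = lambda x: np.clip(x, 0.5, 5.0)
print(alternating_projections(np.array([4.2]), proj_A, proj_B))
```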

Stable Recovery of Sparse Signals With Non-convex Weighted $r$-Norm Minus $1$-Norm

Given the measurement matrix $A$ and the observation signal $y$, the central purpose of compressed sensing is to find the sparsest solution of the underdetermined linear system $y=Ax+z$, where $x$ is the $s$-sparse signal to be recovered and $z$ is the noise vector. Zhou and Yu (2019) recently proposed a novel …
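A minimal sketch of the stated model $y = Ax + z$ together with an evaluation of a weighted $r$-norm-minus-$1$-norm penalty; the weights, the value of $r$, and the comparison below are illustrative assumptions, not Zhou and Yu's proposal or the paper's recovery analysis.

```python
# Set up y = Ax + z and evaluate a weighted r-norm-minus-1-norm penalty.
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 20, 50, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
x = np.zeros(n); x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z = 0.01 * rng.standard_normal(m)              # noise vector
y = A @ x + z                                  # observation

r, w = 0.5, np.ones(n)                         # illustrative choices

def penalty(v, eps=1e-12):
    """Weighted r-'norm' (sum of w_i |v_i|^r) minus the 1-norm."""
    return np.sum(w * (np.abs(v) + eps) ** r) - np.sum(np.abs(v))

dense = rng.standard_normal(n) * np.linalg.norm(x) / np.sqrt(n)
print("penalty at s-sparse x  :", round(penalty(x), 3))
print("penalty at dense vector:", round(penalty(dense), 3))
```

On typical draws the penalty is markedly smaller at the sparse vector than at a dense vector of comparable norm, which is the behaviour such non-convex regularizers are designed to exploit.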

Unmatched Preconditioning of the Proximal Gradient Algorithm

This work addresses the resolution of penalized least-squares problems using the proximal gradient algorithm (PGA). It is known that PGA can be accelerated by preconditioning strategies. However, typical effective choices of preconditioners may correspond to intricate matrices that are not easily inverted, and lead to an increased complexity in the computation of the proximity step. …
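Below is a minimal sketch of the tension the abstract describes, under the assumption of an $\ell_1$-penalized least-squares problem: the gradient step is preconditioned by an easily inverted diagonal matrix $B$, while the proximity step is kept in the plain Euclidean metric (an "unmatched" pairing). The preconditioner and step size are illustrative choices, not the paper's scheme.

```python
# Preconditioned gradient step paired with an unmatched Euclidean prox.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unmatched_precond_pga(A, y, lam, B, step, n_iter=1000):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        # Preconditioned gradient step, but a plain Euclidean (unmatched) prox.
        x = soft_threshold(x - step * (B @ grad), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
y = rng.standard_normal(30)
# Diagonal (hence easily inverted) preconditioner from the column norms of A.
d = 1.0 / np.sum(A**2, axis=0)
B = np.diag(d)
step = 0.9 / np.linalg.norm(A * np.sqrt(d), 2) ** 2   # conservative step size
x_hat = unmatched_precond_pga(A, y, lam=0.5, B=B, step=step)
print("nonzeros in solution:", np.count_nonzero(x_hat))
```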

A nested primal–dual FISTA-like scheme for composite convex optimization problems

We propose a nested primal–dual algorithm with extrapolation on the primal variable suited for minimizing the sum of two convex functions, one of which is continuously differentiable. The proposed algorithm can be interpreted as an inexact inertial forward–backward algorithm equipped with a prefixed number of inner primal–dual iterations for the proximal evaluation and a “warm-start” …
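A minimal sketch of the structure described, assuming the model min 0.5||Ax-y||^2 + lam||Dx||_1: a FISTA-like outer loop whose proximal step is replaced by a prefixed number of inner dual iterations, warm-started at the dual variable from the previous outer iteration. The problem instance and all constants are illustrative, not the paper's setting.

```python
# Nested FISTA-like scheme: inexact prox via warm-started inner dual loops.
import numpy as np

def inexact_prox(z, D, lam, u, inner_iters, tau):
    """Approximate prox of lam*||D.||_1 at z by dual projected gradient,
    warm-started at the previous dual variable u."""
    for _ in range(inner_iters):
        u = np.clip(u + tau * (D @ (z - D.T @ u)), -lam, lam)
    return z - D.T @ u, u

def nested_fista(A, y, D, lam, n_outer=200, inner_iters=5):
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant, smooth part
    tau = 1.0 / np.linalg.norm(D, 2) ** 2       # inner dual step size
    x = x_prev = np.zeros(A.shape[1])
    u = np.zeros(D.shape[0]); t = 1.0
    for _ in range(n_outer):
        t_next = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        v = x + ((t - 1) / t_next) * (x - x_prev)   # inertial extrapolation
        z = v - (A.T @ (A @ v - y)) / L             # forward (gradient) step
        x_prev, t = x, t_next
        x, u = inexact_prox(z, D, lam, u, inner_iters, tau)
    return x

rng = np.random.default_rng(2)
A, y = rng.standard_normal((40, 50)), rng.standard_normal(40)
D = np.eye(50) - np.eye(50, k=1)                # simple finite differences
x_hat = nested_fista(A, y, D, lam=0.1)
print("active differences:", np.count_nonzero(np.abs(D @ x_hat) > 1e-6))
```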

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no additional computation beyond that of proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of finite-sum form. Through the tool of manifold identification from nonlinear optimization, we …
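For intuition on how a proximal step promotes structure during stochastic training, here is a sketch of generic proximal SGD with momentum on an $\ell_1$-regularized least-squares model. This is the baseline the abstract compares against, not the paper's RMDA algorithm or its variance-reduction mechanism.

```python
# Proximal SGD with momentum: the shrinkage step zeroes out coordinates,
# promoting a sparse structure in the trained weights.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_momentum(X, y, lam=0.1, lr=0.05, beta=0.9,
                      epochs=200, batch=8, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1]); m = np.zeros_like(w)
    for _ in range(epochs):
        idx = rng.choice(len(y), batch, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        m = beta * m + grad                       # heavy-ball momentum
        w = soft_threshold(w - lr * m, lr * lam)  # proximal (shrinkage) step
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 30))
w_true = np.zeros(30); w_true[:4] = [2.0, -1.0, 1.5, 0.5]
y = X @ w_true + 0.05 * rng.standard_normal(200)
print("nonzeros identified:", np.flatnonzero(prox_sgd_momentum(X, y)))
```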

An active signature method for constrained abs-linear minimization

In this paper we consider the solution of optimization tasks with a piecewise linear objective function and piecewise linear constraints. First, we state optimality conditions for that class of problems using the abs-linearization approach and prove that they can be verified in polynomial time. Subsequently, we propose an algorithm called Constrained Active Signature Method that …
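A toy illustration of the abs-linear viewpoint the abstract builds on: a piecewise linear function written with absolute values has a "signature" given by the signs of the absolute-value arguments, and fixing the signature makes the function linear on the corresponding polyhedron. The data below are invented; this is not the Constrained Active Signature Method itself.

```python
# Abs-linear function f(x) = c@x + sum_i d_i * |a_i@x + b_i| and its signature.
import numpy as np

a = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -1.0])
c = np.array([1.0, 0.5])
d = np.array([2.0, 1.0])

def f(x):
    return c @ x + d @ np.abs(a @ x + b)

def signature(x):
    """Sign pattern of the absolute-value arguments at x."""
    return np.sign(a @ x + b).astype(int)

def local_gradient(sigma):
    """On {z : sign(a@z+b) = sigma}, f is linear with this gradient."""
    return c + a.T @ (d * sigma)

x = np.array([1.0, 2.0])
s = signature(x)
print("f(x) =", f(x), " signature =", s, " local gradient =", local_gradient(s))
```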

Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable and the number of cutting planes invoked by traditional remedies is …
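A toy rendering of the multipoint idea, under the assumption that each "survey" point is refreshed by minimizing the maximum of the linear models built at all current survey points plus a proximal term; the subproblem is solved here by a generic derivative-free routine, and none of this is the paper's exact update.

```python
# A multipoint ("survey") step for a nonsmooth strongly convex function.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.max(np.abs(x)) + 0.1 * np.sum(x**2)   # nonsmooth, strongly convex

def subgrad(x):
    """One subgradient of f at x."""
    g = 0.2 * x
    i = np.argmax(np.abs(x))
    g[i] += np.sign(x[i])
    return g

def survey_step(points, mu=1.0):
    vals = [f(p) for p in points]
    subs = [subgrad(p) for p in points]
    new_points = []
    for p in points:
        # Max of all linear models from the survey, plus a proximal term at p.
        model = lambda z: max(v + g @ (z - q) for v, g, q in
                              zip(vals, subs, points)) + 0.5 * mu * np.sum((z - p)**2)
        new_points.append(minimize(model, p, method="Nelder-Mead").x)
    return new_points

pts = [np.array([2.0, 1.0]), np.array([-1.5, 2.5])]
for _ in range(20):
    pts = survey_step(pts)
print([np.round(p, 3) for p in pts])   # survey points approach the minimizer 0
```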

Analysis non-sparse recovery for non-convex relaxed $\ell_q$ minimization

This paper studies the reconstruction of signals that are sparse or nearly sparse with respect to a tight frame $D$ from underdetermined linear systems. In the paper, we propose a non-convex relaxed $\ell_q\,(0<q<1)$ minimization …
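A minimal sketch of the analysis-sparsity model with a non-convex relaxed $\ell_q$ penalty, minimized here by iteratively reweighted least squares on a smoothed surrogate; the frame $D$ (a DCT matrix), $q$, $\lambda$, and the smoothing are illustrative assumptions, not the paper's formulation or guarantees.

```python
# IRLS for min 0.5||Ax-y||^2 + lam * sum_i |(Dx)_i|^q with 0 < q < 1.
import numpy as np
from scipy.fft import dct

def irls_lq_analysis(A, y, D, q=0.5, lam=0.05, eps=1e-6, n_iter=50):
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # min-norm initialization
    for _ in range(n_iter):
        v = D @ x
        # |t|^q is majorized at v by a quadratic with these weights.
        w = q * (v**2 + eps) ** (q / 2 - 1)
        H = A.T @ A + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ y)
    return x

rng = np.random.default_rng(4)
n, m = 64, 32
D = dct(np.eye(n), axis=0, norm="ortho")         # orthonormal DCT as toy "frame"
A = rng.standard_normal((m, n)) / np.sqrt(m)
coeffs = np.zeros(n); coeffs[[2, 7, 15]] = [3.0, -2.0, 1.0]
x_true = D.T @ coeffs                            # D @ x_true is sparse
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = irls_lq_analysis(A, y, D)
print("recovery error:", round(np.linalg.norm(x_hat - x_true), 3))
```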