Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The (structured) saddle-point problem involving the infimal convolution in real Hilbert spaces arises in many areas of applied mathematics. To solve this generic problem, we develop a stochastic primal-dual splitting (PDS) algorithm with loopless variance reduction (VR). A PDS algorithm aims to overcome the well-known shortcomings of common splitting methods by solving the …
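As a point of reference for the splitting structure this abstract alludes to, here is a minimal sketch of a deterministic primal-dual splitting iteration of Chambolle-Pock type for a simple saddle-point reformulation of $\ell_1$-regularized least squares. The operator `K`, the proximal maps, and the step sizes `tau`/`sigma` are illustrative assumptions; the infimal convolution term, the stochastic gradients, and the loopless variance reduction of the paper are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_splitting(K, b, lam=0.1, n_iter=500):
    """Chambolle-Pock-type PDS for min_x lam*||x||_1 + 0.5*||Kx - b||^2,
    viewed as the saddle point min_x max_y <Kx, y> + lam*||x||_1 - (0.5*||y||^2 + <b, y>)."""
    m, n = K.shape
    x, y = np.zeros(n), np.zeros(m)
    L = np.linalg.norm(K, 2)            # operator norm of K
    tau = sigma = 0.9 / L               # step sizes satisfying tau*sigma*L^2 < 1
    x_bar = x.copy()
    for _ in range(n_iter):
        # Dual step: prox of sigma*h* with h*(y) = 0.5*||y||^2 + <b, y>
        y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        # Primal step: prox of tau*lam*||.||_1
        x_new = soft_threshold(x - tau * (K.T @ y), tau * lam)
        # Extrapolation
        x_bar = 2 * x_new - x
        x = x_new
    return x

# Usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
K = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = K @ x_true + 0.01 * rng.standard_normal(50)
x_hat = primal_dual_splitting(K, b)
```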

Singular value half thresholding algorithm for $\ell_p$ regularized matrix optimization problems

In this paper, we study the low-rank matrix optimization problem, where the penalty term is the $\ell_p~(0<p<1)$ regularization. Inspired by the good performance of the half thresholding function in sparse/low-rank recovery problems, we propose a singular value half thresholding (SVHT) algorithm to solve the $\ell_p$ regularized matrix optimization problem. The main iteration in the SVHT algorithm makes …
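For illustration, here is a minimal sketch of one singular value half thresholding step, assuming the closed-form scalar half-thresholding operator known from the $\ell_{1/2}$ regularization literature; the denoising model, the parameter `lam`, and the test matrix are assumptions, not the paper's setting.

```python
import numpy as np

def half_threshold(y, lam):
    """Scalar half-thresholding operator: minimizer of (t - y)^2 + lam * |t|^(1/2),
    using the standard closed form from the l_{1/2} regularization literature."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # below this, the prox is 0
    mask = np.abs(y) > thresh
    ym = y[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(ym) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * ym * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def svht_step(Y, lam):
    """One singular value half thresholding step for
    min_X ||X - Y||_F^2 + lam * sum_i sigma_i(X)^(1/2):
    apply the scalar operator to the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(half_threshold(s, lam)) @ Vt

# Usage: denoise a noisy low-rank matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))
X_hat = svht_step(M + 0.1 * rng.standard_normal((30, 30)), lam=1.0)
```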

Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal convergence rates are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. …
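Schematically, the contrast drawn in the abstract is between time-dependent and state-dependent damping in an inertial gradient system; the display below is only a generic illustration, and the specific closed-loop damping function $\gamma$ constructed in the paper is not reproduced here.

$$\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0 \qquad \text{(open loop: the damping coefficient depends on time)}$$

$$\ddot{x}(t) + \gamma\big(x(t),\dot{x}(t)\big)\,\dot{x}(t) + \nabla f(x(t)) = 0 \qquad \text{(closed loop: the damping coefficient depends on the state)}$$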

Fixed point continuation algorithm with extrapolation for Schatten p-quasi-norm regularized matrix optimization problems

In this paper, we consider a general low-rank matrix optimization problem modeled as a Schatten p-quasi-norm ($0<p<1$) regularized matrix optimization problem. For this nonconvex, nonsmooth, and non-Lipschitz matrix optimization problem, we first propose a fixed point continuation algorithm with extrapolation (FPCAe) based on the matrix p-thresholding operator. Second, we …
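As a rough illustration of the continuation-plus-extrapolation structure, here is a sketch in which the matrix p-thresholding operator is replaced by a soft-thresholding stand-in on the singular values (the actual operator for general $0<p<1$ is the one defined in the paper); the continuation schedule, step size `mu`, and extrapolation weight `beta` are assumptions.

```python
import numpy as np

def matrix_p_threshold(Y, lam, p):
    """Stand-in for the matrix p-thresholding operator (prox of lam * sum_i sigma_i(X)^p).
    For general 0 < p < 1 there is no simple closed form, so soft-thresholding of the
    singular values is used here only to keep the sketch runnable (p is ignored)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def fpca_e(grad_f, X0, lam_target, p=0.5, mu=1.0, beta=0.6, eta=0.25, n_inner=50):
    """Schematic fixed point continuation with extrapolation: the outer loop shrinks the
    regularization parameter toward lam_target (continuation), and the inner loop pairs
    an extrapolated gradient step with the thresholding operator."""
    X, X_prev = X0.copy(), X0.copy()
    lam = 10 * lam_target
    while lam > lam_target:
        lam = max(eta * lam, lam_target)              # continuation on lam
        for _ in range(n_inner):
            Z = X + beta * (X - X_prev)               # extrapolation (momentum) step
            X_prev, X = X, matrix_p_threshold(Z - mu * grad_f(Z), mu * lam, p)
    return X

# Usage: low-rank recovery from a noisy observation M, with f(X) = 0.5*||X - M||_F^2
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 20))
X_hat = fpca_e(lambda X: X - M, np.zeros_like(M), lam_target=0.5)
```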

An optimally fast objective-function-free minimization algorithm using random subspaces

Continuous exact relaxation and alternating proximal gradient algorithm for partial sparse and partial group sparse optimization problems

In this paper, we consider a partial sparse and partial group sparse optimization problem, where the loss function is continuously differentiable (possibly nonconvex) and the penalty term consists of two parts associated with sparsity and group sparsity. The first part is the $\ell_0$ norm of ${\bf x}$, and the second part is the $\ell_{2,0}$ …
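To make the two-part penalty concrete, here is a hedged sketch of an alternating proximal gradient scheme for a least-squares loss with an $\ell_0$ penalty on one block and an $\ell_{2,0}$ penalty on the other; the block structure, the matrices `A` and `B`, and the hard-thresholding proximal formulas are illustrative assumptions and do not reproduce the continuous exact relaxation studied in the paper.

```python
import numpy as np

def hard_threshold(v, t):
    # Prox of t * ||.||_0: keep an entry if v_i^2 > 2*t, set it to zero otherwise
    out = v.copy()
    out[v ** 2 <= 2.0 * t] = 0.0
    return out

def group_hard_threshold(v, groups, t):
    # Prox of t * ||.||_{2,0}: keep a group if ||v_g||^2 > 2*t, zero it otherwise
    out = v.copy()
    for g in groups:
        if np.sum(v[g] ** 2) <= 2.0 * t:
            out[g] = 0.0
    return out

def alternating_prox_grad(A, B, b, groups, lam1, lam2, n_iter=300):
    """Schematic alternating proximal gradient for
    min_{x,y} 0.5*||A x + B y - b||^2 + lam1*||x||_0 + lam2*||y||_{2,0}."""
    x, y = np.zeros(A.shape[1]), np.zeros(B.shape[1])
    step = 1.0 / np.linalg.norm(np.hstack([A, B]), 2) ** 2
    for _ in range(n_iter):
        r = A @ x + B @ y - b
        x = hard_threshold(x - step * (A.T @ r), step * lam1)                 # sparse block
        r = A @ x + B @ y - b
        y = group_hard_threshold(y - step * (B.T @ r), groups, step * lam2)   # group sparse block
    return x, y

# Usage: x carries entrywise sparsity, y carries sparsity over fixed groups of 5
rng = np.random.default_rng(0)
A, B = rng.standard_normal((40, 30)), rng.standard_normal((40, 20))
groups = [np.arange(i, i + 5) for i in range(0, 20, 5)]
b = rng.standard_normal(40)
x_hat, y_hat = alternating_prox_grad(A, B, b, groups, lam1=0.05, lam2=0.05)
```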

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that, for smooth convex optimization, periodically taking longer steps improves gradient descent’s state-of-the-art O(1/T) convergence guarantee by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) might be possible. Here we prove such a big-O gain, establishing gradient descent’s first accelerated convergence rate in this setting. Namely, we …
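For intuition, here is a toy implementation of gradient descent with a periodic step-size pattern containing an occasional long step; the pattern below is purely illustrative and is not one of the certified sequences from the paper or from [1].

```python
import numpy as np

def gd_long_steps(grad_f, x0, L, pattern=(1.5, 0.5, 4.0, 0.5), n_cycles=100):
    """Gradient descent whose stepsizes cycle through a fixed pattern (in units of 1/L),
    periodically taking a step longer than the classical 2/L limit.
    The pattern here is illustrative; certified patterns come from the paper's analysis."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        for h in pattern:
            x = x - (h / L) * grad_f(x)
    return x

# Usage on a smooth convex quadratic f(x) = 0.5 * x^T Q x
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
Q = M.T @ M / 20.0
L = np.linalg.norm(Q, 2)
x_final = gd_long_steps(lambda x: Q @ x, rng.standard_normal(20), L)
```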

Using orthogonally structured positive bases for constructing positive k-spanning sets with cosine measure guarantees

Positive spanning sets span a given vector space by nonnegative linear combinations of their elements. These have attracted significant attention in recent years, owing to their extensive use in derivative-free optimization. In this setting, the quality of a positive spanning set is assessed through its cosine measure, a geometric quantity that expresses how well …
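As a reminder of the quantity involved, below is a small sketch that estimates the cosine measure of a set of directions by sampling unit vectors; the Monte Carlo approach and sample size are assumptions made only to keep the example short (sampling can only over-estimate the true minimum), and the sketch is unrelated to the orthogonally structured construction studied in the paper.

```python
import numpy as np

def cosine_measure_estimate(D, n_samples=200_000, seed=0):
    """Estimate the cosine measure cm(D) = min_{||u||=1} max_{d in D} <u, d> / ||d||
    by sampling random unit vectors u (an over-estimate of the exact minimum)."""
    rng = np.random.default_rng(seed)
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # normalize each direction (column)
    U = rng.standard_normal((D.shape[0], n_samples))
    U /= np.linalg.norm(U, axis=0, keepdims=True)       # random unit vectors
    return float(np.min(np.max(Dn.T @ U, axis=0)))

# Usage: the minimal positive basis {e_1, ..., e_n, -(e_1 + ... + e_n)} in R^3
n = 3
D = np.hstack([np.eye(n), -np.ones((n, 1))])
print(cosine_measure_estimate(D))   # strictly positive, since D positively spans R^3
```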

Limited memory gradient methods for unconstrained optimization

The limited memory steepest descent method (Fletcher, 2012) for unconstrained optimization problems stores a few past gradients to compute multiple stepsizes at once. We review this method and propose new variants. For strictly convex quadratic objective functions, we study the numerical behavior of different techniques to compute new stepsizes. In particular, we introduce a method …
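For a strictly convex quadratic, the idea of storing a few past gradients to produce several stepsizes at once can be sketched as follows; forming `Q.T @ A @ Q` explicitly is a simplification for illustration only (Fletcher's limited memory steepest descent recovers the same Ritz values without accessing the Hessian directly), and the sweep length and initialization are assumptions.

```python
import numpy as np

def lmsd_quadratic(A, b, x0, m=5, n_sweeps=20):
    """Sketch of limited memory steepest descent for f(x) = 0.5*x^T A x - b^T x.
    Each sweep performs m gradient steps with stored stepsizes, then computes the
    Ritz values of A on the span of the last m gradients; their reciprocals are
    used as the stepsizes of the next sweep."""
    x = np.asarray(x0, dtype=float)
    steps = [1.0 / np.linalg.norm(A, 2)] * m        # first sweep: a safe fixed stepsize
    for _ in range(n_sweeps):
        G = []
        for alpha in steps:
            g = A @ x - b
            G.append(g)
            x = x - alpha * g                        # gradient step with stored stepsize
        Q, _ = np.linalg.qr(np.column_stack(G))      # orthonormal basis of the gradient span
        ritz = np.linalg.eigvalsh(Q.T @ A @ Q)       # Ritz values of A on that span
        steps = list(1.0 / ritz[::-1])               # next sweep: smallest stepsizes first
    return x

# Usage on a well-conditioned strictly convex quadratic
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 40))
A = M.T @ M / 40.0 + np.eye(40)
b = rng.standard_normal(40)
x_hat = lmsd_quadratic(A, b, np.zeros(40))
```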

Expected decrease for derivative-free algorithms using random subspaces

Derivative-free algorithms seek the minimum of a given function based only on function values queried at appropriate points. Although these methods are widely used in practice, their performance is known to worsen as the problem dimension increases. Recent advances in developing randomized derivative-free techniques have tackled this issue by working in low-dimensional subspaces that are …
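As a generic illustration of the subspace idea (not the specific algorithms analyzed in the paper), here is a sketch of a direct-search method that polls along directions drawn from a random low-dimensional subspace; the subspace distribution, polling set, sufficient-decrease rule, and step-size update are all assumptions.

```python
import numpy as np

def random_subspace_direct_search(f, x0, r=2, alpha0=1.0, n_iter=500, seed=0):
    """Derivative-free direct search that, at each iteration, polls f along +/- directions
    spanning a random r-dimensional subspace, expanding the step on success and
    shrinking it otherwise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, alpha, n = f(x), alpha0, x.size
    for _ in range(n_iter):
        P = rng.standard_normal((n, r)) / np.sqrt(n)     # random subspace basis
        improved = False
        for d in np.hstack([P, -P]).T:                   # poll along +/- subspace directions
            trial = x + alpha * d
            f_trial = f(trial)
            if f_trial < fx - 1e-8 * alpha ** 2:         # sufficient decrease test
                x, fx, improved = trial, f_trial, True
                break
        alpha = 2.0 * alpha if improved else 0.5 * alpha
    return x, fx

# Usage: minimize a simple smooth function using only function values
x_best, f_best = random_subspace_direct_search(lambda z: float(np.sum((z - 1.0) ** 2)), np.zeros(10))
```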