An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems

We develop an inexact primal-dual first-order smoothing framework to solve a class of non-bilinear saddle point problems with primal strong convexity. Compared with existing methods, our framework yields a significant improvement in the primal oracle complexity, while maintaining competitive dual oracle complexity. In addition, we consider the situation where the primal-dual coupling term has … Read more
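For context, a generic instance of the problem class named here (a hedged sketch; the symbols $f$, $\Phi$, $g$ and the modulus $\mu$ are illustrative, not taken from the paper) is

\[ \min_{x \in X} \max_{y \in Y} \; f(x) + \Phi(x, y) - g(y), \]

where $f$ is $\mu$-strongly convex (the primal strong convexity) and the coupling term $\Phi(x, y)$ is non-bilinear, i.e., not of the form $\langle Ax, y\rangle$.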

New characterizations of Hoffman constants for systems of linear constraints

We give a characterization of the Hoffman constant of a system of linear constraints in $\mathbb{R}^n$ relative to a reference polyhedron $R\subseteq\mathbb{R}^n$. The reference polyhedron $R$ represents constraints that are easy to satisfy, such as box constraints. In the special case $R = \mathbb{R}^n$, we obtain a novel characterization of the classical Hoffman constant. More … Read more
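As a reminder of the object being characterized (a sketch of the classical error bound; the symbol $H(A)$ and the choice of norms are illustrative assumptions, not the paper's notation): the Hoffman constant of $A \in \mathbb{R}^{m\times n}$ is the smallest $H(A)$ such that, for every $b$ with $\{u : Au \le b\} \neq \emptyset$ and every $x \in \mathbb{R}^n$,

\[ \operatorname{dist}\bigl(x, \{u \in \mathbb{R}^n : Au \le b\}\bigr) \;\le\; H(A)\, \bigl\| (Ax - b)_+ \bigr\|, \]

where $(\cdot)_+$ denotes the componentwise positive part. The relative version restricts $x$ to the reference polyhedron $R$.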

New inertial factors of the Krasnoselskii-Mann iteration

In this article, we consider the Krasnosel'skiĭ-Mann iteration for approximating a fixed point of a given non-expansive operator in real Hilbert spaces, and we study an inertial version recently proposed by Maingé. As a result, we suggest new conditions on the inertial factors that ensure weak convergence. These conditions are free of the iterates and depend on … Read more
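A minimal Python sketch of an inertial Krasnosel'skiĭ-Mann step (the operator `T`, the relaxation parameters `lam`, and the inertial factors `alpha` are illustrative placeholders; the paper's specific conditions on the factors are not reproduced here):

```python
import numpy as np

def inertial_km(T, x0, alpha, lam, num_iter=1000, tol=1e-10):
    """Inertial Krasnosel'skii-Mann iteration for a nonexpansive map T.

    T        : callable, nonexpansive operator on R^n
    x0       : starting point (numpy array)
    alpha(k) : inertial factor at iteration k
    lam(k)   : relaxation parameter in (0, 1) at iteration k
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(num_iter):
        # inertial extrapolation using the previous two iterates
        y = x + alpha(k) * (x - x_prev)
        # relaxed (Krasnosel'skii-Mann) step toward a fixed point of T
        x_next = (1.0 - lam(k)) * y + lam(k) * T(y)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x

# Toy usage: T is a simple affine contraction (hence nonexpansive), fixed point [2, -4]
T = lambda z: 0.5 * z + np.array([1.0, -2.0])
x_star = inertial_km(T, np.zeros(2), alpha=lambda k: 0.3, lam=lambda k: 0.5)
```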

A Proximal Interior Point Algorithm with Applications to Image Processing

In this article, we introduce a new proximal interior point algorithm (PIPA). This algorithm can handle convex optimization problems involving various constraints, where the objective function is the sum of a Lipschitz-differentiable term and a possibly nonsmooth one. Each iteration of PIPA involves the minimization of a merit function evaluated for decaying … Read more
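A hedged sketch of the kind of merit function such a proximal interior point scheme might minimize at iteration $k$ (the smooth term $f$, the nonsmooth term $g$, the constraints $c_i(x) \le 0$, and the decaying barrier parameter $\mu_k \downarrow 0$ are generic placeholders; the paper's exact merit function is not reproduced here):

\[ \Phi_{\mu_k}(x) \;=\; f(x) + g(x) - \mu_k \sum_{i=1}^{m} \log\bigl(-c_i(x)\bigr), \]

minimized approximately by a proximal (forward-backward-type) step, with $\mu_k$ decreasing along the iterations.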

Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives

In this paper we study $p$-order methods for unconstrained minimization of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes with and without acceleration. For the schemes without acceleration, we establish iteration complexity bounds of $\mathcal{O}\left(\epsilon^{-1/(p+\nu-1)}\right)$ for reducing the functional residual below a given $\epsilon\in (0,1)$. Assuming that $\nu$ … Read more
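For concreteness, the Hölder smoothness assumption named here, in a standard form (the constant $H_{\nu}$ and the model below are illustrative, not taken verbatim from the paper), reads

\[ \| D^{p} f(x) - D^{p} f(y) \| \;\le\; H_{\nu}\, \| x - y \|^{\nu} \quad \text{for all } x, y, \qquad \nu \in (0, 1], \]

and a typical unaccelerated $p$-order (tensor) step minimizes a regularized Taylor model of the form

\[ x_{k+1} \in \arg\min_{y} \; \sum_{i=0}^{p} \frac{1}{i!}\, D^{i} f(x_k)[y - x_k]^{i} \;+\; \frac{M}{p + \nu}\, \| y - x_k \|^{\,p+\nu}, \]

with the convention for the regularization coefficient varying across papers.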

Stability Analysis for a Class of Sparse Optimization Problems

Sparse optimization problems arise in many areas of science and engineering, such as compressed sensing, image processing, and statistical and machine learning. The $\ell_{0}$-minimization problem is one such problem and is typically used for signal recovery. The $\ell_{1}$-minimization method is a plausible approach for solving $\ell_{0}$-minimization problems, and … Read more
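For reference, the two problems contrasted here, in their standard forms (with a generic measurement matrix $A$ and observation $b$), are

\[ \min_{x} \; \|x\|_{0} \ \text{ s.t. } \ Ax = b \qquad \text{and its convex relaxation} \qquad \min_{x} \; \|x\|_{1} \ \text{ s.t. } \ Ax = b. \]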

Non-Stationary First-Order Primal-Dual Algorithms with Fast Convergence Rates

In this paper, we propose two novel non-stationary first-order primal-dual algorithms to solve nonsmooth composite convex optimization problems. Unlike existing primal-dual schemes, where the parameters are often fixed, our methods use pre-defined, dynamic parameter sequences. We prove that our first algorithm achieves an $\mathcal{O}(1/k)$ convergence rate on the primal-dual gap, and primal and … Read more
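As background, a generic first-order primal-dual update for the composite problem $\min_{x} f(x) + g(Kx)$, written with iteration-dependent parameters $(\tau_k, \sigma_k, \theta_k)$ (a sketch of the standard Chambolle-Pock template with varying parameters, not the paper's specific scheme), is

\[
\begin{aligned}
y_{k+1} &= \operatorname{prox}_{\sigma_k g^{*}}\bigl(y_k + \sigma_k K \bar{x}_k\bigr),\\
x_{k+1} &= \operatorname{prox}_{\tau_k f}\bigl(x_k - \tau_k K^{\top} y_{k+1}\bigr),\\
\bar{x}_{k+1} &= x_{k+1} + \theta_k\,(x_{k+1} - x_k).
\end{aligned}
\]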

An analysis of noise folding for low-rank matrix recovery

Previous work on low-rank matrix recovery has concentrated on scenarios in which the matrix is noise-free and the measurements are corrupted by noise. In practical applications, however, the matrix itself is usually perturbed by random noise prior to measurement. This paper investigates this scenario and shows that, for most measurement schemes utilized in … Read more
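A hedged sketch of the measurement model underlying "noise folding" (the linear measurement map $\mathcal{A}$, the pre-measurement noise $N$, and the measurement noise $z$ are generic placeholders):

\[ y \;=\; \mathcal{A}(X + N) + z \;=\; \mathcal{A}(X) + \underbrace{\mathcal{A}(N) + z}_{\text{folded noise}}, \]

so the noise $N$ on the matrix itself is "folded" through the measurement operator into the observations.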

General risk measures for robust machine learning

A wide array of machine learning problems are formulated as the minimization of the expectation of a convex loss function over some parameter space. Since the probability distribution of the data of interest is usually unknown, it is often estimated from training sets, which may lead to poor out-of-sample performance. In this work, we … Read more
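For context, the formulation described in the first sentence, and a generic distributionally robust counterpart of the kind this line of work studies (the ambiguity set $\mathcal{U}$ is an illustrative placeholder), can be written as

\[ \min_{\theta \in \Theta} \; \mathbb{E}_{\xi \sim P}\bigl[\ell(\theta; \xi)\bigr] \qquad \text{versus} \qquad \min_{\theta \in \Theta} \; \sup_{Q \in \mathcal{U}} \; \mathbb{E}_{\xi \sim Q}\bigl[\ell(\theta; \xi)\bigr]. \]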

A linearly convergent stochastic recursive gradient method for convex optimization

The stochastic recursive gradient algorithm (SARAH) [8] has attracted much interest recently. It admits a simple recursive framework for updating stochastic gradient estimates. Motivated by this, in this paper we propose SARAH-I, a variant incorporating importance sampling, for which a linear convergence rate of the sequence of distances between the iterates and the set of optima is proven under … Read more
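A minimal Python sketch of the recursive gradient estimate at the heart of SARAH (uniform sampling here; the importance-sampling variant SARAH-I studied in this paper would change the index distribution and reweight accordingly, and the step size `eta` is an illustrative placeholder):

```python
import numpy as np

def sarah_epoch(grad_i, n, x0, eta, inner_steps):
    """One SARAH epoch for f(x) = (1/n) * sum_i f_i(x).

    grad_i(i, x) : gradient of the i-th component f_i at x
    n            : number of components
    x0           : epoch starting point (numpy array)
    eta          : step size
    """
    # full gradient at the start of the epoch
    v = np.mean([grad_i(i, x0) for i in range(n)], axis=0)
    x_prev, x = x0, x0 - eta * v
    for _ in range(inner_steps):
        i = np.random.randint(n)  # uniform sampling; importance sampling would bias this draw
        # SARAH recursion: v_t = grad_i(x_t) - grad_i(x_{t-1}) + v_{t-1}
        v = grad_i(i, x) - grad_i(i, x_prev) + v
        x_prev, x = x, x - eta * v
    return x
```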