Adaptive Third-Order Methods for Composite Convex Optimization

In this paper we propose third-order methods for composite convex optimization problems in which the smooth part is a three-times continuously differentiable function with Lipschitz continuous third-order derivatives. The methods are adaptive in the sense that they do not require knowledge of the Lipschitz constant. Trial points are computed by the inexact minimization of … Read more
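
A generic member of this family computes its trial points by approximately minimizing a regularized third-order Taylor model of the smooth part; writing $\psi$ for the nonsmooth composite term (notation introduced here only for illustration), a standard form of such a subproblem is
\[ x_{k+1} \approx \arg\min_{y}\Big\{ f(x_k) + \nabla f(x_k)[y-x_k] + \tfrac{1}{2}\nabla^2 f(x_k)[y-x_k]^2 + \tfrac{1}{6}\nabla^3 f(x_k)[y-x_k]^3 + \tfrac{M_k}{24}\,\|y-x_k\|^4 + \psi(y) \Big\}, \]
where the regularization weight $M_k$ plays the role of the unknown Lipschitz constant and is adjusted adaptively (increased after rejected trial points, decreased otherwise). This is not necessarily the exact model or acceptance test used in the paper.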

Stochastic trust-region and direct-search methods: A weak tail bound condition and reduced sample sizing

Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization that reduces the number of samples required and eases algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially non-smooth function whose values can only be estimated via stochastic … Read more
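
To fix ideas, the sketch below shows a plain stochastic direct-search loop in which the objective is only accessible through a noisy oracle and every function value is a Monte Carlo estimate. The oracle, sample size, and forcing function are assumptions chosen for illustration; the paper's tail-bound condition and sample-size rule are not reproduced here.

import numpy as np

def estimate(f_noisy, x, n_samples, rng):
    # Monte Carlo estimate of f(x) built from n_samples calls to the stochastic oracle.
    return np.mean([f_noisy(x, rng) for _ in range(n_samples)])

def stochastic_direct_search(f_noisy, x0, alpha0=1.0, n_samples=5, max_iter=200, seed=0):
    # Plain stochastic direct search over a positive spanning set with a
    # sufficient-decrease test; the forcing function 0.5*alpha**2 is an illustrative choice.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    dirs = np.vstack([np.eye(x.size), -np.eye(x.size)])   # +/- coordinate directions
    for _ in range(max_iter):
        fx = estimate(f_noisy, x, n_samples, rng)
        for d in dirs:
            if estimate(f_noisy, x + alpha * d, n_samples, rng) < fx - 0.5 * alpha**2:
                x, alpha = x + alpha * d, 2.0 * alpha      # successful poll: accept and expand
                break
        else:
            alpha *= 0.5                                   # unsuccessful poll: contract
    return x

# Toy oracle: a quadratic observed with zero-mean Gaussian noise (an assumption).
f_noisy = lambda x, rng: float(x @ x + 0.1 * rng.standard_normal())
x_hat = stochastic_direct_search(f_noisy, x0=np.ones(4))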

Retraction based Direct Search Methods for Derivative Free Riemannian Optimization

Direct search methods represent a robust and reliable class of algorithms for solving black-box optimization problems. In this paper, we explore the application of those strategies to Riemannian optimization, wherein minimization is to be performed with respect to variables restricted to lie on a manifold. More specifically, we consider classic and line search extrapolated variants … Read more
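
As a concrete illustration of the retraction-based idea, the sketch below runs a plain direct-search loop on the unit sphere, polling along projected coordinate directions in the tangent space and mapping trial steps back to the manifold with the metric-projection retraction. The test problem and parameters are assumptions chosen for illustration, not the variants analyzed in the paper.

import numpy as np

def retract(x, v):
    # Metric-projection retraction on the unit sphere: step in the ambient space, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def tangent_directions(x):
    # Project the coordinate directions onto the tangent space T_x = {v : v^T x = 0};
    # together with their negatives they span T_x.
    dirs = []
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = 1.0
        v = e - (e @ x) * x
        if np.linalg.norm(v) > 1e-12:
            v /= np.linalg.norm(v)
            dirs.extend([v, -v])
    return dirs

def riemannian_direct_search(f, x0, alpha0=1.0, max_iter=300):
    # Retraction-based direct search on the sphere with a sufficient-decrease test.
    x = x0 / np.linalg.norm(x0)
    alpha = alpha0
    for _ in range(max_iter):
        for d in tangent_directions(x):
            trial = retract(x, alpha * d)
            if f(trial) < f(x) - 0.5 * alpha**2:
                x, alpha = trial, 2.0 * alpha   # successful poll: accept and expand
                break
        else:
            alpha *= 0.5                        # unsuccessful poll: contract
    return x

# Toy problem (an assumption): minimizing a Rayleigh quotient x^T A x over the unit sphere.
A = np.diag([3.0, 2.0, 1.0])
x_min = riemannian_direct_search(lambda x: x @ A @ x, np.ones(3))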

Global convergence and acceleration of projection methods for feasibility problems involving union convex sets

We prove global convergence of classical projection algorithms for feasibility problems involving union convex sets, that is, sets expressible as the union of finitely many closed convex sets. We present a unified strategy for analyzing global convergence by means of studying fixed-point iterations of a set-valued operator that is the union of … Read more
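
The key computational primitive is that projecting onto a union of convex sets amounts to projecting onto each convex piece and keeping a nearest resulting point. The sketch below uses this primitive inside the classical method of alternating projections on a toy feasibility problem; the specific sets are assumptions chosen for illustration.

import numpy as np

def project_ball(x, center, radius):
    # Euclidean projection onto a closed ball (one convex piece of the union).
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

def project_union(x, projections):
    # Projection onto a union of convex sets: project onto each piece, keep a nearest point.
    candidates = [proj(x) for proj in projections]
    return min(candidates, key=lambda p: np.linalg.norm(p - x))

def alternating_projections(x0, proj_A, proj_B, max_iter=200, tol=1e-10):
    # Classical alternating projections, with set A allowed to be a union of convex sets.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_A(x)
        x_next = proj_B(y)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy feasibility problem (an assumption): A is the union of two balls, B is the halfspace {x : x[0] >= 1}.
balls = [lambda x: project_ball(x, np.array([3.0, 0.0]), 1.0),
         lambda x: project_ball(x, np.array([-3.0, 0.0]), 1.0)]

def project_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 1.0)
    return y

x_feas = alternating_projections(np.array([0.0, 5.0]), lambda x: project_union(x, balls), project_halfspace)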

New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, … Read more
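
A minimal sketch of the idea, under assumed problem data: replace the linear inequality constraints by a smooth quadratic penalty and run stochastic gradient descent, sampling one constraint per iteration so that the per-iteration cost does not depend on the number of constraints. The penalty function, step sizes, and toy instance below are illustrative choices, not the reformulations analyzed in the paper.

import numpy as np

def penalized_sgd(A, c, b, rho=100.0, n_iter=50000, seed=0):
    # Stochastic gradient descent on a quadratic-penalty reformulation of
    #     min_x 0.5*||x - b||^2   subject to   A x <= c,
    # namely F(x) = 0.5*||x - b||^2 + rho * mean_i max(0, a_i^T x - c_i)^2.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(1, n_iter + 1):
        i = rng.integers(m)                          # sample one constraint
        viol = max(A[i] @ x - c[i], 0.0)
        g = (x - b) + 2.0 * rho * viol * A[i]        # unbiased stochastic gradient of F
        x -= g / k                                   # O(1/k) steps for the strongly convex F
    return x

# Toy instance (an assumption): project b = ones onto {x : x <= 0} via the penalty reformulation.
n = 5
A, c, b = np.eye(n), np.zeros(n), np.ones(n)
x_pen = penalized_sgd(A, c, b)   # near the constrained solution x = 0, up to a penalty-induced bias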

An inexact ADMM with proximal-indefinite term and larger stepsize

In this paper, an inexact Alternating Direction Method of Multipliers (ADMM) is proposed for solving two-block separable convex optimization problems subject to linear equality constraints. The first subproblem is solved inexactly under a relative error criterion, while the other subproblem, called the regularization problem, is solved inexactly by introducing an indefinite proximal term. Meanwhile, the … Read more
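
For reference, the textbook exact two-block ADMM that this work refines looks as follows when instantiated on a lasso problem. The instance and parameters are assumptions chosen for illustration; the inexact subproblem solves, indefinite proximal term, and enlarged stepsize studied in the paper are not reproduced here.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, d, lam=0.1, rho=1.0, n_iter=200):
    # Textbook exact two-block ADMM (scaled dual form) for
    #     min_x 0.5*||D x - d||^2 + lam*||z||_1   subject to   x - z = 0.
    _, n = D.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(D.T @ D + rho * np.eye(n))      # factor once, reuse every iteration
    Dtd = D.T @ d
    for _ in range(n_iter):
        rhs = Dtd + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-subproblem: a linear system
        z = soft_threshold(x + u, lam / rho)               # z-subproblem: prox of the l1 term
        u = u + x - z                                      # (scaled) multiplier update
    return z

# Random toy instance (an assumption for illustration).
rng = np.random.default_rng(0)
D, d = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = admm_lasso(D, d)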

A superlinearly convergent subgradient method for sharp semismooth problems

Subgradient methods comprise a fundamental class of nonsmooth optimization algorithms. Classical results show that certain subgradient methods converge sublinearly for general Lipschitz convex functions and converge linearly for convex functions that grow sharply away from solutions. Recent work has moreover extended these results to certain nonconvex problems. In this work we seek to improve the … Read more
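
One classical way to obtain the linear rate mentioned above is the Polyak-step subgradient method, which uses the known optimal value to set the step length. The sketch below applies it to the sharp convex function f(x) = ||Ax - b||_1 on a consistent system, an illustrative choice; the paper's superlinearly convergent scheme is not reproduced here.

import numpy as np

def polyak_subgradient(A, b, x0, f_star=0.0, max_iter=500, tol=1e-12):
    # Polyak-step subgradient method for f(x) = ||Ax - b||_1 with known optimal value f_star.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = A @ x - b
        fx = np.abs(r).sum()
        if fx - f_star < tol:
            break
        g = A.T @ np.sign(r)                   # a subgradient of f at x
        x = x - (fx - f_star) / (g @ g) * g    # Polyak step length
    return x

# Consistent toy system so that the optimal value is 0 (an assumption for illustration).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = A @ rng.standard_normal(10)
x_rec = polyak_subgradient(A, b, x0=np.zeros(10))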

Stable Recovery of Sparse Signals With Non-convex Weighted $r$-Norm Minus $1$-Norm

Given the measurement matrix $A$ and the observation signal $y$, the central purpose of compressed sensing is to find the sparsest solution of the underdetermined linear system $y=Ax+z$, where $x$ is the $s$-sparse signal to be recovered and $z$ is the noise vector. Zhou and Yu \cite{Zhou and Yu 2019} recently proposed a novel … Read more
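
The measurement model can be made concrete with a small synthetic instance; the Gaussian design and noise below are common illustrative choices and are not taken from the paper.

import numpy as np

def sparse_recovery_instance(m=64, n=256, s=8, noise=0.01, seed=0):
    # Toy compressed-sensing instance y = A x + z with an s-sparse signal x.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # normalized Gaussian measurement matrix
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)  # random support of size s
    x[support] = rng.standard_normal(s)
    z = noise * rng.standard_normal(m)              # additive noise vector
    return A, x, A @ x + z

A, x_true, y = sparse_recovery_instance()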

Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal-dual sliding (PDS) algorithm that computes an $\epsilon$-solution with $O((\tilde{L}/\epsilon)^{1/2})$ gradient … Read more
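
For readers unfamiliar with the setting, the baseline below is plain decentralized gradient descent, in which every local gradient evaluation is tied to a communication round through the mixing matrix of the graph. It is shown only to fix notation for the gradient-versus-communication trade-off; it is not the PDS or stochastic algorithms proposed in the paper, whose point is precisely to decouple the two complexities.

import numpy as np

def decentralized_gradient_step(X, grads, W, step):
    # One round of decentralized gradient descent: each agent averages its neighbors'
    # iterates through the mixing matrix W (one communication round) and then takes a
    # local gradient step (one local gradient evaluation).
    return W @ X - step * grads

# Toy setting (assumptions for illustration): 4 agents on a ring, each with a local
# quadratic f_i(x) = 0.5*||x - b_i||^2; the consensus minimizer is the mean of the b_i.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])      # doubly stochastic mixing matrix of the ring graph
b = np.array([[1.0], [2.0], [3.0], [4.0]])    # row i holds agent i's target
X = np.zeros((4, 1))                          # row i holds agent i's local copy of the decision variable
for _ in range(200):
    X = decentralized_gradient_step(X, X - b, W, 0.1)
# Each row of X is now close to the average of the b_i, up to the usual constant-stepsize bias.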

Affine invariant convergence rates of the conditional gradient method

We show that the conditional gradient method for the convex composite problem \[\min_x\{f(x) + \Psi(x)\}\] generates primal and dual iterates with a duality gap converging to zero provided a suitable growth property holds and the algorithm makes a judicious choice of stepsizes. The rate of convergence of the duality gap to zero ranges from sublinear … Read more
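
For context, the generalized conditional gradient step for this composite problem and the duality gap referred to above can be written in the following standard form (assuming the linear subproblem is well posed, e.g., $\Psi$ has bounded domain; the stepsize rules and growth property analyzed in the paper are not reproduced here):
\[ v_k \in \arg\min_{v}\{\langle \nabla f(x_k), v\rangle + \Psi(v)\}, \qquad x_{k+1} = x_k + \gamma_k (v_k - x_k), \quad \gamma_k \in (0,1], \]
with duality gap
\[ \mathrm{gap}(x_k) = \langle \nabla f(x_k), x_k - v_k\rangle + \Psi(x_k) - \Psi(v_k) \;\ge\; f(x_k) + \Psi(x_k) - \min_x\{f(x)+\Psi(x)\}, \]
where the inequality follows from the convexity of $f$ and the definition of $v_k$.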