Accelerating Frank-Wolfe via Averaging Step Directions

The Frank-Wolfe method is popular in sparse constrained optimization due to its low per-iteration cost. The tradeoff is that its worst-case global convergence is comparatively slow and, importantly, fundamentally slower than the rate of its continuous-time flow; that is, the convergence rate is throttled by discretization error. In this work, we consider …
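For orientation, here is a minimal NumPy sketch of Frank-Wolfe on the l1 ball. The `avg` parameter is a hypothetical running average of linear-minimization directions, included only as an illustrative stand-in for "averaging step directions"; the paper's actual scheme may differ.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, steps=200, avg=1.0):
    """Frank-Wolfe on the l1 ball.  With avg=1.0 this is the vanilla
    method; avg < 1 blends in past linear-minimization directions, a
    hypothetical reading of "averaging step directions" used here for
    illustration only."""
    x = x0.copy()
    d = np.zeros_like(x)
    for k in range(steps):
        g = grad(x)
        # LMO over the l1 ball: signed vertex at the largest |gradient| entry.
        s = np.zeros_like(x)
        i = int(np.argmax(np.abs(g)))
        s[i] = -radius * np.sign(g[i])
        d = (1.0 - avg) * d + avg * (s - x)   # (possibly averaged) step direction
        x = x + (2.0 / (k + 2)) * d           # classical step size 2/(k+2)
    return x

# Example: minimize 0.5*||Ax - b||^2 over the l1 ball of radius 1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 50)), rng.standard_normal(30)
x = frank_wolfe_l1(lambda v: A.T @ (A @ v - b), np.zeros(50))
```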

A Proximal Gradient Method for Multi-objective Optimization Problems Using Bregman Functions

In this paper, a globally convergent proximal gradient method is developed for convex multi-objective optimization problems using Bregman distances. The proposed method is free of any a priori chosen parameters and requires no ordering information about the objective functions. At every iteration of the proposed method, a subproblem is solved to find a descent direction. This …
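As background for the Bregman subproblem, here is a minimal sketch of the classical single-objective Bregman proximal gradient step with the negative-entropy kernel on the simplex, where the step has a closed form (exponentiated gradient / mirror descent). The paper's multi-objective descent-direction subproblem is more involved than this.

```python
import numpy as np

def bregman_prox_grad(grad, x0, t=0.1, steps=100):
    """Single-objective Bregman proximal gradient step: with the
    negative-entropy kernel h on the simplex, the subproblem
        argmin_x  <grad f(x_k), x> + D_h(x, x_k) / t
    has the closed form x_{k+1} proportional to x_k * exp(-t * grad f(x_k))."""
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-t * grad(x))
        x /= x.sum()                  # renormalize onto the simplex
    return x

# Example: minimize a quadratic over the probability simplex.
Q = np.diag([1.0, 2.0, 3.0])
x = bregman_prox_grad(lambda v: Q @ v, np.ones(3) / 3)
```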

Integral Global Optimality Conditions and an Algorithm for Multiobjective Problems

In this work, we propose integral global optimality conditions for multiobjective problems that are not necessarily differentiable. The integral characterizations, already known for single-objective problems, are extended to multiobjective problems via weighted-sum and weighted Chebyshev scalarizations. Using the latter scalarization, we propose an algorithm for obtaining an approximation of the weak Pareto front whose effectiveness …
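For reference, the two classical scalarizations named above, in standard form; the reference point z^* in the Chebyshev form follows the usual convention, which the abstract does not fix.

```latex
% Weighted-sum scalarization of f = (f_1, \dots, f_m):
\min_{x \in \Omega} \; \sum_{i=1}^{m} w_i f_i(x), \qquad w_i \ge 0 .
% Weighted Chebyshev scalarization (z^* a reference/ideal point,
% following the standard convention):
\min_{x \in \Omega} \; \max_{1 \le i \le m} \, w_i \bigl( f_i(x) - z_i^* \bigr).
```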

Identifiability, the KL property in metric spaces, and subgradient curves

Identifiability, and the closely related idea of partial smoothness, unify classical active set methods and more general notions of solution structure. Diverse optimization algorithms generate iterates in discrete time that are eventually confined to identifiable sets. We present two fresh perspectives on identifiability. The first distills the notion to a simple metric property, applicable not …

Duality in convex stochastic optimization

This paper studies duality and optimality conditions in the general convex stochastic optimization problems introduced by Rockafellar and Wets (1976). We derive an explicit dual problem in terms of two dual variables, one of which is the shadow price of information, while the other gives the marginal cost of a perturbation, much like in …

Generalizations of doubly nonnegative cones and their comparison

In this study, we examine various extensions of the doubly nonnegative (DNN) cone, which is frequently used in completely positive programming (CPP) to achieve a tighter relaxation than the positive semidefinite cone. To provide a tighter relaxation for generalized CPP (GCPP) as well, inner-approximation hierarchies of the generalized copositive cone are exploited to obtain …
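For reference, the standard definition of the DNN cone and the inclusions that make it a tightening of the positive semidefinite relaxation of completely positive programs:

```latex
% The doubly nonnegative cone: PSD matrices with nonnegative entries.
\mathcal{DNN}^n \;=\; \mathcal{S}^n_+ \cap \mathcal{N}^n
\;=\; \bigl\{ X \in \mathcal{S}^n : X \succeq 0,\; X_{ij} \ge 0 \bigr\}.
% It sits between the completely positive cone and the PSD cone,
% which is why replacing CP by DNN tightens the PSD relaxation:
\mathcal{CP}^n \;\subseteq\; \mathcal{DNN}^n \;\subseteq\; \mathcal{S}^n_+ .
```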

Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

This work proposes a fast algorithm, BM-Global, for nuclear-norm-regularized convex and low-rank matrix optimization problems. BM-Global efficiently decreases the objective value via low-cost steps that leverage the nonconvex but smooth Burer-Monteiro (BM) decomposition, while effectively escaping the saddle points and spurious local minima ubiquitous in the BM form, to obtain guarantees of fast convergence rates to the …
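As background, here is a minimal sketch of a plain gradient method on the Burer-Monteiro factorization X = W H^T for matrix completion. It uses the classical identity ||X||_* = min over X = W H^T of (||W||_F^2 + ||H||_F^2)/2, so the ridge terms act as a smooth surrogate for the nuclear norm; BM-Global's actual steps and saddle-escape mechanism are described in the paper.

```python
import numpy as np

def bm_matrix_completion(M, mask, r=5, lam=0.1, lr=0.01, steps=500):
    """Gradient descent on the Burer-Monteiro factorization X = W @ H.T
    for masked least squares with ridge terms (a smooth surrogate for
    the nuclear norm via ||X||_* = min (||W||_F^2 + ||H||_F^2) / 2)."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((m, r)) * 0.1
    H = rng.standard_normal((n, r)) * 0.1
    for _ in range(steps):
        G = mask * (W @ H.T - M)      # residual on observed entries only
        # Simultaneous gradient step in both factors.
        W, H = W - lr * (G @ H + lam * W), H - lr * (G.T @ W + lam * H)
    return W @ H.T

# Example: recover a rank-3 matrix from about 50% of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
mask = (rng.random(M.shape) < 0.5).astype(float)
X = bm_matrix_completion(M * mask, mask, r=5)
```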

A nearly linearly convergent first-order method for nonsmooth functions with quadratic growth

Classical results show that gradient descent converges linearly to minimizers of smooth strongly convex functions. A natural question is whether there exists a locally nearly linearly convergent method for nonsmooth functions with quadratic growth. This work designs such a method for a wide class of nonsmooth and nonconvex locally Lipschitz functions, including max-of-smooth, Shapiro’s decomposable …
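The classical linear (geometric) rate referenced in the first sentence, for an L-smooth and mu-strongly convex f:

```latex
% Gradient descent with step size 1/L on an L-smooth,
% mu-strongly convex function f:
x_{k+1} = x_k - \tfrac{1}{L}\,\nabla f(x_k)
\;\Longrightarrow\;
f(x_k) - f^\star \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,\bigl(f(x_0) - f^\star\bigr).
```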

Error Bound and Isocost Imply Linear Convergence of DCA-based Algorithms to D-stationarity

We consider a class of structured nonsmooth difference-of-convex minimization problems that can be written as the difference of two possibly nonsmooth convex functions, where the second is the maximum of finitely many smooth convex functions. We propose two proximal difference-of-convex algorithms with extrapolation for potential acceleration, which converge to a weak/standard d-stationary …
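As background, here is a minimal sketch of a proximal difference-of-convex step with extrapolation on the standard l1-minus-l2 regularized least-squares model. It illustrates the general extrapolated proximal DCA template, not the paper's two algorithms or their assumptions.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding: the prox of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pdca_e(A, b, lam=0.1, steps=300):
    """Extrapolated proximal DCA sketch for the DC model
        min 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_2,
    with g = smooth part + lam*||x||_1 and h = lam*||x||_2."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the smooth part
    t = 1.0 / L
    x_prev = x = np.zeros(n)
    for k in range(1, steps + 1):
        # Subgradient of the concave part h(x) = lam*||x||_2.
        xi = lam * x / np.linalg.norm(x) if np.any(x) else np.zeros(n)
        # Extrapolation (a capped FISTA-style momentum coefficient).
        y = x + min((k - 1) / (k + 2), 0.99) * (x - x_prev)
        # Proximal gradient step on g, linearizing h at x_k.
        x_prev, x = x, soft(y - t * (A.T @ (A @ y - b) - xi), t * lam)
    return x
```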

Improved RIP-Based Bounds for Guaranteed Performance of Two Compressed Sensing Algorithms

Iterative hard thresholding (IHT) and compressive sampling matching pursuit (CoSaMP) are two mainstream compressed sensing algorithms that use the hard thresholding operator. The guaranteed performance of the two algorithms for signal recovery has mainly been analyzed in terms of the restricted isometry property (RIP) of the sensing matrix. At present, the best known bound using RIP of order …
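For concreteness, a minimal NumPy sketch of IHT (CoSaMP follows a similar hard-thresholding pattern). The step size here is a conservative practical choice; the RIP-based bounds the abstract discusses govern when recovery is guaranteed.

```python
import numpy as np

def iht(A, b, s, steps=200, mu=None):
    """Iterative hard thresholding: a gradient step on 0.5*||Ax - b||^2
    followed by keeping only the s largest-magnitude entries."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + mu * (A.T @ (b - A @ x))         # gradient step
        keep = np.argsort(np.abs(x))[-s:]        # indices of s largest entries
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                           # hard thresholding
    return x

# Example: recover a 5-sparse signal from 40 Gaussian measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = iht(A, A @ x_true, s=5)
```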