Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first …
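As background for the GCG extension (whose update details are truncated above), here is a minimal sketch of the classical CG iteration for the strongly convex QP $\min_x \tfrac12 x^\top Q x - b^\top x$; the $\ell_1$ term and the paper's generalized steps are not modeled.

```python
import numpy as np

def conjugate_gradient(Q, b, x0=None, tol=1e-10, max_iter=1000):
    """Classical CG for min_x 0.5*x'Qx - b'x with Q symmetric positive definite.

    Only the textbook method named in the abstract's first sentence,
    not the paper's GCG variant for the l1-regularized problem.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - Q @ x          # residual = negative gradient
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Qp = Q @ p
        alpha = rs / (p @ Qp)      # exact line-search step
        x += alpha * p
        r -= alpha * Qp
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x
```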

Acceleration of the PDHGM on strongly convex subspaces

We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to the dual sequence, and the residual part of the primal …
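For reference, the unaccelerated Chambolle-Pock iteration for the saddle-point problem $\min_x \max_y \langle Kx, y\rangle + G(x) - F^*(y)$ reads, in standard notation (the paper's subspace-wise step-size rules are not shown):
\begin{align*}
x^{k+1} &= \mathrm{prox}_{\tau G}\bigl(x^k - \tau K^* y^k\bigr),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,(x^{k+1} - x^k),\\
y^{k+1} &= \mathrm{prox}_{\sigma F^*}\bigl(y^k + \sigma K \bar{x}^{k+1}\bigr).
\end{align*}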

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers, such as $\ell_1$, suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We …
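As an illustration of the idea (assuming, as is natural, that the partial regularizer leaves the $r$ largest-magnitude entries unpenalized; the exact models studied in the paper are in the truncated text), one such model takes the form
\begin{equation*}
\min_{x}\ \tfrac12\|Ax-b\|_2^2 + \lambda \sum_{i=r+1}^{n} |x_{[i]}|,
\qquad |x_{[1]}| \ge |x_{[2]}| \ge \dots \ge |x_{[n]}|,
\end{equation*}
so that only the $n-r$ smallest entries of $x$ contribute to the penalty.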

Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints

Consider convex optimization problems subject to a large number of constraints. We focus on stochastic problems in which the objective takes the form of expected values and the feasible set is the intersection of a large number of convex sets. We propose a class of algorithms that perform both stochastic gradient descent and random feasibility …
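A minimal sketch of this pattern, assuming Euclidean projections onto randomly sampled halfspaces $\{x : a_i^\top x \le b_i\}$ and a stochastic gradient oracle (the sampling scheme and step sizes here are illustrative, not the paper's):

```python
import numpy as np

def random_projection_sgd(grad_sample, A, b, x0, steps=10000):
    """Alternate a stochastic gradient step with a projection onto one
    randomly chosen halfspace a_i'x <= b_i (illustrative sketch only)."""
    x = x0.astype(float).copy()
    m = A.shape[0]
    for k in range(1, steps + 1):
        x -= (1.0 / np.sqrt(k)) * grad_sample(x)   # SGD step, diminishing step size
        i = np.random.randint(m)                   # sample one constraint
        viol = A[i] @ x - b[i]
        if viol > 0:                               # project onto the violated halfspace
            x -= (viol / (A[i] @ A[i])) * A[i]
    return x
```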

First and second order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first and second order derivative vectors and matrices that represent a …
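The reduction to abs-normal form rests on the elementary identities
\begin{equation*}
\min(a,b) = \tfrac12\bigl(a+b-|a-b|\bigr), \qquad
\max(a,b) = \tfrac12\bigl(a+b+|a-b|\bigr),
\end{equation*}
which let every min and max in an evaluation procedure be rewritten in terms of the absolute value, the only remaining source of nonsmoothness.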

An Extended Frank-Wolfe Method with “In-Face” Directions, and its Application to Low-Rank Matrix Completion

We present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. We present computational guarantees for the method that trade off efficiency in computing near-optimal solutions with upper bounds on the dimension of minimal faces of iterates. We apply our method to the low-rank …
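For context, a minimal sketch of the classical Frank-Wolfe step on the nuclear-norm ball $\{Z : \|Z\|_* \le \delta\}$, the feasible region typical of low-rank matrix completion, whose linear minimization oracle is a rank-one update built from the leading singular pair of the gradient (the paper's "in-face" directions are not shown):

```python
import numpy as np

def frank_wolfe_nuclear_ball(grad, Z0, delta, steps=500):
    """Classical Frank-Wolfe for min f(Z) over ||Z||_* <= delta.

    grad(Z) returns the gradient matrix; the LMO solution is
    -delta * u1 v1' with (u1, v1) the top singular vectors of grad(Z).
    """
    Z = Z0.astype(float).copy()
    for k in range(steps):
        G = grad(Z)
        # leading singular pair via full SVD (fine for small examples)
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        S = -delta * np.outer(U[:, 0], Vt[0, :])   # LMO solution
        gamma = 2.0 / (k + 2.0)                    # standard step size
        Z = (1 - gamma) * Z + gamma * S
    return Z
```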

Relationships between constrained and unconstrained multi-objective optimization and application in location theory

This article deals with constrained multi-objective optimization problems. The main purpose of the article is to investigate relationships between constrained and unconstrained multi-objective optimization problems. Under suitable assumptions (e.g., generalized convexity assumptions) we derive a characterization of the set of (strictly, weakly) efficient solutions of a constrained multi-objective optimization problem using characterizations of the sets …
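Schematically, and only as an illustration of the two problem classes being related (the notation and the penalty $\phi$ are assumptions, not the paper's construction), the constrained problem and an unconstrained surrogate look like
\begin{equation*}
\min_{x \in \Omega}\ \bigl(f_1(x),\dots,f_m(x)\bigr)
\quad\text{versus}\quad
\min_{x \in \mathbb{R}^n}\ \bigl(f_1(x)+\mu\,\phi(x),\dots,f_m(x)+\mu\,\phi(x)\bigr),
\end{equation*}
where $\phi$ penalizes infeasibility with respect to $\Omega$ and $\mu > 0$.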

The rate of convergence of Nesterov’s accelerated forward-backward method is actually $o(k^{-2})$

The {\it forward-backward algorithm} is a powerful tool for solving optimization problems with an {\it additively separable}, {\it smooth} + {\it nonsmooth} structure. In the convex setting, a simple but ingenious acceleration scheme developed by Nesterov has proved useful for improving the theoretical rate of convergence for the function values from the standard …
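In its common (FISTA-style) form, with $f$ smooth, $g$ nonsmooth, and step size $\lambda \le 1/L$ where $L$ is the Lipschitz constant of $\nabla f$, the accelerated forward-backward iteration reads
\begin{align*}
x_k &= \mathrm{prox}_{\lambda g}\bigl(y_k - \lambda \nabla f(y_k)\bigr),\\
t_{k+1} &= \frac{1+\sqrt{1+4t_k^2}}{2},\\
y_{k+1} &= x_k + \frac{t_k - 1}{t_{k+1}}\,(x_k - x_{k-1}).
\end{align*}
The improved $o(k^{-2})$ rate of the title is typically obtained for modified extrapolation coefficients of the form $\frac{k-1}{k+\alpha-1}$ with $\alpha > 3$.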

Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping

In a Hilbert space setting $\mathcal H$, we study the fast convergence properties as $t \to + \infty$ of the trajectories of the second-order differential equation \begin{equation*} \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla \Phi (x(t)) = g(t), \end{equation*} where $\nabla\Phi$ is the gradient of a convex continuously differentiable function $\Phi: \mathcal H \to \mathbb R$, …
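Under the usual identification $t \approx kh$ with step size $h$, one standard finite-difference discretization of this equation (illustrative; the paper's exact algorithms are in the truncated text) is
\begin{equation*}
\frac{x_{k+1} - 2x_k + x_{k-1}}{h^2} + \frac{\alpha}{kh}\,\frac{x_k - x_{k-1}}{h} + \nabla\Phi(y_k) = g_k,
\qquad y_k = x_k + \Bigl(1 - \frac{\alpha}{k}\Bigr)(x_k - x_{k-1}),
\end{equation*}
which rearranges to the inertial scheme $x_{k+1} = y_k - h^2\,\nabla\Phi(y_k) + h^2 g_k$; for $g \equiv 0$ and $\alpha = 3$ this recovers a Nesterov-type accelerated gradient method.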

A Stochastic Electricity Market Clearing Formulation with Consistent Pricing Properties

We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives and block diversification. We extend and analyze the stochastic clearing formulation proposed by Pritchard et al. (2010) in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation …
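Schematically (a sketch only; the notation and the exact penalty are assumptions, not the paper's formulation), such a stochastic clearing problem is a two-stage program of the form
\begin{equation*}
\max_{d,\; q_\omega}\ \sum_{\omega} p_\omega\,\Bigl(\mathrm{SS}_\omega(q_\omega) - \Pi_\omega(q_\omega - d)\Bigr)
\quad \text{s.t.} \quad d \in D,\; q_\omega \in Q_\omega,
\end{equation*}
where $d$ are day-ahead quantities, $q_\omega$ real-time quantities under scenario $\omega$ with probability $p_\omega$, $\mathrm{SS}_\omega$ the scenario social surplus, and $\Pi_\omega$ penalizes deviations between day-ahead and real-time schedules.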