An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization

We study a Newton-like method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$ regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model $q_k$ of the objective … Read more
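As a rough illustration of the idea (not the paper's algorithm), the sketch below applies a proximal Newton-type step to a least-squares smooth term with $\ell_1$ regularization: the piecewise quadratic model is minimized only approximately, by a fixed number of soft-thresholding (ISTA) steps. The function names, the choice of inner solver, and the unit step length are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1 (componentwise soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inexact_prox_newton(A, b, lam, n_outer=20, n_inner=10):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by successive quadratic
    # approximation: at each outer step the piecewise quadratic model
    #   q(d) = g'd + 0.5 d'H d + lam*||x + d||_1
    # is minimized only approximately, here by a fixed number of ISTA steps.
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A                          # Hessian of the smooth part
    L = np.linalg.eigvalsh(H).max()      # step-size constant for the inner solver
    for _ in range(n_outer):
        g = A.T @ (A @ x - b)            # gradient of the smooth part at x
        d = np.zeros(n)
        for _ in range(n_inner):         # inexact solve of the model
            grad_q = g + H @ d
            d = soft_threshold(x + d - grad_q / L, lam / L) - x
        x = x + d                        # unit step; a line search could be added
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
x_hat = inexact_prox_newton(A, b, lam=0.5)
```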

Large-scale optimization with the primal-dual column generation method

The primal-dual column generation method (PDCGM) is a general-purpose column generation technique that relies on the primal-dual interior point method to solve the restricted master problems. The use of this interior point method variant makes it possible to obtain suboptimal and well-centered dual solutions, which naturally stabilizes the column generation. A reduction in the number of calls … Read more
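For orientation, here is a hedged sketch of a generic column generation loop on a toy cutting-stock instance: the restricted master problem is solved exactly with SciPy's HiGHS LP backend (not with a primal-dual interior point method as in the PDCGM), and the pricing subproblem is an unbounded knapsack solved by dynamic programming. The instance, function names, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(columns, demand):
    # LP relaxation of the restricted master: min 1'x  s.t.  A x >= d, x >= 0
    A = np.array(columns, dtype=float).T            # rows: items, columns: patterns
    res = linprog(c=np.ones(A.shape[1]),
                  A_ub=-A, b_ub=-np.array(demand, dtype=float),
                  bounds=[(0, None)] * A.shape[1], method="highs")
    duals = -res.ineqlin.marginals                  # dual prices of A x >= d
    return res.x, duals

def pricing_knapsack(duals, widths, W):
    # Unbounded knapsack: pattern a maximizing duals'a subject to widths'a <= W
    value = np.zeros(W + 1)
    best = np.zeros((W + 1, len(widths)), dtype=int)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and value[cap - w] + duals[i] > value[cap]:
                value[cap] = value[cap - w] + duals[i]
                best[cap] = best[cap - w]
                best[cap, i] += 1
    return best[W], value[W]

def column_generation(widths, demand, W, max_iters=50):
    # start from the trivial single-item patterns
    columns = [np.eye(len(widths), dtype=int)[i] * (W // w) for i, w in enumerate(widths)]
    for _ in range(max_iters):
        x, duals = solve_restricted_master(columns, demand)
        pattern, val = pricing_knapsack(duals, widths, W)
        if val <= 1.0 + 1e-9:        # no pattern with negative reduced cost remains
            break
        columns.append(pattern)
    return x, columns

x, cols = column_generation(widths=[3, 5, 7], demand=[25, 20, 18], W=16)
```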

On the Coupled Continuous Knapsack Problems: Projection Onto the Volume Constrained Gibbs N-Simplex

Coupled continuous quadratic knapsack problems (CCK) are introduced in the present study. The solution of a CCK problem is equivalent to the projection of an arbitrary point onto the volume-constrained Gibbs N-simplex, which has a wide range of applications in computational science and engineering. Three algorithms are developed to … Read more
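For a closely related uncoupled special case, projection onto the volume-constrained simplex {x : x >= 0, sum(x) = V} (without upper bounds) admits a classical sort-based solution; the sketch below shows only that case and is not one of the paper's three algorithms.

```python
import numpy as np

def project_simplex(y, volume=1.0):
    # Euclidean projection of y onto {x : x >= 0, sum(x) = volume},
    # via the classical sort-and-threshold rule.
    u = np.sort(y)[::-1]                       # entries in decreasing order
    css = np.cumsum(u) - volume
    idx = np.arange(1, len(y) + 1)
    rho = np.max(np.nonzero(u - css / idx > 0)[0]) + 1   # last index kept positive
    theta = css[rho - 1] / rho                 # optimal shift (multiplier)
    return np.maximum(y - theta, 0.0)

x = project_simplex(np.array([0.4, 1.2, -0.3, 0.7]))
assert abs(x.sum() - 1.0) < 1e-12 and (x >= 0).all()
```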

Separable Approximations and Decomposition Methods for the Augmented Lagrangian

In this paper we study decomposition methods based on separable approximations for minimizing the augmented Lagrangian. In particular, we study and compare the Diagonal Quadratic Approximation Method (DQAM) of Mulvey and Ruszczyński and the Parallel Coordinate Descent Method (PCDM) of Richtárik and Takáč. We show that the two methods are equivalent for feasibility problems up … Read more
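To convey the flavour of a separable (diagonal) approximation step, the sketch below runs fully parallel coordinate descent on the feasibility objective 0.5*||Ax - b||^2, using a separable overapproximation built from the partial-separability degree omega and the coordinate-wise Lipschitz constants; it is a generic illustration, not the DQAM or PCDM of the paper.

```python
import numpy as np

def parallel_cd_feasibility(A, b, n_iters=200):
    # Fully parallel coordinate descent on f(x) = 0.5*||Ax - b||^2 using the
    # separable overapproximation
    #   f(x + h) <= f(x) + grad(x)'h + (omega/2) * sum_i L_i * h_i^2,
    # where omega is the maximum number of nonzeros in any row of A and
    # L_i = ||A[:, i]||^2 are the coordinate-wise Lipschitz constants.
    omega = int(max((np.abs(row) > 0).sum() for row in A))
    L = (A ** 2).sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        x -= grad / (omega * L)        # every coordinate updated in parallel
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 30))
b = A @ rng.standard_normal(30)
x_hat = parallel_cd_feasibility(A, b)
```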

Inexact Coordinate Descent: Complexity and Preconditioning

In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method. One of the key steps at each iteration of the algorithm is determining the update to a block of variables. Existing algorithms assume that in order to compute the update, a particular subproblem is solved exactly. … Read more
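A minimal sketch of the inexact-update idea, assuming a strongly convex quadratic objective: a random block of variables is chosen, and its subproblem is solved only approximately with a few conjugate-gradient steps. The paper's precise inexactness conditions and complexity analysis are not reproduced, and all names are illustrative.

```python
import numpy as np

def inexact_rbcd(Q, c, block_size=10, n_iters=500, inner_cg_steps=3, seed=0):
    # Randomized block coordinate descent for min 0.5*x'Qx - c'x with Q
    # symmetric positive definite.  The block subproblem
    #   Q[B, B] d = -(Qx - c)[B]
    # is solved only inexactly, here with a few conjugate-gradient steps.
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = np.zeros(n)
    for _ in range(n_iters):
        B = rng.choice(n, size=block_size, replace=False)
        r = -(Q @ x - c)[B]                  # block residual (negative block gradient)
        QBB = Q[np.ix_(B, B)]
        d = np.zeros(block_size)
        res, p = r.copy(), r.copy()
        for _ in range(inner_cg_steps):      # inexact inner solve
            if res @ res < 1e-24:
                break
            Qp = QBB @ p
            alpha = (res @ res) / (p @ Qp)
            d += alpha * p
            new_res = res - alpha * Qp
            beta = (new_res @ new_res) / (res @ res)
            res, p = new_res, new_res + beta * p
        x[B] += d
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((60, 40))
Q = M.T @ M + np.eye(40)
c = rng.standard_normal(40)
x_hat = inexact_rbcd(Q, c)
```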

A Generalized Proximal Point Algorithm and its Convergence Rate

We propose a generalized proximal point algorithm (PPA) in the generic setting of finding a zero point of a maximal monotone operator. In addition to the classical PPA, a number of benchmark operator splitting methods in the PDE and optimization literatures, such as the Douglas-Rachford splitting method, the Peaceman-Rachford splitting method, the alternating direction method of multipliers, the generalized … Read more
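The template underneath is the relaxed proximal point iteration; a hedged sketch follows, using the resolvent of the subdifferential of ||· - z||_1 (soft-thresholding around z) as a toy maximal monotone operator. The paper's generalized scheme and its operator-splitting special cases carry additional structure not shown here, and the relaxation parameter below is an illustrative choice.

```python
import numpy as np

def relaxed_ppa(resolvent, x0, gamma=1.5, lam=1.0, n_iters=100):
    # Relaxed proximal point iteration
    #   x_{k+1} = x_k + gamma * (J_{lam T}(x_k) - x_k),   gamma in (0, 2),
    # where J_{lam T} = (I + lam*T)^{-1} is the resolvent of a maximal
    # monotone operator T; the iterates approach a zero point of T.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x + gamma * (resolvent(x, lam) - x)
    return x

# toy operator: T = subdifferential of f(x) = ||x - z||_1, whose resolvent is
# soft-thresholding around z; the zero point of T (minimizer of f) is z itself
z = np.array([1.0, -2.0, 0.5])
resolvent = lambda v, lam: z + np.sign(v - z) * np.maximum(np.abs(v - z) - lam, 0.0)
x_hat = relaxed_ppa(resolvent, x0=np.zeros(3))    # converges to z
```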

The Complexity of Large-scale Convex Programming under a Linear Optimization Oracle

This paper considers a general class of iterative optimization algorithms, referred to as linear-optimization-based convex programming (LCP) methods, for solving large-scale convex programming (CP) problems. The LCP methods, covering the classic conditional gradient (CG) method (a.k.a. the Frank-Wolfe method) as a special case, can only solve a linear optimization subproblem at each iteration. In this paper, … Read more
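For concreteness, here is the classic conditional gradient iteration on a least-squares objective over the probability simplex, where the linear optimization oracle simply returns the vertex with the smallest gradient entry. The LCP framework of the paper is far more general; the objective, feasible set, and step-size rule below are illustrative choices.

```python
import numpy as np

def frank_wolfe_simplex(A, b, n_iters=200):
    # Conditional gradient (Frank-Wolfe) for
    #   min 0.5*||Ax - b||^2   s.t.   x in the probability simplex.
    # The only subproblem per iteration is linear minimization over the
    # feasible set, solved here by picking the vertex e_i with i = argmin grad.
    n = A.shape[1]
    x = np.ones(n) / n
    for k in range(n_iters):
        grad = A.T @ (A @ x - b)
        i = int(np.argmin(grad))            # linear optimization oracle
        s = np.zeros(n); s[i] = 1.0
        gamma = 2.0 / (k + 2.0)             # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 8))
b = rng.standard_normal(25)
x_hat = frank_wolfe_simplex(A, b)
```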

On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming

The augmented Lagrangian method (ALM) is a benchmark for solving the convex minimization problem with linear constraints. We consider the special case where the objective is the sum of $m$ functions without coupled variables. For solving this separable convex programming model, it is usually required to decompose the ALM subproblem at each … Read more
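A schematic of the splitting in question, written for a toy quadratic model with identity coupling matrices: all blocks of the ALM subproblem are updated simultaneously (Jacobi fashion) using the other blocks' previous values, followed by the multiplier update. This is only the uncorrected Jacobi splitting shown to fix ideas; the convergence questions raised by such splittings are exactly what motivates analyses of this kind, and all names and parameters below are illustrative.

```python
import numpy as np

def alm_full_jacobi(c_blocks, b, rho=1.0, n_iters=300):
    # Full Jacobian decomposition of the ALM subproblem for the toy model
    #   min sum_i 0.5*||x_i - c_i||^2   s.t.   sum_i x_i = b.
    # Each block is updated in parallel from the others' previous values.
    m = len(c_blocks)
    x = [np.zeros_like(c) for c in c_blocks]
    lam = np.zeros_like(b)
    for _ in range(n_iters):
        total = sum(x)
        new_x = []
        for i in range(m):
            s = total - x[i] - b                      # other blocks' sum minus b
            new_x.append((c_blocks[i] - lam - rho * s) / (1.0 + rho))
        x = new_x                                     # simultaneous block update
        lam = lam + rho * (sum(x) - b)                # multiplier update
    return x, lam

c = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
x_blocks, lam = alm_full_jacobi(c, b=np.array([1.0, 1.0]))
```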

Tail bounds for stochastic approximation

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each … Read more
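One simple way to decrease the variance of the gradient error is to grow the mini-batch over the iterations; the sketch below does exactly that for a least-squares objective. The step size, growth factor, and schedule are illustrative assumptions, not the schedules or tail bounds analysed in the paper.

```python
import numpy as np

def sgd_growing_batch(A, b, step=0.1, n_iters=30, batch0=2, growth=1.5, seed=0):
    # Stochastic gradient sketch for min (0.5/m)*||Ax - b||^2 in which the
    # mini-batch size grows geometrically, so the variance of the gradient
    # error decreases as the iterations proceed.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    batch = batch0
    for _ in range(n_iters):
        idx = rng.choice(m, size=min(int(batch), m), replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)   # noisy gradient estimate
        x -= step * g
        batch *= growth                                   # shrink the error variance
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((500, 20))
b = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(500)
x_hat = sgd_growing_batch(A, b)
```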

Universal gradient methods for convex optimization problems

In this paper, we present new methods for black-box convex minimization. They do not require prior knowledge of the actual level of smoothness of the objective function. Their only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible … Read more
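The mechanism can be sketched as a gradient method whose backtracking test includes the target accuracy, so that no smoothness level needs to be known in advance. The code below follows the spirit of the universal primal gradient scheme (the accelerated and dual variants are not shown), on a toy function with a Hölder-continuous gradient; the initial estimate and iteration count are illustrative.

```python
import numpy as np

def universal_gradient_method(f, grad, x0, eps, n_iters=200, L0=1.0):
    # Gradient step whose 'Lipschitz' estimate L is found by backtracking with
    # the inexact acceptance test
    #   f(x+) <= f(x) + <grad(x), x+ - x> + (L/2)*||x+ - x||^2 + eps/2,
    # so the only essential input is the target accuracy eps.
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(n_iters):
        g = grad(x)
        while True:
            x_new = x - g / L
            d = x_new - x
            if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d) + 0.5 * eps:
                break
            L *= 2.0                      # current smoothness estimate too optimistic
        x, L = x_new, L / 2.0             # accept and try a cheaper step next time
    return x

# toy function with a Hoelder-continuous (not Lipschitz) gradient: f(x) = ||x||^1.5
f = lambda x: np.linalg.norm(x) ** 1.5
grad = lambda x: 1.5 * np.linalg.norm(x) ** (-0.5) * x if np.linalg.norm(x) > 0 else np.zeros_like(x)
x_hat = universal_gradient_method(f, grad, x0=np.array([2.0, -1.0]), eps=1e-3)
```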