The exact worst-case convergence rate of the alternating direction method of multipliers

Recently, semidefinite programming performance estimation has been employed as a powerful tool for the worst-case performance analysis of first-order methods. In this paper, we derive new non-ergodic convergence rates for the alternating direction method of multipliers (ADMM) using performance estimation. We give examples that show the exactness of the given bounds. We …
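
To fix ideas on the method being analyzed, here is a minimal sketch of a standard scaled-form ADMM iteration applied to the lasso problem; the data, penalty parameter rho, and iteration count are illustrative assumptions, and the sketch does not reproduce the paper's worst-case analysis.

```python
# Minimal sketch of scaled-form ADMM for the lasso problem
#   min_x 0.5*||A x - b||^2 + lam*||z||_1   s.t.   x - z = 0.
# Illustrative assumptions: toy data, fixed penalty rho, fixed iteration count.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)      # reused in every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))               # x-minimization step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-step: soft-thresholding
        u = u + x - z                                                    # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
print(lasso_admm(A, b, lam=0.1)[:5])
```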

A Sparse Interior Point Method for Linear Programs arising in Discrete Optimal Transport

Discrete Optimal Transport problems give rise to very large linear programs (LPs) with a particular structure of the constraint matrix. In this paper we present an interior point method (IPM) specialized for the LPs originating from the Kantorovich Optimal Transport problem. Knowing that optimal solutions of such problems display a high degree of sparsity, we …
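
For orientation, the Kantorovich LP that such a specialized IPM targets looks as follows; in this sketch it is handed to an off-the-shelf LP solver purely to illustrate the problem structure, and the marginals mu, nu and cost matrix C are assumed toy data.

```python
# The discrete Kantorovich optimal transport LP:
#   min_P <C, P>   s.t.   P 1 = mu,  P^T 1 = nu,  P >= 0.
# Solved here with a generic LP solver only to illustrate the constraint structure.
import numpy as np
from scipy.optimize import linprog

m, n = 4, 5
rng = np.random.default_rng(1)
mu = np.full(m, 1.0 / m)                       # source marginal
nu = np.full(n, 1.0 / n)                       # target marginal
C = rng.random((m, n))                         # ground-cost matrix

# Flatten the plan P row-wise; build the marginal (equality) constraints.
A_eq = np.vstack([np.kron(np.eye(m), np.ones(n)),    # row sums equal mu
                  np.kron(np.ones(m), np.eye(n))])   # column sums equal nu
b_eq = np.concatenate([mu, nu])
res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
P = res.x.reshape(m, n)
print("optimal transport cost:", res.fun)
```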

Faster Lagrangian-based methods: a unified prediction-correction framework

Motivated by the prediction-correction framework constructed by He and Yuan [SIAM J. Numer. Anal. 50: 700-709, 2012], we propose a unified prediction-correction framework to accelerate Lagrangian-based methods. More precisely, for strongly convex optimization, the general linearized Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with the step size of the Lagrangian multiplier not …
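
As background on the kind of iteration being accelerated, here is a minimal sketch of the classical augmented Lagrangian (method of multipliers) scheme for an equality-constrained quadratic program; the problem data and penalty parameter are assumptions, and the sketch does not implement the paper's prediction-correction acceleration.

```python
# Classical augmented Lagrangian / method of multipliers for
#   min 0.5 x'Qx + q'x   s.t.   Ax = b   (illustrative data, fixed penalty rho).
import numpy as np

def augmented_lagrangian_qp(Q, q, A, b, rho=10.0, iters=100):
    lam = np.zeros(A.shape[0])                     # multiplier estimate
    for _ in range(iters):
        # x-update: exact minimization of L_rho(x, lam), a linear system for a QP.
        x = np.linalg.solve(Q + rho * A.T @ A, -q - A.T @ lam + rho * A.T @ b)
        # Multiplier update with step size rho on the primal residual Ax - b.
        lam = lam + rho * (A @ x - b)
    return x, lam

Q = np.diag([2.0, 2.0]); q = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, lam = augmented_lagrangian_qp(Q, q, A, b)
print(x, lam)
```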

Wasserstein Logistic Regression with Mixed Features

Recent work has leveraged the popular distributionally robust optimization paradigm to combat overfitting in classical logistic regression. While the resulting classification scheme displays promising performance in numerical experiments, it is inherently limited to numerical features. In this paper, we show that distributionally robust logistic regression with mixed (i.e., numerical and categorical) features, despite amounting …
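
In the purely numerical-feature setting referenced above, the Wasserstein distributionally robust logistic loss is known to admit a regularized reformulation: the empirical log-loss plus a dual-norm penalty on the weights scaled by the ball radius. A toy sketch under that assumption, with synthetic data and the 2-norm transport cost chosen for illustration:

```python
# Regularized reformulation of Wasserstein DRO logistic regression with
# numerical features only: empirical log-loss + eps * ||w|| (dual-norm penalty).
# Synthetic data and the choice of the 2-norm are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def dro_logistic_objective(w, X, y, eps):
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + eps * np.linalg.norm(w)

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + 0.1 * rng.standard_normal(100))
res = minimize(dro_logistic_objective, np.full(5, 0.01), args=(X, y, 0.05), method="L-BFGS-B")
print(res.x)
```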

Accelerating Frank-Wolfe via Averaging Step Directions

The Frank-Wolfe method is popular in sparse constrained optimization due to its low per-iteration cost. The tradeoff, however, is that its worst-case global convergence rate is comparatively slow and, importantly, fundamentally slower than its flow rate; that is to say, the convergence rate is throttled by discretization error. In this work, we consider …
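
For contrast with the averaged step directions studied in the paper, here is a minimal sketch of the standard Frank-Wolfe iteration with the usual open-loop 2/(k+2) step size, minimizing a quadratic over the probability simplex; the objective is an illustrative assumption.

```python
# Standard Frank-Wolfe on the probability simplex: the linear minimization oracle
# picks the best vertex, then a convex combination with step size 2/(k+2).
import numpy as np

def frank_wolfe_simplex(Q, c, iters=500):
    n = Q.shape[0]
    x = np.full(n, 1.0 / n)                       # start at the barycenter
    for k in range(iters):
        grad = Q @ x + c                          # gradient of 0.5 x'Qx + c'x
        s = np.zeros(n); s[np.argmin(grad)] = 1.0 # LMO over the simplex: a single vertex
        gamma = 2.0 / (k + 2.0)                   # open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6)); Q = M.T @ M + np.eye(6)
print(frank_wolfe_simplex(Q, rng.standard_normal(6)))
```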

A Proximal Gradient Method for Multi-objective Optimization Problems Using Bregman Functions

In this paper, a globally convergent proximal gradient method is developed for convex multi-objective optimization problems using Bregman distances. The proposed method is free of any a priori chosen parameters or ordering information about the objective functions. At every iteration of the proposed method, a subproblem is solved to find a descent direction. This …
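
As a single-objective building block (not the paper's multi-objective subproblem), here is a sketch of a Bregman proximal gradient step with the negative-entropy kernel on the simplex, where the step reduces to an exponentiated-gradient update; the objective and step size are assumptions.

```python
# Bregman proximal gradient (mirror descent) with the negative-entropy kernel:
#   x+ = argmin_y <grad f(x), y> + (1/step) * KL(y || x)  over the simplex,
# which has the closed-form multiplicative update below. Illustrative data only.
import numpy as np

def bregman_prox_grad_simplex(grad_f, x0, step=0.1, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-step * grad_f(x))   # exponentiated-gradient step
        x /= x.sum()                        # renormalize onto the simplex
    return x

Q = np.diag([1.0, 2.0, 3.0])
grad = lambda x: Q @ x                      # gradient of f(x) = 0.5 x'Qx
print(bregman_prox_grad_simplex(grad, np.full(3, 1.0 / 3.0)))
```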

Duality in convex stochastic optimization

This paper studies duality and optimality conditions in the general convex stochastic optimization problems introduced by Rockafellar and Wets (1976). We derive an explicit dual problem in terms of two dual variables, one of which is the shadow price of information while the other gives the marginal cost of a perturbation, much like in …
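
As a schematic of where a shadow price of information arises (standard stochastic-programming background, not the paper's general duality result), relaxing the nonanticipativity constraint produces a multiplier supported on its orthogonal complement:

```latex
% Relaxing nonanticipativity x \in \mathcal{N} yields the multiplier p, the
% shadow price of information (schematic only; regularity conditions omitted).
\[
\min_{x \in \mathcal{N}} \; \mathbb{E}\big[f(x(\omega),\omega)\big]
\;\;\ge\;\;
\sup_{p \in \mathcal{N}^{\perp}} \; \mathbb{E}\Big[\inf_{x(\omega)} \big\{ f(x(\omega),\omega)
  + \langle p(\omega), x(\omega) \rangle \big\}\Big],
\]
% where \mathcal{N} is the subspace of nonanticipative decisions.
```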

Generalizations of doubly nonnegative cones and their comparison

In this study, we examine various extensions of the doubly nonnegative (DNN) cone, which is frequently used in completely positive programming (CPP) to achieve a tighter relaxation than the positive semidefinite cone. To provide a relaxation of generalized CPP (GCPP) that is tighter than the positive semidefinite one, inner-approximation hierarchies of the generalized copositive cone are exploited to obtain …
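
For reference, the standard (non-generalized) definitions behind this comparison are recalled below; the generalized cones studied in the paper are not reproduced here.

```latex
% Doubly nonnegative cone and its place between the completely positive
% and positive semidefinite cones (standard definitions).
\[
\mathcal{DNN}^n = \{\, X \in \mathcal{S}^n : X \succeq 0,\ X \ge 0 \,\},
\qquad
\mathcal{CP}^n \subseteq \mathcal{DNN}^n \subseteq \mathcal{S}^n_+,
\]
% where \mathcal{CP}^n = conv\{ a a^\top : a \ge 0 \}, so relaxing a CPP over
% \mathcal{CP}^n to \mathcal{DNN}^n is tighter than relaxing it to \mathcal{S}^n_+.
```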

A generalized block-iterative projection method for the common fixed point problem induced by cutters

The block-iterative projections (BIP) method of Aharoni and Censor [Block-iterative projection methods for parallel computation of solutions to convex feasibility problems, Linear Algebra and its Applications 120 (1989), 165-175] is an iterative process for asymptotically finding a point in the nonempty intersection of a family of closed convex subsets. It employs orthogonal projections onto the …
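
To illustrate the flavor of block-iterative projections (equal weights, halfspace constraints; an illustrative special case, not the generalized cutter-based method of the paper): within each block, project the current point onto every set in the block and average the results.

```python
# Block-iterative projections for the convex feasibility problem defined by
# halfspaces a_i' x <= b_i: per block, project onto each set and average.
# Equal weights and unit relaxation are illustrative choices.
import numpy as np

def project_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def block_iterative_projections(A, b, blocks, x0, iters=200):
    x = x0.astype(float)
    for _ in range(iters):
        for block in blocks:                               # one block per sub-iteration
            projections = [project_halfspace(x, A[i], b[i]) for i in block]
            x = np.mean(projections, axis=0)               # equal-weight combination
    return x

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])                              # feasible set: x >= 0, x1 + x2 <= 1
print(block_iterative_projections(A, b, blocks=[[0], [1, 2]], x0=np.array([2.0, -1.0])))
```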

Convexity and continuity of specific set-valued maps and their extremal value functions

In this paper, we study several classes of set-valued maps that can be used in set-valued optimization and its applications, together with their respective maximum and minimum value functions. The definitions of these maps are based on scalar-valued, vector-valued, and cone-valued maps. Moreover, we consider the extremal value functions that are obtained when optimizing linear functionals …
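
In illustrative notation (not necessarily the paper's exact setting), for a set-valued map F and a linear functional ℓ, the extremal value functions in question take the following form:

```latex
% Extremal value functions of a set-valued map F: X \rightrightarrows Y
% obtained by optimizing a linear functional \ell over the images F(x).
\[
\varphi^{\max}_{\ell}(x) \;=\; \sup_{y \in F(x)} \langle \ell, y \rangle,
\qquad
\varphi^{\min}_{\ell}(x) \;=\; \inf_{y \in F(x)} \langle \ell, y \rangle .
\]
```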