An Algorithmic Framework of Generalized Primal-Dual Hybrid Gradient Methods for Saddle Point Problems

The primal-dual hybrid gradient method (PDHG) originates from the Arrow-Hurwicz method, and it has been widely used to solve saddle point problems, particularly in image processing. With the introduction of a combination parameter, Chambolle and Pock proposed a generalized PDHG scheme with both theoretical and numerical advantages. It has been analyzed that, except for …
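
For context, a minimal sketch of the Chambolle-Pock primal-dual iteration with combination parameter $\theta$ for the saddle point problem $\min_x \max_y \, \langle Kx, y\rangle + g(x) - f^*(y)$, with step sizes $\tau, \sigma > 0$; the generalized scheme analyzed in the paper may differ in how $\theta$ enters:
\begin{equation*}
y^{k+1} = \mathrm{prox}_{\sigma f^*}\big(y^k + \sigma K \bar{x}^k\big), \qquad
x^{k+1} = \mathrm{prox}_{\tau g}\big(x^k - \tau K^{\top} y^{k+1}\big), \qquad
\bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^k).
\end{equation*}
Setting $\theta = 0$ recovers an Arrow-Hurwicz-type step, while $\theta = 1$ is the classical choice with the $O(1/N)$ ergodic convergence guarantee.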

Parallel Scenario Decomposition of Risk Averse 0-1 Stochastic Programs

In this paper, we extend a recently proposed scenario decomposition algorithm (Ahmed, 2013) for risk-neutral 0-1 stochastic programs to the risk-averse setting. Specifically, we consider risk-averse 0-1 stochastic programs with objective functions based on coherent risk measures. Using a dual representation of a coherent risk measure, we first derive an equivalent minimax reformulation of the …
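
For reference, a sketch in generic notation of the standard dual representation that drives such reformulations (the paper's specific risk measures and decomposition details are in the full text): a coherent risk measure $\rho$ can be written as a worst-case expectation over a convex set $\mathcal{Q}$ of probability measures, so the risk-averse program becomes a minimax problem,
\begin{equation*}
\rho(Z) = \sup_{q \in \mathcal{Q}} \mathbb{E}_q[Z]
\qquad \Longrightarrow \qquad
\min_{x \in X \cap \{0,1\}^n} \rho\big(f(x,\tilde{\xi})\big)
= \min_{x \in X \cap \{0,1\}^n} \; \sup_{q \in \mathcal{Q}} \; \mathbb{E}_q\big[f(x,\tilde{\xi})\big].
\end{equation*}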

Scenario Set Partition Dual Bounds for Multistage Stochastic Programming: A Hierarchy of Bounds and a Partition Sampling Approach

We consider multistage stochastic programming problems in which the random parameters have finite support, leading to optimization over a finite scenario set. We propose a hierarchy of bounds based on partitions of the scenario set into subsets of (nearly) equal cardinality. These expected partition (EP) bounds coincide with EGSO bounds provided by Sandikci et al. …
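
A minimal sketch of how a partition-sampling bound of this kind might be estimated, in generic Python; solve_group_subproblem is a hypothetical callable returning the optimal value of the subproblem restricted to a scenario subset, and it is assumed (as for minimization problems of this type) that the group values aggregate to a dual (lower) bound. The precise weighting and the hierarchy of bounds are those developed in the paper.

import random

def sampled_partition_bound(scenarios, group_size, num_partitions, solve_group_subproblem):
    """Average, over sampled partitions, the aggregated group-subproblem values."""
    estimates = []
    for _ in range(num_partitions):
        shuffled = random.sample(scenarios, len(scenarios))
        # Partition the scenario set into subsets of (nearly) equal cardinality.
        groups = [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]
        # Aggregate the group-subproblem optimal values into one dual bound.
        estimates.append(sum(solve_group_subproblem(g) for g in groups))
    # Averaging over sampled partitions estimates the expected partition (EP) bound.
    return sum(estimates) / len(estimates)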

Satisficing Models under Uncertainty

Satisficing, as an approach to decision-making under uncertainty, aims at achieving solutions that satisfy the problem’s constraints as well as possible. Mathematical optimization problems that are related to this form of decision-making include the P-model of Charnes and Cooper (1963). In this paper, we propose a general framework of satisficing decision criteria, and show a …
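
For orientation, the P-model of Charnes and Cooper in its basic form maximizes the probability that an aspiration level is attained; a hedged sketch in generic notation (the framework proposed in the paper is broader):
\begin{equation*}
\max_{x \in X} \; \mathbb{P}\big( \tilde{z}(x) \ge \tau \big),
\end{equation*}
where $\tilde{z}(x)$ is an uncertain performance measure and $\tau$ a prescribed target level.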

Non-asymptotic confidence bounds for the optimal value of a stochastic program

We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same …
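
As background, the standard Sample Average Approximation (SAA) quantities referred to above, in generic notation (the paper's non-asymptotic constructions refine this picture): for $v^* = \min_{x \in X} \mathbb{E}[F(x,\xi)]$, the SAA optimal value
\begin{equation*}
\hat{v}_N \;=\; \min_{x \in X} \; \frac{1}{N} \sum_{i=1}^{N} F(x, \xi_i)
\end{equation*}
satisfies $\mathbb{E}[\hat{v}_N] \le v^*$, so replications of $\hat{v}_N$ yield statistical lower bounds on $v^*$, while evaluating any feasible $\bar{x}$ on an independent sample yields an unbiased estimate of $\mathbb{E}[F(\bar{x},\xi)] \ge v^*$ and hence a statistical upper bound.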

A Riemannian rank-adaptive method for low-rank optimization

This paper presents an algorithm that solves optimization problems on a matrix manifold $\mathcal{M} \subseteq \mathbb{R}^{m \times n}$ with an additional rank inequality constraint. The algorithm resorts to well-known Riemannian optimization schemes on fixed-rank manifolds, combined with new mechanisms to increase or decrease the rank. The convergence of the algorithm is analyzed and a weighted …
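
A structural sketch of the rank-adaptive pattern described above, with hypothetical callables passed in (fixed_rank_solve, should_decrease, should_increase, change_rank); the actual rank increase/decrease mechanisms and their convergence analysis are those of the paper.

def rank_adaptive(X0, r0, r_max, fixed_rank_solve, should_decrease, should_increase, change_rank, max_outer=50):
    """Alternate fixed-rank Riemannian optimization with rank updates."""
    X, r = X0, r0
    for _ in range(max_outer):
        # Run a Riemannian solver on the manifold of rank-r matrices.
        X = fixed_rank_solve(X, r)
        if should_decrease(X, r) and r > 1:
            r -= 1
            X = change_rank(X, r)   # e.g. truncate nearly-zero singular values
        elif should_increase(X, r) and r < r_max:
            r += 1
            X = change_rank(X, r)   # e.g. add a rank-one step along a descent direction
        else:
            return X, r             # no rank change warranted: accept the iterate
    return X, r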

A Practical Price Optimization Approach for Omni-channel Retailing

Consumers are increasingly navigating across sales channels to make purchases. The common retail practice of pricing channels independently cannot achieve the profitable coordination required between channels. As part of a joint partnership agreement with IBM Commerce, we engaged with three major retailers over two years and developed advanced omni-channel pricing (OCP) solutions …

Fast convex optimization via inertial dynamics with Hessian driven damping

We first study the fast minimization properties of the trajectories of the second-order evolution equation \begin{equation*} \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \beta \nabla^2 \Phi (x(t))\dot{x} (t) + \nabla \Phi (x(t)) = 0, \end{equation*} where $\Phi : \mathcal H \to \mathbb R$ is a smooth convex function acting on a real Hilbert space $\mathcal H$, and …
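
Purely as an illustration (not taken from the paper), the dynamic can be discretized with a simple time-stepping scheme; here for a quadratic $\Phi(x) = \tfrac{1}{2} x^{\top} A x$, so that $\nabla \Phi(x) = Ax$ and $\nabla^2 \Phi(x) = A$, with step size and parameter values chosen arbitrarily for the sketch.

import numpy as np

def inertial_hessian_damping(A, x0, alpha=3.1, beta=1.0, h=1e-3, T=20.0):
    """Time-step x'' + (alpha/t) x' + beta * Hess(Phi)(x) x' + grad Phi(x) = 0 for quadratic Phi."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)        # v approximates the velocity \dot{x}(t)
    t = 1.0                     # start away from t = 0 to avoid the alpha/t singularity
    for _ in range(int(T / h)):
        accel = -(alpha / t) * v - beta * (A @ v) - A @ x   # acceleration given by the ODE
        v = v + h * accel
        x = x + h * v           # semi-implicit (symplectic-Euler-style) update
        t += h
    return x

# Example: Phi(x) = 0.5 x^T A x has its minimizer at the origin.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
print(inertial_hessian_damping(A, x0=[1.0, -1.0]))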

A new algorithm for solving planar multiobjective location problems involving the Manhattan norm

This paper is devoted to the study of unconstrained planar multiobjective location problems, where distances between points are defined by means of the Manhattan norm. By identifying all nonessential objectives, we develop an effective algorithm for generating the whole set of efficient solutions. We prove the correctness of this algorithm and present some computational results, …
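
The underlying problem class, written in generic notation (the paper's formulation may also include weights): given points $a_1, \dots, a_m \in \mathbb{R}^2$, determine the efficient (Pareto-optimal) solutions of
\begin{equation*}
\min_{x \in \mathbb{R}^2} \; \big( \|x - a_1\|_1, \, \dots, \, \|x - a_m\|_1 \big),
\qquad \|z\|_1 = |z_1| + |z_2|,
\end{equation*}
where $\|\cdot\|_1$ is the Manhattan norm.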

Online Learning for Strong Branching Approximation in Branch-and-Bound

We present an online learning approach to variable branching in branch-and-bound for mixed-integer linear problems. Our approach consists of learning strong branching scores in an online fashion and using them to make branching decisions. More specifically, numerical scores are used to rank the branching candidates. If, for a given variable, the learned approximation is …
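
A minimal sketch of the general pattern (learned scores with a fallback to exact strong branching when the approximation is not yet trusted), using hypothetical callables; the actual reliability test and online update rule are those proposed in the paper.

def select_branching_variable(candidates, predict_score, strong_branching_score, is_reliable, update_model):
    """Rank branching candidates by learned scores, falling back to exact strong branching."""
    scored = []
    for var in candidates:
        if is_reliable(var):
            score = predict_score(var)            # cheap learned approximation of the score
        else:
            score = strong_branching_score(var)   # expensive exact strong branching score
            update_model(var, score)              # online learning step on the observed score
        scored.append((score, var))
    # Branch on the candidate with the highest (approximate) strong branching score.
    return max(scored, key=lambda pair: pair[0])[1]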