Inverse Parametric Optimization with an Application to Hybrid System Control

We present a number of results on inverse parametric optimization and its application to hybrid system control. We show that any function that can be written as the difference of two convex functions can also be written as a linear mapping of the solution to a convex parametric optimization problem. We exploit these results in …

Multiperiod Portfolio Optimization with General Transaction Costs

We analyze the properties of the optimal portfolio policy for a multiperiod mean-variance investor facing multiple risky assets in the presence of general transaction costs such as proportional, market impact, and quadratic transaction costs. For proportional transaction costs, we find that a buy-and-hold policy is optimal: if the starting portfolio is outside a parallelogram-shaped no-trade …
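The buy-and-hold structure described above can be illustrated with a deliberately simplified one-asset toy (the function name, the single-asset setting, and the symmetric band are ours for illustration; the paper's no-trade region is a multi-asset parallelogram):

```python
def rebalance(w, target, band):
    """One-asset sketch of a no-trade-region policy: if the current
    weight w lies inside [target - band, target + band], do not trade;
    otherwise trade only up to the nearest boundary of the region."""
    lo, hi = target - band, target + band
    if w < lo:
        return lo   # buy up to the lower boundary
    if w > hi:
        return hi   # sell down to the upper boundary
    return w        # inside the region: hold

assert rebalance(0.30, target=0.50, band=0.10) == 0.40  # below: buy to boundary
assert rebalance(0.55, target=0.50, band=0.10) == 0.55  # inside: no trade
assert rebalance(0.75, target=0.50, band=0.10) == 0.60  # above: sell to boundary
```

Trading only to the boundary, never to the target itself, is what makes the policy robust to proportional costs: any further trade would incur costs that exceed the marginal mean-variance benefit.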

On Lower Complexity Bounds for Large-Scale Smooth Convex Optimization

In this note we present tight lower bounds on the information-based complexity of large-scale smooth convex minimization problems. We demonstrate, in particular, that the k-step Conditional Gradient (a.k.a. Frank-Wolfe) algorithm as applied to minimizing smooth convex functions over the n-dimensional box with n ≥ k is optimal, up to an O(ln n)-factor, in terms of …

Convex relaxation for finding planted influential nodes in a social network

We consider the problem of maximizing influence in a social network. We focus on the case that the social network is a directed bipartite graph whose arcs join senders to receivers. We consider both deterministic networks and probabilistic graphical models, that is, the so-called “cascade” model. The problem is to find the …

A Generalized Proximal Point Algorithm and its Convergence Rate

We propose a generalized proximal point algorithm (PPA), in the generic setting of finding a zero point of a maximal monotone operator. In addition to the classical PPA, a number of benchmark operator splitting methods in the PDE and optimization literatures, such as the Douglas-Rachford splitting method, the Peaceman-Rachford splitting method, the alternating direction method of multipliers, the generalized …
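The setting above can be sketched concretely. A minimal illustration, under our own assumptions (the operator T = ∂|· − 2|, the resolvent parameter λ = 0.5, and the relaxation factor are ours, not the paper's scheme): the classical PPA iterates the resolvent (I + λT)⁻¹, and a simple relaxed variant averages the iterate with the resolvent output.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal map of lam * |.|: shrink v toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ppa(prox, x0, rho=1.0, steps=100):
    """(Relaxed) proximal point iteration for a maximal monotone operator T,
    given its resolvent prox = (I + lam*T)^{-1}.
    rho = 1 recovers the classical PPA; rho in (0, 2) gives a relaxed variant."""
    x = x0
    for _ in range(steps):
        x = (1 - rho) * x + rho * prox(x)
    return x

# T = subdifferential of f(x) = |x - 2|; the zero of T is the minimizer x* = 2.
prox = lambda x: 2.0 + soft_threshold(x - 2.0, 0.5)  # resolvent with lam = 0.5
x = ppa(prox, x0=10.0, rho=1.5)
assert abs(x - 2.0) < 1e-6
```

Splitting methods such as Douglas-Rachford can be viewed as PPA applied to a suitably constructed monotone operator, which is how a single convergence-rate analysis can cover them all.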

New Analysis and Results for the Conditional Gradient Method

We present new results for the conditional gradient method (also known as the Frank-Wolfe method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial …

A Deterministic Rescaled Perceptron Algorithm

The perceptron algorithm is a simple iterative procedure for finding a point in a convex cone $F$. At each iteration, the algorithm only involves a query of a separation oracle for $F$ and a simple update on a trial solution. The perceptron algorithm is guaranteed to find a feasible point in $F$ after $O(1/\tau_F^2)$ iterations, …
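The classical update described above can be sketched as follows (a minimal version, assuming the cone is given explicitly by linear inequalities $Ax > 0$, so the "separation oracle" is just a scan for a violated row; the names are ours):

```python
import numpy as np

def perceptron(A, max_iters=10000):
    """Classical perceptron: find x with A @ x > 0, where the rows of A
    define the cone. Each iteration queries the separation oracle (a
    violated row) and adds that row to the trial solution."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iters):
        viol = np.flatnonzero(A @ x <= 0)   # separation-oracle query
        if viol.size == 0:
            return x                         # strictly feasible point found
        x += A[viol[0]]                      # simple additive update
    return None                              # budget exhausted

# A wide cone in R^2: the constraints x1 > 0 and x2 > 0.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
x = perceptron(A)
assert x is not None and np.all(A @ x > 0)
```

The $O(1/\tau_F^2)$ iteration bound degrades as the cone's width $\tau_F$ shrinks, which is precisely what rescaling variants of the algorithm aim to fix.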

On smoothness properties of optimal value functions at the boundary of their domain under complete convexity

This article studies continuity and directional differentiability properties of optimal value functions, in particular at boundary points of their domain. We extend and complement standard continuity results from W.W. Hogan, Point-to-set maps in mathematical programming, SIAM Review, Vol. 15 (1973), 591-603, for abstract feasible set mappings under complete convexity as well as standard differentiability results …

Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems

The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm …
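The quadratic setting the abstract considers can be sketched with a small box-constrained QP (a minimal illustration under our own splitting and names, not the paper's analysis); the penalty parameter `rho` below is the kind of algorithm parameter whose optimal tuning is the paper's subject:

```python
import numpy as np

def admm_qp_box(Q, q, lo, hi, rho=1.0, steps=200):
    """ADMM for min 0.5 x'Qx + q'x  s.t. lo <= x <= hi, split as
    f(x) = 0.5 x'Qx + q'x, g(z) = indicator of the box, constraint x = z."""
    n = Q.shape[0]
    x = z = u = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))   # x-update solve, cached once
    for _ in range(steps):
        x = M @ (rho * (z - u) - q)          # x-update: linear solve
        z = np.clip(x + u, lo, hi)           # z-update: project onto the box
        u = u + x - z                        # scaled dual update
    return z

# For 0.5||x||^2 - [2,2]'x over [-1,1]^2 the solution clips to (1, 1).
Q = np.eye(2)
q = np.array([-2.0, -2.0])
z = admm_qp_box(Q, q, -1.0, 1.0, rho=1.0)
assert np.allclose(z, [1.0, 1.0], atol=1e-4)
```

In this quadratic case the iteration is linear in (x, z, u), so the convergence rate is governed by the spectral radius of the iteration map as a function of rho, which is what makes a closed-form optimal parameter tractable.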