HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without … Read more
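
As a rough illustration of the lock-free pattern (a minimal sketch on a hypothetical least-squares instance, not the paper's code; note that CPython's GIL limits true parallelism here, so this only shows the update structure), several workers update one shared parameter vector with no locking at all:

    import numpy as np
    from threading import Thread

    def worker(w, X, y, n_steps, lr, seed):
        # No lock protects the shared vector w: concurrent, possibly
        # overlapping updates are simply allowed to race (the Hogwild idea).
        rng = np.random.default_rng(seed)
        for _ in range(n_steps):
            i = rng.integers(len(y))              # sample one training example
            grad = (X[i] @ w - y[i]) * X[i]       # per-example least-squares gradient
            w -= lr * grad                        # unsynchronized in-place update

    rng = np.random.default_rng(0)
    n, d = 2000, 10
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d)
    w = np.zeros(d)                               # shared parameter vector, no lock
    threads = [Thread(target=worker, args=(w, X, y, 5000, 0.01, s)) for s in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(np.round(w, 3))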

Hidden convexity in partially separable optimization

The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions that are at the same time globally optimal solutions of the original nonconvex problems. Such a hidden convexity property has so far been limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with some … Read more
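
For context, a standard instance of the known one-constraint case (not taken from the paper) is the trust-region subproblem
\[
\min_{x \in \mathbb{R}^n} \; x^\top A x + 2\, b^\top x \quad \text{s.t.} \quad \|x\|_2^2 \le 1,
\]
which is nonconvex whenever $A$ is indefinite, yet whose standard semidefinite (Shor) relaxation is exact, so a globally optimal solution can be recovered from an optimal solution of the relaxation.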

Robust solutions of optimization problems affected by uncertain probabilities

In this paper we focus on robust linear optimization problems with uncertainty regions defined by phi-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on phi-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization … Read more
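
As a reminder of the construction (standard in the phi-divergence literature; the notation below is illustrative rather than the paper's): given a nominal probability vector $q$ and a convex function $\phi$ with $\phi(1) = 0$, the uncertainty region takes the form
\[
\mathcal{U}_\rho = \Big\{ p \ge 0 : \textstyle\sum_i p_i = 1, \;\; \sum_i q_i \, \phi\!\big(p_i / q_i\big) \le \rho \Big\},
\]
where, for example, $\phi(t) = t\log t - t + 1$ yields the Kullback-Leibler divergence and $\phi(t) = (t-1)^2$ the chi-squared divergence, and the radius $\rho$ can be calibrated so that $\mathcal{U}_\rho$ is a confidence set for the unknown probabilities.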

Multi-objective GRASP with path-relinking

In this paper we propose an adaptation of the GRASP metaheuristic to solve multi-objective combinatorial optimization problems. In particular we describe several alternatives to specialize the construction and improvement components of GRASP when two or more objectives are considered. GRASP has been successfully coupled with path-relinking for single-objective optimization. In this paper, we propose different … Read more
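
A schematic bi-objective GRASP on a toy knapsack may help fix ideas (illustrative only: the construction scoring, local search, and path-relinking specializations proposed in the paper are not reproduced here, and all names and parameters below are hypothetical):

    import random

    def grasp_biobjective(values1, values2, weights, capacity, iters=100, alpha=0.3, seed=0):
        rng = random.Random(seed)
        n = len(weights)
        objs = lambda s: (sum(values1[i] for i in s), sum(values2[i] for i in s))
        dominates = lambda a, b: a != b and a[0] >= b[0] and a[1] >= b[1]

        def construct(w):
            # Greedy randomized construction guided by a weighted sum of the objectives.
            sol, load = set(), 0
            while True:
                cand = [i for i in range(n) if i not in sol and load + weights[i] <= capacity]
                if not cand:
                    return sol
                score = {i: (w * values1[i] + (1 - w) * values2[i]) / weights[i] for i in cand}
                hi, lo = max(score.values()), min(score.values())
                rcl = [i for i in cand if score[i] >= hi - alpha * (hi - lo)]  # restricted candidate list
                pick = rng.choice(rcl)
                sol.add(pick)
                load += weights[pick]

        archive = []  # nondominated (approximate Pareto) solutions found so far
        for _ in range(iters):
            s = construct(rng.random())  # random weight gives varied search directions
            if not any(dominates(objs(a), objs(s)) for a in archive):
                archive = [a for a in archive if not dominates(objs(s), objs(a))] + [s]
        return [(sorted(a), objs(a)) for a in archive]

    print(grasp_biobjective([6, 4, 3, 5], [2, 7, 4, 3], [3, 4, 2, 3], capacity=7))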

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB … Read more
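
For reference, one common statement of the LB iteration for basis pursuit, $\min\{\|x\|_1 : Ax = b\}$ (standard in the linearized Bregman literature; the parametrization may differ from the one used in the paper), is
\[
v^{k+1} = v^k + A^\top\big(b - A x^k\big), \qquad
x^{k+1} = \delta\,\operatorname{shrink}\big(v^{k+1}, \mu\big),
\]
with $\operatorname{shrink}(v,\mu)_i = \operatorname{sign}(v_i)\max\{|v_i| - \mu,\, 0\}$; the accelerated variant then applies Nesterov-type extrapolation to the gradient descent interpretation in the dual.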

Convex relaxations of chance constrained optimization problems

In this paper we develop convex relaxations of chance constrained optimization problems in order to obtain lower bounds on the optimal value. Unlike existing statistical lower bounding techniques, our approach is designed to provide deterministic lower bounds. We show that a version of the proposed scheme leads to a tractable convex relaxation when the chance … Read more
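
For orientation, a generic chance constrained program (illustrative notation, not necessarily the exact setting of the paper) reads
\[
\min_{x \in X} \; c^\top x \quad \text{s.t.} \quad \mathbb{P}\big(G(x,\xi) \le 0\big) \ge 1 - \epsilon,
\]
where $\xi$ is a random vector and $\epsilon \in (0,1)$ is the prescribed risk level; the relaxations discussed here bound the optimal value of such problems from below.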

An Outcome Space Algorithm for Minimizing the Product of Two Convex Functions over a Convex Set

This paper presents an outcome-space outer approximation algorithm for solving the problem of minimizing the product of two convex functions over a compact convex set in $\R^n$. The proposed algorithm is shown to be convergent, and computational experience is reported.
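
As a brief orientation (notation illustrative), the problem considered is
\[
\min \; f(x)\, g(x) \quad \text{s.t.} \quad x \in C,
\]
with $f$ and $g$ convex and positive on the compact convex set $C \subset \R^n$; the outcome-space approach works in the two-dimensional image $\{(f(x), g(x)) : x \in C\}$ and outer-approximates it while minimizing the bilinear objective $y_1 y_2$.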

Coordinated cutting plane generation via multi-objective separation

In cutting plane methods, the question of how to generate the “best possible” set of cuts is both central and crucial. We propose a lexicographic multi-objective cutting plane generation scheme that generates, among all the maximally violated valid inequalities of a given family, an inequality that is undominated and maximally diverse w.r.t. the cuts that … Read more
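
Schematically (illustrative notation, not the paper's exact model): given the point $x^*$ to be cut off and a family of valid inequalities $\alpha^\top x \le \beta$, a lexicographic separation first maximizes the violation and then, among the maximally violated cuts, optimizes a diversity measure $d(\alpha,\beta)$ with respect to previously generated cuts:
\[
z^* = \max_{(\alpha,\beta)} \; \alpha^\top x^* - \beta,
\qquad\text{then}\qquad
\max_{(\alpha,\beta)} \; d(\alpha,\beta) \;\; \text{s.t.} \;\; \alpha^\top x^* - \beta = z^* .
\]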

Optimal Newton-type methods for nonconvex smooth optimization problems

We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton’s method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is $\alpha$-Hölder continuous (for … Read more
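
As one concrete member of this class (a standard formulation, included only for orientation): the cubic regularization variant chooses each step by approximately minimizing the model
\[
m_k(s) \;=\; f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\,\|s\|^3,
\]
while trust-region variants instead minimize the quadratic part of $m_k$ subject to $\|s\| \le \Delta_k$.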