On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets

In this paper we prove that any nonconvex quadratic problem over some set $K\subset \mathbb{R}^n$ with additional linear and binary constraints can be rewritten as a linear problem over the cone dual to the cone of K-semidefinite matrices. We show that when K is defined by one quadratic constraint or by one concave quadratic constraint and … Read more
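
For orientation, the following LaTeX sketch records a Burer-type reformulation of the kind this result generalizes; the exact constraint set and the cone $\mathcal{C}^{*}_{K'}$ below are assumptions based on the standard completely positive formulation, not a verbatim statement of the paper's theorem.

% Hedged sketch (assumed Burer-type form) for a problem
%   min x^T Q x + 2 c^T x  s.t.  a_i^T x = b_i (i=1,...,m),
%   x_j binary for j in B,  x in K:
\begin{align*}
\min_{x,\,X}\quad & \langle Q, X\rangle + 2\,c^{\top} x\\
\text{s.t.}\quad  & a_i^{\top} x = b_i,\quad a_i^{\top} X a_i = b_i^{2}, \qquad i=1,\dots,m,\\
                  & X_{jj} = x_j, \qquad j\in B,\\
                  & \begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix}\in
                    \mathcal{C}^{*}_{K'}, \qquad
                    \mathcal{C}^{*}_{K'} := \operatorname{cl}\operatorname{conv}
                    \bigl\{\, z z^{\top} : z\in K' \,\bigr\},\quad K' := \{1\}\times K,
\end{align*}
% i.e. a linear problem over the dual of the cone of K'-semidefinite matrices.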

Implementing the simplex method as a cutting-plane method

We show that the simplex method can be interpreted as a cutting-plane method, provided that a special pricing rule is used. This approach is motivated by the recent success of the cutting-plane method in the solution of special stochastic programming problems. We compare the classic Dantzig pricing rule and the rule that derives from the … Read more
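
As a point of reference for the pricing discussion, here is a minimal Python sketch of the classic Dantzig pricing rule (entering variable chosen by most negative reduced cost); the function and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def dantzig_pricing(c, A, basis, nonbasis):
    """Return the index of the entering variable under Dantzig's rule,
    i.e. the nonbasic variable with the most negative reduced cost,
    or None if the current basis is optimal (illustrative sketch)."""
    B = A[:, basis]
    # Simplex multipliers: y solves B^T y = c_B
    y = np.linalg.solve(B.T, c[basis])
    # Reduced costs of the nonbasic variables
    reduced = c[nonbasis] - A[:, nonbasis].T @ y
    j = int(np.argmin(reduced))
    return nonbasis[j] if reduced[j] < -1e-10 else None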

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without … Read more
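
The following is a minimal Python sketch of the lock-free update scheme the abstract refers to: several threads read and write a shared parameter vector without any locking, each applying SGD steps for a least-squares loss. The loss, step size, and data layout are illustrative assumptions; in practice the scheme is implemented in a compiled language so the updates run truly in parallel.

import numpy as np
from threading import Thread

def hogwild_sgd(A, b, n_threads=4, epochs=5, lr=0.01):
    """Lock-free parallel SGD on 0.5*||Ax - b||^2 (illustrative sketch)."""
    n_samples, n_features = A.shape
    x = np.zeros(n_features)              # shared parameters, no lock

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(epochs * n_samples // n_threads):
            i = rng.integers(n_samples)
            grad = (A[i] @ x - b[i]) * A[i]   # gradient of one sample
            x -= lr * grad                    # unsynchronized update

    threads = [Thread(target=worker, args=(s,)) for s in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x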

A Nonlinear Conjugate Gradient Algorithm with An Optimal Property and An Improved Wolfe Line Search

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the Wolfe line search and guarantee the global convergence of … Read more
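
As context for the line-search discussion, here is a minimal Python sketch of a nonlinear conjugate gradient loop that uses SciPy's Wolfe-condition line search; the restart rule and the Polak-Ribiere+ update are generic illustrative choices, not the family or the improved line search proposed in the paper.

import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Generic nonlinear CG with a Wolfe line search (illustrative sketch)."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:              # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x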

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB … Read more
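
For concreteness, the (non-accelerated) linearized Bregman iteration for basis pursuit can be sketched in a few lines of Python; the step-size and threshold parametrization below is one common form and is an assumption, not the exact statement used in the paper.

import numpy as np

def soft_threshold(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, delta=1.0, n_iter=500):
    """Basic LB iteration for min ||x||_1 s.t. Ax = b (illustrative sketch)."""
    m, n = A.shape
    v = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)              # gradient step on the residual
        x = delta * soft_threshold(v, mu)   # shrinkage step
    return x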

Multi-objective GRASP with path-relinking

In this paper we propose an adaptation of the GRASP metaheuristic to solve multi-objective combinatorial optimization problems. In particular we describe several alternatives to specialize the construction and improvement components of GRASP when two or more objectives are considered. GRASP has been successfully coupled with path-relinking for single-objective optimization. In this paper, we propose different … Read more
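
A bare-bones Python skeleton of a single-objective GRASP iteration (greedy randomized construction followed by local search) is sketched below for orientation; the problem interface (candidate costs, local search, evaluation) and the parameter alpha are illustrative assumptions, and the multi-objective and path-relinking components discussed in the paper are not shown.

import random

def grasp_iteration(candidates, cost, local_search, alpha=0.3):
    """One GRASP iteration: greedy randomized construction with a
    restricted candidate list (RCL), then local search (illustrative sketch)."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: cost(solution, c) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in remaining if costs[c] <= threshold]
        choice = random.choice(rcl)          # randomized greedy choice
        solution.append(choice)
        remaining.remove(choice)
    return local_search(solution)

def grasp(candidates, cost, local_search, evaluate, iters=50, alpha=0.3):
    """Repeat GRASP iterations and keep the best solution found."""
    return min((grasp_iteration(candidates, cost, local_search, alpha)
                for _ in range(iters)), key=evaluate)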

Robust solutions of optimization problems affected by uncertain probabilities

In this paper we focus on robust linear optimization problems with uncertainty regions defined by phi-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on phi-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization … Read more
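
To make the link between phi-divergences and confidence sets concrete, the snippet below computes the radius of a Kullback-Leibler uncertainty set around an empirical probability vector using the asymptotic chi-squared calibration common in this line of work; treat the exact scaling as an assumption rather than a quotation of the paper's formula.

import numpy as np
from scipy.stats import chi2

def kl_uncertainty_set(counts, confidence=0.95):
    """Empirical probability vector q and radius rho of the KL ball
    {p : sum_i p_i log(p_i / q_i) <= rho}, calibrated so the ball is an
    approximate confidence set for the true probabilities (illustrative sketch)."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    m = counts.size
    q = counts / N
    # Asymptotic calibration: rho = chi2_{m-1, 1-alpha} / (2N) for the KL divergence
    rho = chi2.ppf(confidence, df=m - 1) / (2.0 * N)
    return q, rho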

Hidden convexity in partially separable optimization

The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions which at the same time are global optimal solutions of the original nonconvex problems. Such a hidden convexity property was so far limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with some … Read more
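
As a reminder of the one-constraint case the abstract alludes to, the classical trust-region subproblem already exhibits this hidden convexity: its semidefinite relaxation is exact. The LaTeX sketch below records that standard background fact; it is not the new partially separable class introduced in the paper.

\begin{align*}
\min_{x\in\mathbb{R}^n}\; & x^{\top} A x + 2\,b^{\top} x
  \quad\text{s.t.}\quad \|x\|^2 \le 1
  \qquad (\text{nonconvex if } A \not\succeq 0),\\[2pt]
\min_{x,\,X}\; & \langle A, X\rangle + 2\,b^{\top} x
  \quad\text{s.t.}\quad \operatorname{tr}(X) \le 1,\;
  \begin{pmatrix} 1 & x^{\top}\\ x & X\end{pmatrix} \succeq 0.
\end{align*}
% The convex relaxation (second problem) attains the same optimal value as the
% nonconvex trust-region subproblem, and a rank-one optimal X recovers a global
% solution x of the original problem.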

An Outcome Space Algorithm for Minimizing the Product of Two Convex Functions over a Convex Set

This paper presents an outcome-space outer approximation algorithm for solving the problem of minimizing the product of two convex functions over a compact convex set in $\mathbb{R}^n$. The proposed algorithm is convergent, and computational experiences are reported. … Read more