The mesh adaptive direct search algorithm with treed Gaussian process surrogates

This work introduces the use of the treed Gaussian process (TGP) as a surrogate model within the mesh adaptive direct search (MADS) framework for constrained blackbox optimization. It extends the surrogate management framework (SMF) to nonsmooth optimization under general constraints. MADS uses the TGP in two ways: first, as a surrogate for blackbox evaluations; and second, …
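
A minimal sketch of the general surrogate-assisted direct-search idea behind such a scheme: candidate poll points are ranked by an inexpensive surrogate before the expensive blackbox is evaluated, and the poll stops at the first improvement. The quadratic surrogate, the simple coordinate poll, and the toy objective below are illustrative stand-ins, not the TGP model or the actual MADS algorithm from the paper.

```python
import numpy as np

def blackbox(x):
    # Stand-in for an expensive blackbox objective (illustrative only).
    return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.5) ** 2

def fit_surrogate(X, y):
    # Cheap separable quadratic surrogate fit by least squares:
    # f(x) ~ c + b.x + a.(x**2); a stand-in for the treed Gaussian process.
    X = np.asarray(X)
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y), rcond=None)
    return lambda x: coef[0] + coef[1:3] @ x + coef[3:5] @ (x ** 2)

def surrogate_assisted_search(x0, budget=60):
    x = np.array(x0, dtype=float)
    fx, step = blackbox(x), 1.0
    X, y = [x.copy()], [fx]
    while budget > 0 and step > 1e-6:
        poll = [x + step * d for d in (np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                                       np.array([0.0, 1.0]), np.array([0.0, -1.0]))]
        # Rank poll points by the surrogate, then evaluate opportunistically.
        model = fit_surrogate(X, y)
        poll.sort(key=model)
        improved = False
        for p in poll:
            fp = blackbox(p)
            budget -= 1
            X.append(p)
            y.append(fp)
            if fp < fx:
                x, fx, improved = p, fp, True
                break
        step = step * 2.0 if improved else step / 2.0
    return x, fx

print(surrogate_assisted_search([3.0, 3.0]))
```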

Real-Time Optimization Strategies for Building Systems

We propose real-time optimization strategies for energy management in building systems. We have found that exploiting building-wide multivariable interactions among CO2, humidity, pressure, occupancy, and temperature leads to significant reductions in energy intensity compared with traditional strategies. Our analysis indicates that it is possible to obtain energy savings of more than 50% compared with …

On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets

In this paper we prove that any nonconvex quadratic problem over some set $K\subset \mathbb{R}^n$ with additional linear and binary constraints can be rewritten as a linear problem over the cone dual to the cone of $K$-semidefinite matrices. We show that when $K$ is defined by one quadratic constraint or by one concave quadratic constraint and …
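
For orientation, the special case $K=\mathbb{R}^n_+$ recovers Burer's well-known completely positive reformulation, sketched below; the general cone and the exact assumptions treated in the paper differ and are not reproduced here.

```latex
% Illustrative special case K = R^n_+ (Burer): the mixed-binary quadratic program
\min_{x \ge 0} \; x^\top Q x + 2 c^\top x
\quad \text{s.t.} \quad a_i^\top x = b_i \ (i = 1,\dots,m), \qquad x_j \in \{0,1\} \ (j \in B),
% is equivalent, under Burer's assumptions, to the linear conic problem
\min_{x,\,X} \; \langle Q, X \rangle + 2 c^\top x
\quad \text{s.t.} \quad a_i^\top x = b_i, \quad a_i^\top X a_i = b_i^2, \quad X_{jj} = x_j \ (j \in B), \quad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \in \mathcal{CP}^{\,n+1},
% where CP^{n+1} denotes the completely positive matrices of order n+1,
% i.e. the dual cone of the copositive cone.
```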

A Nonlinear Conjugate Gradient Algorithm with An Optimal Property and An Improved Wolfe Line Search

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the Wolfe line search and guarantee the global convergence of …
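
A minimal sketch of a generic nonlinear conjugate gradient loop with a (standard) Wolfe line search, using scipy.optimize.line_search. The beta rule below is Polak-Ribière+ used purely as a stand-in; it is not the family derived from the scaled memoryless BFGS direction, and the improved line search proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic nonlinear CG with a standard Wolfe line search (PR+ beta as a stand-in)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:             # Wolfe search failed: restart along steepest descent
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+ (stand-in rule)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on the Rosenbrock function
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
print(cg_minimize(f, grad, np.array([-1.2, 1.0])))
```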

HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show, using novel theoretical analysis, algorithms, and implementation, that SGD can be implemented *without …
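
A toy Python sketch of the lock-free update pattern HOGWILD! is named for: several threads apply sparse SGD updates to one shared weight vector with no locks, each write touching only the few coordinates its example uses. The data, model, and step size are made up for illustration, and CPython's GIL means this shows the access pattern rather than a real parallel speedup.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
n_features, n_samples, n_threads = 50, 2000, 4

# Sparse synthetic least-squares data: each example touches only 5 coordinates.
true_w = rng.normal(size=n_features)
rows = [rng.choice(n_features, size=5, replace=False) for _ in range(n_samples)]
vals = [rng.normal(size=5) for _ in range(n_samples)]
ys = [v @ true_w[idx] for idx, v in zip(rows, vals)]

w = np.zeros(n_features)              # shared parameter vector, never locked
step = 0.05

def worker(sample_ids):
    for i in sample_ids:
        idx, v, y = rows[i], vals[i], ys[i]
        err = v @ w[idx] - y          # read possibly stale coordinates
        w[idx] -= step * err * v      # lock-free write to this example's coordinates only

threads = [threading.Thread(target=worker, args=(range(t, n_samples, n_threads),))
           for t in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true weights:", np.linalg.norm(w - true_w))
```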

Hidden convexity in partially separable optimization

The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions that are at the same time globally optimal solutions of the original nonconvex problems. Such a hidden convexity property was so far limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with some …
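
The classical one-constraint instance of this phenomenon is the trust-region subproblem, recalled below, where the SDP relaxation is exact by the S-lemma; the partially separable extension developed in the paper is not restated here.

```latex
% Trust-region subproblem (Q possibly indefinite):
\min_{x \in \mathbb{R}^n} \; x^\top Q x + 2 c^\top x
\quad \text{s.t.} \quad \|x\|^2 \le 1.
% Its SDP relaxation, obtained by introducing X in place of x x^\top,
\min_{x,\,X} \; \langle Q, X \rangle + 2 c^\top x
\quad \text{s.t.} \quad \operatorname{tr}(X) \le 1, \qquad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0,
% attains its optimum at a rank-one matrix, so its optimal value equals that of the
% original nonconvex problem (a consequence of the S-lemma).
```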

An Outcome Space Algorithm for Minimizing the Product of Two Convex Functions over a Convex Set

This paper presents an outcome-space outer approximation algorithm for solving the problem of minimizing the product of two convex functions over a compact convex set in $\mathbb{R}^n$. The proposed algorithm is shown to be convergent, and computational experience is reported.
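
A hedged sketch of the outcome-space idea for this problem class, in generic notation that need not match the paper's: the $n$-dimensional decision problem is replaced by an equivalent problem over its two-dimensional image, where outer approximation by polyhedra is cheap.

```latex
% f_1, f_2 convex and positive on the compact convex set D \subset \mathbb{R}^n.
\min_{x \in D} \; f_1(x)\, f_2(x)
% is equivalent to a problem over the two-dimensional outcome set
% Y = \{ (f_1(x), f_2(x)) : x \in D \}:
\min_{y \in Y} \; y_1\, y_2 .
% An outer approximation algorithm then refines a sequence of polyhedra
% containing Y in \mathbb{R}^2 rather than working directly in \mathbb{R}^n.
```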

Optimal Newton-type methods for nonconvex smooth optimization problems

We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is $\alpha$-Hölder continuous (for …
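
For concreteness, one member of this class of second-order iterations is cubic regularization of Newton's method, whose step is sketched below; the lower-bound constructions and worst-case complexity results of the paper are not restated.

```latex
% Cubic regularization of Newton's method: with g_k = \nabla f(x_k),
% H_k = \nabla^2 f(x_k) and regularization weight \sigma_k > 0, the step s_k
% (approximately) minimizes the cubic model
s_k \in \arg\min_{s \in \mathbb{R}^n} \; g_k^\top s + \tfrac{1}{2}\, s^\top H_k s + \tfrac{\sigma_k}{3}\, \|s\|^3,
\qquad x_{k+1} = x_k + s_k .
```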

Inexact projected gradient method for vector optimization

In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative …
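
As a point of reference, a sketch of the exact projected-gradient direction that the inexact method relaxes, written for a vector objective $F=(f_1,\dots,f_m)$ ordered by $\mathbb{R}^m_+$ over a closed convex set $C$; the precise relative-error criterion admitted by the method is given in the paper.

```latex
% Exact projected gradient direction at x \in C (Pareto order induced by R^m_+):
d(x) \in \arg\min_{d:\; x + d \in C} \; \max_{i = 1,\dots,m} \nabla f_i(x)^\top d + \tfrac{1}{2}\, \|d\|^2 ,
% with x stationary when d(x) = 0; the inexact variant accepts directions that
% solve this subproblem only approximately, within a relative tolerance.
```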