Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic Optimization

We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings ranging from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We develop modified versions of a norm …
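
The mechanics of the common-random-number gradient estimate are easy to miss from the abstract alone. Below is a minimal sketch, not the paper's code: each finite difference reuses the same random seed for both function evaluations, so the stochastic noise largely cancels. The oracle `F`, the seed handling, and the step `h` are illustrative assumptions.

```python
import numpy as np

def fd_gradient_crn(F, x, seeds, h=1e-4):
    """Forward-difference estimate of grad E[F(x, xi)].

    Common random numbers: both evaluations in each difference reuse
    the SAME seed, so the correlated noise largely cancels. F, seeds,
    and h are illustrative placeholders, not the paper's interface.
    """
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        # same seed inside each difference -> noise cancels in F(x+e) - F(x)
        diffs = [F(x + e, s) - F(x, s) for s in seeds]
        g[i] = np.mean(diffs) / h
    return g

# toy oracle: quadratic plus seed-driven additive noise
def F(x, seed):
    noise = np.random.default_rng(seed).normal(scale=0.1)
    return 0.5 * float(x @ x) + noise

print(fd_gradient_crn(F, np.array([1.0, -2.0]), seeds=range(32)))
# true gradient is [1, -2]; with CRN the additive noise cancels exactly
```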

A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery

This article introduces a new Penalized Majorization-Minimization Subspace algorithm (P-MMS) for solving smooth, constrained optimization problems. In short, our approach consists of embedding a subspace algorithm in an inexact exterior penalty procedure. The subspace strategy, combined with a Majorization-Minimization step-size search, takes great advantage of the smoothness of the penalized cost function, while the penalty …
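
To fix ideas, here is a minimal sketch of an inexact exterior penalty loop of the kind described, with the inner solver replaced by plain gradient descent rather than the paper's MM subspace search; all names and constants are illustrative.

```python
import numpy as np

def exterior_penalty(f_grad, c, c_jac, x, rho=1.0, growth=10.0,
                     outer=5, inner=300):
    """Inexact exterior penalty: approximately minimize
    f(x) + rho * ||max(c(x), 0)||^2 for constraints c(x) <= 0,
    then raise rho. The inner solver here is plain gradient descent;
    the paper embeds an MM subspace search instead.
    """
    for _ in range(outer):
        lr = 0.4 / (1.0 + rho)            # crude stable step as rho grows
        for _ in range(inner):
            viol = np.maximum(c(x), 0.0)  # violations of c(x) <= 0
            x = x - lr * (f_grad(x) + 2.0 * rho * c_jac(x).T @ viol)
        rho *= growth
    return x

# toy problem: min ||x - (2, 2)||^2  subject to  x1 + x2 <= 1
f_grad = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
c      = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac  = lambda x: np.array([[1.0, 1.0]])
print(exterior_penalty(f_grad, c, c_jac, x=np.zeros(2)))  # -> approx (0.5, 0.5)
```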

A Simple Introduction to Higher Order Liftings for Binary Problems

A short, simple, and self-contained proof is presented showing that the $n$-th lifting for the max-cut polytope is exact. The proof re-derives the known observations that the max-cut polytope is the projection of a higher-dimensional regular simplex and that this simplex coincides with the $n$-th semidefinite lifting. An extension to reduce the dimension of higher order liftings …
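
For orientation, in notation not taken from the truncated text: the max-cut polytope in its $\pm 1$ correlation form and the convex-combination identity behind the simplex observation read

\[
  \mathrm{CUT}_n \;=\; \operatorname{conv}\bigl\{\, xx^{\mathsf T} : x \in \{-1,1\}^n \,\bigr\}
  \;=\; \Bigl\{\, \sum_{x} \lambda_x\, xx^{\mathsf T} \;:\; \lambda_x \ge 0,\ \sum_{x} \lambda_x = 1 \,\Bigr\};
\]

since $xx^{\mathsf T} = (-x)(-x)^{\mathsf T}$, the sum runs over the $2^{n-1}$ sign classes of $\{-1,1\}^n$, exhibiting $\mathrm{CUT}_n$ as a linear image of the standard simplex with $2^{n-1}$ vertices.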

SABRINA: A Stochastic Subspace Majorization-Minimization Algorithm

A wide class of problems involves the minimization of a coercive and differentiable function $F$ on $\mathbb{R}^N$ whose gradient cannot be evaluated in an exact manner. In such a context, many existing convergence results from the standard gradient-based optimization literature cannot be directly applied, and robustness to errors in the gradient is not necessarily guaranteed. This work …

A Derivation of Nesterov’s Accelerated Gradient Algorithm from Optimal Control Theory

Nesterov’s accelerated gradient algorithm is derived from first principles. The first principles are founded on the recently developed optimal control theory for optimization. The necessary conditions for optimal control generate a controllable dynamical system for accelerated optimization. Stabilizing this system via a control Lyapunov function generates an ordinary differential equation. An Euler discretization of the differential …
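
The abstract truncates before the discretization; for reference, and not as a claim about the paper's exact route, the standard statement of the algorithm being derived is, for an $L$-smooth convex $f$,

\[
  y_k = x_k + \tfrac{k-1}{k+2}\,\bigl(x_k - x_{k-1}\bigr), \qquad
  x_{k+1} = y_k - \tfrac{1}{L}\,\nabla f(y_k),
\]

whose continuous-time limit in related derivations is the ordinary differential equation $\ddot{X}(t) + \tfrac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0$; the control-Lyapunov construction here may stabilize a different dynamical system.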

A Different Perspective on the Stochastic Convex Feasibility Problem

We analyze a simple randomized subgradient method for approximating solutions to stochastic systems of convex functional constraints, the only input to the algorithm being the size of minibatches. By introducing a new notion of what is meant for a point to approximately solve the constraints, determining bounds on the expected number of iterations reduces to …
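
As a point of comparison (the paper's approximate-solution notion and stepsize sit behind the truncation), a classical randomized minibatch subgradient step for a system of linear inequalities looks like the sketch below; every name and constant is an illustrative placeholder.

```python
import numpy as np

def random_feasibility(halfspaces, x, batch_size=8, iters=500, seed=0):
    """Minibatch subgradient method for {x : a_i @ x <= b_i for all i}.

    Each iteration samples a minibatch, picks its most violated
    constraint, and takes a Polyak-type step (exact projection onto that
    halfspace). Stepsize and stopping rules are classical placeholders,
    not the paper's.
    """
    rng = np.random.default_rng(seed)
    A, b = halfspaces
    for _ in range(iters):
        idx = rng.choice(len(b), size=batch_size)
        viol = A[idx] @ x - b[idx]
        j = idx[int(np.argmax(viol))]
        v = A[j] @ x - b[j]
        if v <= 0:
            continue                       # whole minibatch satisfied
        g = A[j]                           # (sub)gradient of a_j @ x - b_j
        x = x - (v / (g @ g)) * g          # project onto {a_j @ x <= b_j}
    return x

# toy system: 100 random halfspaces, all containing the origin
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), np.ones(100)
print(random_feasibility((A, b), x=10.0 * np.ones(5)))
```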

Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality

Nonsmooth optimization problems arising in practice, whether in signal processing, statistical estimation, or modern machine learning, tend to exhibit beneficial smooth substructure: their domains stratify into “active manifolds” of smooth variation, which common proximal algorithms “identify” in finite time. Identification then entails a transition to smooth dynamics, and permits the use of second-order information for …

A New Insight on the Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. …
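
For context (the double-penalty relaxation itself sits behind the truncation), the classical augmented Lagrangian iteration for $\min_x f(x)$ subject to $Ax = b$ that such methods modify is

\[
  x^{k+1} \in \operatorname*{arg\,min}_{x}\; f(x) + \langle \lambda^k, Ax - b \rangle + \tfrac{\rho}{2}\,\|Ax - b\|^2,
  \qquad
  \lambda^{k+1} = \lambda^k + \rho\,\bigl(Ax^{k+1} - b\bigr).
\]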

Appointment Scheduling for Medical Diagnostic Centers Considering Time-Sensitive Pharmaceuticals: A Dynamic Robust Optimization Approach

This paper studies optimal criteria for the appointment scheduling of outpatients in a medical imaging center. The main goal of this study is to coordinate the assignments of radiopharmaceuticals and the scheduling of outpatients on imaging scanners. We study a special case of a molecular imaging center that offers services for various diagnostic procedures for …

On Properties of Univariate Max Functions at Local Maximizers

More than three decades ago, Boyd and Balakrishnan established a regularity result for the two-norm of a transfer function at maximizers. Their result extends easily to the statement that the maximum eigenvalue of a univariate real analytic Hermitian matrix family is twice continuously differentiable, with Lipschitz second derivative, at all local maximizers, a property that …
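
A toy instance, not from the paper, that is consistent with the statement:

\[
  A(t) = \begin{pmatrix} 1 - t^2 & 0 \\ 0 & t^2 \end{pmatrix}, \qquad
  \lambda_{\max}\bigl(A(t)\bigr) = \max\{\, 1 - t^2,\; t^2 \,\},
\]

which is nonsmooth at the eigenvalue crossings $t = \pm 1/\sqrt{2}$, yet near its local maximizer $t = 0$ it coincides with $1 - t^2$ and is therefore smooth there; the general result guarantees only twice continuous differentiability with a Lipschitz second derivative.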