A Different Perspective on the Stochastic Convex Feasibility Problem

We analyze a simple randomized subgradient method for approximating solutions to stochastic systems of convex functional constraints, the only input to the algorithm being the minibatch size. By introducing a new notion of what it means for a point to approximately solve the constraints, determining bounds on the expected number of iterations reduces to …
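
A minimal sketch in the spirit of the description above, with details that are assumptions of this sketch rather than the paper's algorithm: the feasibility system is a toy collection of affine inequalities a_i·x ≤ b_i, the update is a Polyak-type subgradient step on the average violation over a sampled minibatch, and the paper's notion of approximate feasibility is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic feasibility system: find x with a_i @ x <= b_i for all i,
# where constraints are only ever seen through sampled minibatches.
m, d, batch = 1000, 20, 8
A = rng.normal(size=(m, d))
x_feas = rng.normal(size=d)                      # a strictly feasible point
b = A @ x_feas + 0.1                             # guarantees the system is solvable

def minibatch_step(x, idx):
    """Polyak-type subgradient step on the average violation over one minibatch."""
    viol = np.maximum(A[idx] @ x - b[idx], 0.0)
    if viol.max() == 0.0:
        return x                                 # minibatch already satisfied
    g = A[idx][viol > 0].sum(axis=0) / len(idx)  # subgradient of the mean violation
    return x - (viol.mean() / (g @ g)) * g

x = rng.normal(size=d)
for t in range(3000):
    idx = rng.choice(m, size=batch, replace=False)
    x = minibatch_step(x, idx)

print("max violation over all constraints:", np.max(A @ x - b))
```

Note that, as in the abstract, the minibatch size is the only tuning input here: the Polyak-type step length is computed from the observed violations.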

Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality

Nonsmooth optimization problems arising in practice, whether in signal processing, statistical estimation, or modern machine learning, tend to exhibit beneficial smooth substructure: their domains stratify into “active manifolds” of smooth variation, which common proximal algorithms “identify” in finite time. Identification then entails a transition to smooth dynamics, and permits the use of second-order information for …
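
A textbook illustration of identification, with an instance and parameters made up for the sketch: proximal gradient (ISTA) on a small lasso problem. The support of the iterates, i.e. the active manifold of the ℓ1 norm, stabilizes after finitely many iterations, after which the method effectively runs on a smooth problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Proximal gradient (ISTA) on a small lasso instance; track when the support
# of the iterates (the active manifold of the l1 norm) stops changing.
n, d = 60, 30
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=n)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth part

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(d)
supports = []
for k in range(300):
    x = prox_l1(x - (A.T @ (A @ x - y)) / L, lam / L)
    supports.append(tuple(np.flatnonzero(x)))

changes = [k for k in range(1, len(supports)) if supports[k] != supports[k - 1]]
last_change = changes[-1] if changes else 0
print("support stabilized after iteration", last_change, "->", supports[-1])
```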

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

By exploiting double-penalty terms for the primal subproblem, we develop a novel relaxed augmented Lagrangian method for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve a general multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. …
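
For orientation, here is the classical single-penalty augmented Lagrangian iteration on an equality-constrained quadratic toy problem; the relaxed double-penalty variant described above modifies the primal subproblem and is not reproduced here. The problem data and the penalty parameter rho are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical augmented Lagrangian method for  min 0.5*||x - c||^2  s.t.  A x = b.
# The quadratic objective keeps the x-update in closed form (a linear solve).
d, m = 10, 4
A = rng.normal(size=(m, d))
b = rng.normal(size=m)
c = rng.normal(size=d)
rho = 1.0                                        # penalty parameter (fixed here)

x = np.zeros(d)
lam = np.zeros(m)
for k in range(100):
    # x-update: argmin 0.5*||x - c||^2 + lam @ (A x - b) + (rho/2)*||A x - b||^2
    H = np.eye(d) + rho * A.T @ A
    x = np.linalg.solve(H, c - A.T @ lam + rho * A.T @ b)
    # dual ascent on the multiplier
    lam = lam + rho * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```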

Appointment Scheduling for Medical Diagnostic Centers considering Time-sensitive Pharmaceuticals: A Dynamic Robust Optimization Approach

This paper studies optimal criteria for the appointment scheduling of outpatients in a medical imaging center. The main goal of this study is to coordinate the assignments of radiopharmaceuticals and the scheduling of outpatients on imaging scanners. We study a special case of a molecular imaging center that offers services for various diagnostic procedures for …

On Properties of Univariate Max Functions at Local Maximizers

More than three decades ago, Boyd and Balakrishnan established a regularity result for the two-norm of a transfer function at maximizers. Their result extends easily to the statement that the maximum eigenvalue of a univariate real analytic Hermitian matrix family is twice continuously differentiable, with Lipschitz second derivative, at all local maximizers, a property that …
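
A small numerical illustration of the statement, not of the proof: a hand-picked analytic 2×2 symmetric family whose eigenvalues coalesce at t = 0, which is a local maximizer of the largest eigenvalue. A central finite-difference check is consistent with a well-defined second derivative there, despite the coalescence.

```python
import numpy as np

# H(t) is real analytic in t; its two eigenvalues coalesce at t = 0, and t = 0
# is a local maximizer of lambda_max.  We finite-difference lambda_max at 0.
def H(t):
    return np.array([[1.0 - t**2,        t**3],
                     [t**3,        1.0 - 2.0 * t**2]])

def lam_max(t):
    return np.linalg.eigvalsh(H(t))[-1]

for h in [1e-2, 1e-3, 1e-4]:
    second = (lam_max(h) - 2.0 * lam_max(0.0) + lam_max(-h)) / h**2
    print(f"h = {h:.0e}:  finite-difference lambda_max''(0) ~= {second:.6f}")
```

The estimates settle near -2, matching the expansion lambda_max(t) = 1 - t^2 + O(t^4) for this toy family.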

Accelerated Stochastic Peaceman-Rachford Method for Empirical Risk Minimization

This work is devoted to studying an Accelerated Stochastic Peaceman-Rachford Splitting Method (AS-PRSM) for solving a family of structured empirical risk minimization problems. The objective function to be optimized is the sum of a possibly nonsmooth convex function and a finite sum of smooth convex component functions. The smooth subproblem in AS-PRSM is solved by a stochastic gradient method using variance reduction …
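
A sketch of the underlying splitting on a lasso instance, using the deterministic Peaceman-Rachford (symmetric ADMM) iteration with an exact linear-algebra x-update; AS-PRSM instead solves the smooth subproblem with a variance-reduced stochastic gradient method and adds acceleration, neither of which is modeled here. Problem data and parameters are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Deterministic Peaceman-Rachford splitting (symmetric ADMM, scaled dual form)
# on  min 0.5*||A x - y||^2 + lam*||z||_1  s.t.  x = z.
n, d = 80, 40
A = rng.normal(size=(n, d))
y = A @ (rng.normal(size=d) * (rng.random(d) < 0.2)) + 0.01 * rng.normal(size=n)
lam, beta = 0.1, 1.0

AtA, Aty = A.T @ A, A.T @ y
H = AtA + beta * np.eye(d)                       # x-update reduces to a linear solve
x = z = u = np.zeros(d)
for k in range(200):
    x = np.linalg.solve(H, Aty + beta * (z - u))                    # smooth subproblem
    u = u + (x - z)                                                 # first dual update
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / beta, 0)  # prox of the l1 term
    u = u + (x - z)                                                 # second dual update

print("objective:", 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.abs(z).sum())
```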

Exact Convergence Rates of Alternating Projections for Nontransversal Intersections

We study the exact convergence rate of the alternating projection method for the nontransversal intersection of a semialgebraic set and a linear subspace. If the linear subspace is a line, the exact rates are expressed by multiplicities of the defining polynomials of the semialgebraic set, or related power series. Our methods are also applied to …
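
A classic nontransversal example, purely illustrative: alternating projections between the unit disk and its tangent line y = 1. The intersection is the single point (0, 1), and the distance to it decays sublinearly, roughly like 1/sqrt(k), rather than at the linear rate typical of transversal intersections.

```python
import numpy as np

# Alternating projections between the unit disk {||p|| <= 1} and the tangent
# line {y = 1}; the intersection {(0, 1)} is nontransversal.
def proj_disk(p):
    r = np.linalg.norm(p)
    return p if r <= 1 else p / r

def proj_line(p):
    return np.array([p[0], 1.0])

p0 = np.array([2.0, 1.0])
for k in [10, 100, 1000, 10000]:
    q = p0.copy()
    for _ in range(k):
        q = proj_line(proj_disk(q))
    print(f"after {k:6d} iterations, distance to (0,1) = {np.linalg.norm(q - [0.0, 1.0]):.2e}")
```

For this instance the x-coordinate satisfies x_{k+1} = x_k / sqrt(1 + x_k^2), so 1/x_k^2 grows by exactly 1 per iteration and the distance behaves like 1/sqrt(k).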

The Sharpe predictor for fairness in machine learning

In machine learning (ML) applications, unfair predictions may discriminate against a minority group. Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of an ML model, which does not lead to the discovery of the complete landscape of the trade-offs among learning accuracy and …

An abstract convergence framework with application to inertial inexact forward-backward methods

In this paper we introduce a novel abstract descent scheme suited for the minimization of proper and lower semicontinuous functions. The proposed abstract scheme generalizes a set of properties that are crucial for the convergence of several first-order methods designed for nonsmooth nonconvex optimization problems. Such properties guarantee the convergence of the full sequence of …
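
One concrete member of the family of methods such a framework is designed to cover: an inertial forward-backward iteration (heavy-ball extrapolation, gradient step, projection) on nonnegative least squares. The problem, step size, and the fixed inertial weight are assumptions of this sketch; the abstract scheme above additionally accommodates inexact proximal steps and nonconvex objectives.

```python
import numpy as np

rng = np.random.default_rng(5)

# Inertial forward-backward on  min 0.5*||A x - y||^2  s.t.  x >= 0,
# where the prox step is projection onto the nonnegative orthant.
n, d = 100, 40
A = rng.normal(size=(n, d))
y = A @ np.abs(rng.normal(size=d)) + 0.05 * rng.normal(size=n)
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
alpha, step = 0.3, 1.0 / L                       # inertial weight, gradient step size

x = x_prev = np.zeros(d)
for k in range(500):
    v = x + alpha * (x - x_prev)                 # inertial extrapolation
    x_prev = x
    x = np.maximum(v - step * (A.T @ (A @ v - y)), 0.0)   # forward step + projection

print("objective:", 0.5 * np.linalg.norm(A @ x - y) ** 2, " min entry:", x.min())
```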

Full-low evaluation methods for derivative-free optimization

We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type …
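
A toy sketch of the two-iteration-type idea, with choices of our own: one iteration takes an expensive, gradient-like step built from central finite differences (roughly 2d function evaluations plus a backtracking line search), the next takes a cheap coordinate poll. The alternation rule, test function, and parameters are illustrative and not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):                                        # smooth test function (Rosenbrock-style)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def fd_grad(x, h=1e-6):
    """Central finite-difference gradient: ~2*d function evaluations."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def poll(x, step):
    """Coordinate poll: try +/- step along each coordinate, keep the best point."""
    best_x, best_f = x, f(x)
    for i in range(x.size):
        for s in (step, -step):
            cand = x.copy(); cand[i] += s
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
    return best_x

x, step = rng.normal(size=4), 0.5
for k in range(200):
    if k % 2 == 0:                               # "full evaluation" style iteration
        g, fx, t = fd_grad(x), f(x), 1.0
        while f(x - t * g) >= fx and t > 1e-12:  # simple backtracking line search
            t *= 0.5
        if f(x - t * g) < fx:
            x = x - t * g
    else:                                        # "low evaluation" style iteration
        x_new = poll(x, step)
        step = step if f(x_new) < f(x) else step / 2
        x = x_new

print("f(x) =", f(x), " x =", np.round(x, 3))
```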