Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these challenges, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the …
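
The excerpt does not spell out the method itself; as a loose illustration of the adaptive importance sampling ingredient alone, the Python sketch below (all names and modeling choices hypothetical, not from the paper) estimates a one-dimensional intractable normalizer Z = ∫ exp(log_p(x)) dx with a Gaussian proposal that is re-fitted to the weighted samples after each round:

import numpy as np

def adaptive_is_normalizer(log_p, mu0, sigma0, n_samples=5000, n_rounds=5, seed=0):
    """Estimate Z = integral of exp(log_p(x)) dx with a Gaussian proposal
    that is re-fitted to the weighted samples after each round."""
    rng = np.random.default_rng(seed)
    mu, sigma = mu0, sigma0
    for _ in range(n_rounds):
        x = rng.normal(mu, sigma, size=n_samples)
        # log density of the current Gaussian proposal at each sample
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
        log_w = log_p(x) - log_q                 # log importance weights
        w = np.exp(log_w - log_w.max())          # numerically stabilized
        w /= w.sum()
        # adapt the proposal by weighted moment matching
        mu = np.sum(w * x)
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2)) + 1e-12
    # Z estimate from the last round's unnormalized weights: E_q[p/q] = Z
    return np.exp(np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max())

# sanity check: unnormalized standard normal, so Z should be ~ sqrt(2*pi) ~ 2.5066
Z = adaptive_is_normalizer(lambda x: -0.5 * x ** 2, mu0=1.0, sigma0=2.0)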

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization …
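
The delayed-iterate model that such analyses typically target can be simulated serially; the sketch below (quadratic objective, delay bound tau, and step size all illustrative, not from the paper) runs gradient descent where each update may use a gradient that is up to tau iterations stale:

import numpy as np

def delayed_gradient_descent(grad, x0, step, tau, n_iters, seed=0):
    """Gradient descent in which each update may use a gradient evaluated
    at an iterate up to `tau` steps old -- the standard bounded-asynchrony
    model in parallel/distributed optimization."""
    rng = np.random.default_rng(seed)
    history = [np.asarray(x0, dtype=float)]
    for k in range(n_iters):
        delay = rng.integers(0, min(tau, k) + 1)   # stale by at most tau
        stale_x = history[k - delay]
        history.append(history[k] - step * grad(stale_x))
    return history[-1]

# illustrative strongly convex quadratic: f(x) = 0.5 * ||A x - b||^2
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x_final = delayed_gradient_descent(lambda x: A.T @ (A @ x - b),
                                   x0=np.zeros(2), step=0.1, tau=5, n_iters=500)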

Learning in Inverse Optimization: Incenter Cost, Augmented Suboptimality Loss, and Algorithms

In Inverse Optimization (IO), an expert agent solves an optimization problem parametric in an exogenous signal. From a learning perspective, the goal is to learn the expert’s cost function given a dataset of signals and corresponding optimal actions. Motivated by the geometry of the IO set of consistent cost vectors, we introduce the “incenter” concept, …
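
In one standard formulation of IO (my notation; the excerpt does not fix one), the expert's action for signal s_i and the set of cost vectors consistent with the data are

\[
\hat{x}_i \in \operatorname*{arg\,min}_{x \in \mathbb{X}(s_i)} \langle c, x \rangle,
\qquad
\mathcal{C} = \bigl\{ c : \langle c, \hat{x}_i \rangle \le \langle c, x \rangle
\;\; \forall x \in \mathbb{X}(s_i),\; i = 1, \dots, N \bigr\},
\]

so each signal-action pair contributes linear inequalities and the consistent set is a convex cone; the paper's "incenter" is, presumably, a particular choice of cost vector from this set motivated by its geometry.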

The complexity of first-order optimization methods from a metric perspective

A central tool for understanding first-order optimization algorithms is the Kurdyka-Łojasiewicz (KL) inequality. Standard approaches to such methods rely crucially on this inequality to leverage sufficient decrease conditions involving gradients or subgradients. However, the KL property fundamentally concerns not subgradients but rather “slope”, a purely metric notion. By highlighting this view, and avoiding any use of …
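
For reference, the slope is the standard metric notion (not specific to this paper)

\[
|\nabla f|(x) \;=\; \limsup_{y \to x,\ y \ne x} \frac{\max\{f(x) - f(y),\, 0\}}{d(x, y)},
\]

and the KL inequality at a point \(\bar{x}\) can be stated purely in these terms: for some concave desingularizer \(\varphi\) with \(\varphi(0) = 0\), one has \(|\nabla(\varphi \circ (f - f(\bar{x})))|(x) \ge 1\) whenever \(0 < f(x) - f(\bar{x}) < \eta\), with no mention of subgradients.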

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show a simple first-order method finds a feasible, ϵ-stationary point at a convergence rate of O(ϵ^{-4}) without relying on compactness or Constraint Qualification (CQ). When CQ holds, this convergence is measured by …
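
In symbols (my notation), the problem class is

\[
\min_{x} \; f(x) \quad \text{subject to} \quad g(x) \le 0,
\]

with f and g possibly nonsmooth and nonconvex; the O(ϵ^{-4}) count refers to the number of first-order oracle calls needed to reach a feasible ϵ-stationary point, where the precise stationarity measure (truncated in this excerpt) depends on whether CQ holds.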

A New Inexact Proximal Linear Algorithm with Adaptive Stopping Criteria for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose a new inexact proximal linear algorithm in which each subproblem is solved only approximately. Our contributions are two adaptive stopping criteria for the subproblem. The convergence behavior of the proposed methods is analyzed. Through experiments on …
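
For context, the standard composite formulation of robust phase retrieval and the prox-linear step it induces (as in the prox-linear literature; notation mine) are

\[
\min_{x \in \mathbb{R}^n} \; \frac{1}{m} \sum_{i=1}^{m} \bigl| (a_i^{\top} x)^2 - b_i \bigr|,
\]
\[
x^{k+1} \;\approx\; \operatorname*{arg\,min}_{y}\; \frac{1}{m} \sum_{i=1}^{m}
\bigl| (a_i^{\top} x^k)^2 + 2\,(a_i^{\top} x^k)\, a_i^{\top} (y - x^k) - b_i \bigr|
\;+\; \frac{1}{2t}\, \| y - x^k \|^2,
\]

where the ≈ marks the inexact inner solve of this convex subproblem; the paper's two adaptive stopping criteria (not reproduced here) govern when that inner solve may be terminated.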

A descent method for nonsmooth multiobjective optimization problems on Riemannian manifolds

In this paper, a descent method for nonsmooth multiobjective optimization problems on complete Riemannian manifolds is proposed. The objective functions are only assumed to be locally Lipschitz continuous, rather than convex as assumed in existing methods. A necessary condition for Pareto optimality in Euclidean space is generalized to the Riemannian setting. At every iteration, an acceptable …
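
The Euclidean necessary condition in question is the standard one for locally Lipschitz objectives \(f_1, \dots, f_m\), with \(\partial\) the Clarke subdifferential; the Riemannian version replaces the subdifferentials by their counterparts in the tangent space:

\[
x \ \text{(weakly) Pareto optimal} \;\Longrightarrow\;
0 \in \operatorname{conv}\Bigl( \textstyle\bigcup_{j=1}^{m} \partial f_j(x) \Bigr).
\]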

The alternating simultaneous Halpern-Lions-Wittmann-Bauschke algorithm for finding the best approximation pair for two disjoint intersections of convex sets

Given two nonempty and disjoint intersections of closed and convex subsets, we look for a best approximation pair relative to them, i.e., a pair of points, one in each intersection, attaining the minimum distance between the disjoint intersections. We propose an iterative process based on projections onto the subsets that generate the intersections. The process …
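
The paper's alternating simultaneous HLWB scheme is not reproduced in this excerpt; as a baseline, classical alternating projections between two disjoint closed convex sets already converge to a best approximation pair when the sets themselves are easy to project onto (the difficulty the paper addresses is that each set is an intersection, so only projections onto its generating subsets are available). A minimal Python sketch for two discs, all parameters illustrative:

import numpy as np

def project_disc(x, center, radius):
    """Euclidean projection onto a closed disc."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# two disjoint discs; the best approximation pair lies on the segment
# joining their centers
cA, rA = np.array([0.0, 0.0]), 1.0
cB, rB = np.array([4.0, 0.0]), 1.0

x = np.array([0.3, 0.9])
for _ in range(100):
    y = project_disc(x, cB, rB)   # closest point of B to the current x
    x = project_disc(y, cA, rA)   # closest point of A to that y
# (x, y) approximates the best approximation pair; here ||x - y|| ~ 2.0,
# the distance between the discs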

Numerical Methods for Convex Multistage Stochastic Optimization

Optimization problems involving sequential decisions in a stochastic environment have been studied in Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP). In this paper we concentrate mainly on the SP and SOC modelling approaches. In these frameworks there are natural situations when the considered problems are convex. The classical approach to sequential optimization …
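
In the SP framework, for instance, the problem and its cost-to-go functions take the standard nested form (notation mine):

\[
\min_{x_1 \in \mathcal{X}_1} \; f_1(x_1) + \mathbb{E}\bigl[ Q_2(x_1, \xi_2) \bigr],
\qquad
Q_t(x_{t-1}, \xi_t) \;=\; \min_{x_t \in \mathcal{X}_t(x_{t-1}, \xi_t)}
f_t(x_t, \xi_t) + \mathbb{E}\bigl[ Q_{t+1}(x_t, \xi_{t+1}) \,\big|\, \xi_t \bigr],
\]

with \(Q_{T+1} \equiv 0\); convexity of the data in the decisions makes each \(Q_t\) convex in \(x_{t-1}\), which is what cutting-plane-type numerical methods exploit.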

On the B-differential of the componentwise minimum of two affine vector functions

This paper focuses on the description and computation of the B-differential of the componentwise minimum of two affine vector functions. This issue arises in the reformulation of the linear complementarity problem with the Min C-function. The question has many equivalent formulations and we identify some of them in linear algebra, convex analysis and discrete geometry. …
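
For reference, with \(H(x) = \min(Ax + a,\, Bx + b)\) taken componentwise, the B-differential is the standard set of limiting Jacobians (the definition is not specific to this paper):

\[
\partial_B H(x) \;=\; \Bigl\{ \lim_{k \to \infty} H'(x_k) \;:\; x_k \to x,\ H \text{ differentiable at } x_k \Bigr\}.
\]

Row i of any such Jacobian is row i of A where \((Ax + a)_i < (Bx + b)_i\), row i of B where the inequality is reversed, and one of the two on the tie set where the affine pieces coincide; describing which of the up to \(2^p\) row combinations (p the number of ties) actually occur is the combinatorial question behind the equivalent formulations mentioned above.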