Error estimate for regularized optimal transport problems via Bregman divergence

Regularization by the Shannon entropy makes it possible to solve optimal transport problems on a finite set efficiently, if only approximately. This paper is concerned with optimal transport problems regularized via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error estimate becomes …
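Although the truncated abstract omits the algorithmic details, the Shannon-entropy case this work generalizes is typically solved by Sinkhorn iterations; the KL divergence appearing below is exactly the Bregman divergence generated by the Shannon entropy. A minimal sketch, assuming discrete marginals `a`, `b` and a cost matrix `C` (all names are illustrative, not from the paper):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropically regularized OT: min <C,P> + eps*KL(P | a b^T) over couplings."""
    K = np.exp(-C / eps)                 # Gibbs kernel derived from the cost
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)                  # rescale to match the row marginal a
        v = b / (K.T @ u)                # rescale to match the column marginal b
    P = u[:, None] * K * v[None, :]      # regularized transport plan
    return P, float(np.sum(P * C))       # plan and its (unregularized) cost

# Tiny usage example on uniform marginals and a random cost.
rng = np.random.default_rng(0)
a = np.full(4, 0.25); b = np.full(5, 0.2)
P, cost = sinkhorn(a, b, rng.random((4, 5)))
```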

Self-concordant Smoothing for Large-Scale Convex Composite Optimization

We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other of which may be nonsmooth. The key highlight of our approach is a natural property of the resulting problem’s structure, which provides us with a variable-metric selection method and a step-length selection …
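For context, a sketch of the setting the abstract describes, using the standard definition of self-concordance (the smoothing construction itself is the paper's contribution and is not reproduced here). The composite problem is

\[
\min_{x \in \mathbb{R}^n} \; f(x) + g(x),
\]

where \(f\) is smooth and convex and \(g\) is convex and possibly nonsmooth; \(g\) is replaced by a smooth approximation \(g_\mu\) built from a self-concordant function \(\varphi\), i.e. one satisfying

\[
|\varphi'''(t)| \le 2\,\varphi''(t)^{3/2} \quad \text{for all } t \in \operatorname{dom}\varphi .
\]

Self-concordance is what makes Newton-type variable-metric steps on the smoothed problem analyzable without Lipschitz assumptions on the Hessian.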

Closing Duality Gaps of SDPs through Perturbation

Let \(({\bf P},{\bf D})\) be a primal-dual pair of SDPs with a nonzero finite duality gap. Under such circumstances, \({\bf P}\) and \({\bf D}\) are weakly feasible, and if we perturb the problem data to recover strong feasibility, the (common) optimal value function \(v\) as a function of the perturbation is not well-defined at …
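A classical textbook instance of the phenomenon the abstract starts from (included here as background, not taken from the paper): the SDP

\[
\inf_{x \in \mathbb{R}^2} \; x_1
\quad \text{s.t.} \quad
\begin{pmatrix} 0 & x_1 & 0 \\ x_1 & x_2 & 0 \\ 0 & 0 & x_1 + 1 \end{pmatrix} \succeq 0
\]

has primal optimal value \(0\) (the zero diagonal entry forces \(x_1 = 0\)), while its dual optimal value is \(-1\), a finite nonzero duality gap. Neither problem admits a strictly feasible point, so both are only weakly feasible, which is exactly the situation where perturbing the data to recover strong feasibility becomes delicate.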

On Generalization and Regularization via Wasserstein Distributionally Robust Optimization

Wasserstein distributionally robust optimization (DRO) has found success in operations research and machine learning applications as a powerful means to obtain solutions with favourable out-of-sample performance. Two compelling explanations for this success are the generalization bounds derived from Wasserstein DRO and the equivalence between Wasserstein DRO and the regularization scheme commonly applied in machine learning. …
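To make the second explanation concrete, a well-known special case from this literature (stated as background, not as the paper's result): for a linear predictor with a convex \(L\)-Lipschitz loss \(\ell\), a type-1 Wasserstein ball of radius \(\rho\) around the empirical distribution \(\widehat{P}_n\) that perturbs features only (ground norm \(\|\cdot\|\), dual norm \(\|\cdot\|_*\)),

\[
\sup_{Q:\, W_1(Q, \widehat{P}_n) \le \rho} \mathbb{E}_Q\!\left[\ell(\beta^\top x, y)\right]
= \mathbb{E}_{\widehat{P}_n}\!\left[\ell(\beta^\top x, y)\right] + \rho\, L\, \|\beta\|_* ,
\]

so Wasserstein DRO coincides exactly with dual-norm regularized empirical risk minimization.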

Tractable Robust Supervised Learning Models

At the heart of supervised learning is a minimization problem whose objective evaluates a set of training data through a loss function that penalizes poor fitting and a regularization function that penalizes over-fitting to the training data. More recently, data-driven robust-optimization-based learning models have provided an intuitive robustness perspective on regularization. …
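One classical illustration of this robustness view of regularization (a known result in this line of work, not necessarily one of the models proposed in the paper): robust least squares under feature-wise bounded perturbations recovers the lasso,

\[
\min_{\beta}\; \max_{\|\delta_j\|_2 \le c,\; j=1,\dots,p}\; \big\| y - (X + \Delta)\beta \big\|_2
\;=\; \min_{\beta}\; \| y - X\beta \|_2 + c\,\|\beta\|_1 ,
\]

where \(\delta_j\) denotes the \(j\)-th column of the perturbation matrix \(\Delta\). The regularizer thus emerges as the price of immunizing the fit against data uncertainty rather than as an ad hoc penalty.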

Cardinality Minimization, Constraints, and Regularization: A Survey

We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function. We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, and machine learning. The paper discusses general-purpose modeling techniques and …
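A standard modeling technique of the kind such surveys cover (shown here as a generic illustration): when a valid bound \(\|x\|_\infty \le M\) is known, the cardinality objective admits a mixed-integer reformulation with binary indicator variables,

\[
\min \; \|x\|_0
\quad\Longleftrightarrow\quad
\min \; \sum_{i=1}^{n} z_i
\quad \text{s.t.} \quad -M z_i \le x_i \le M z_i,\;\; z_i \in \{0,1\},
\]

with any original constraints on \(x\) carried over unchanged; \(z_i = 0\) forces \(x_i = 0\), so \(\sum_i z_i\) counts the nonzero entries at optimality.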

New efficient approach in finding a zero of a maximal monotone operator

In this paper, we provide a new, efficient approach for finding a zero of a maximal monotone operator under very mild assumptions. Using a regularization technique and the proximal point algorithm, we construct a sequence that converges strongly to a solution with at least a linear convergence rate.
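For background, the proximal point iteration referred to in the abstract (the specific regularization that yields strong convergence is the paper's contribution and is not reproduced here): given a maximal monotone operator \(T\) on a Hilbert space, one iterates

\[
x_{k+1} = J_{\lambda_k T}(x_k) := (I + \lambda_k T)^{-1}(x_k), \qquad \lambda_k > 0,
\]

where the resolvent \(J_{\lambda_k T}\) is single-valued and firmly nonexpansive by Minty's theorem. The classical method converges only weakly in general, which is why regularized variants are of interest for obtaining strong convergence.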

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new …
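For reference, the proximal map of the usual group sparsity-inducing regularizer \(\lambda \sum_g \|x_g\|_2\), which methods for this problem class evaluate as a subproblem. A minimal sketch with illustrative names (the subspace acceleration scheme itself is the paper's contribution and is not shown):

```python
import numpy as np

def prox_group_l2(x, groups, lam):
    """Block soft-thresholding: prox of lam * sum_g ||x_g||_2."""
    out = x.copy()
    for g in groups:                       # g: index array of one group
        norm_g = np.linalg.norm(x[g])
        if norm_g <= lam:
            out[g] = 0.0                   # the whole group is zeroed out
        else:
            out[g] = (1.0 - lam / norm_g) * x[g]
    return out

# Usage: two groups of a length-5 vector; the small group is zeroed.
x = np.array([3.0, 4.0, 0.1, -0.1, 0.05])
print(prox_group_l2(x, [np.arange(2), np.arange(2, 5)], lam=1.0))
```

Zeroing entire blocks at once is what makes the resulting models easier to interpret: a group of features is either kept together or discarded together.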

High-order Evaluation Complexity of a Stochastic Adaptive Regularization Algorithm for Nonconvex Optimization Using Inexact Function Evaluations and Randomly Perturbed Derivatives

A stochastic adaptive regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing strong approximate minimizers of any order for inexpensively constrained smooth optimization problems. For an objective function with Lipschitz continuous p-th derivative in a convex neighbourhood of the feasible set and given an arbitrary optimality order q, it …
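The adaptive regularization framework the abstract builds on constructs, at each iteration, a model of the form (standard in this literature; the handling of noisy derivatives is the paper's contribution):

\[
m_k(s) = T_{f,p}(x_k, s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]

where \(T_{f,p}(x_k, s)\) is the \(p\)-th order Taylor expansion of \(f\) around \(x_k\) and the regularization weight \(\sigma_k > 0\) is adapted across iterations; a trial step is accepted when the achieved decrease in \(f\) is a sufficient fraction of the decrease predicted by \(m_k\).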

A Limiting Analysis on Regularization of Singular SDP and its Implication to Infeasible Interior-point Algorithms

We consider primal-dual pairs of semidefinite programs and assume that they are ill-posed, i.e., both primal and dual are either weakly feasible or weakly infeasible. Under such circumstances, strong duality may break down and the primal and dual might have a nonzero duality gap. Nevertheless, there are arbitrarily small perturbations to the problem data which …
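One simple way to formalize the perturbation analysis described here (a generic illustration; the paper's precise perturbation scheme may differ): relax the primal conic constraint by \(\varepsilon\) and study the value function

\[
v(\varepsilon) = \inf_{X} \big\{ \langle C, X\rangle : \mathcal{A}(X) = b,\; X \succeq -\varepsilon I \big\}, \qquad \varepsilon > 0,
\]

whose limit as \(\varepsilon \downarrow 0\) need not equal the unperturbed optimal value when the pair is only weakly feasible or weakly infeasible; this mirrors what an infeasible interior-point method implicitly computes as it drives its perturbation to zero.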