Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The structured saddle-point problem involving the infimal convolution in real Hilbert spaces finds applications in many areas of applied mathematics. To solve this generic problem, we develop a stochastic primal-dual splitting algorithm with loopless variance reduction. We first prove the weak almost sure convergence of the iterates. We then demonstrate that our algorithm achieves … Read more
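
As a rough illustration of the loopless variance-reduction idea mentioned above (and not the paper's primal-dual splitting method, which handles infimal convolutions in Hilbert spaces), the sketch below shows an L-SVRG-style estimator on a toy finite-sum least-squares problem; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a loopless variance-reduced (L-SVRG-style) gradient
# estimator on a toy finite-sum problem f(x) = (1/n) * sum_i 0.5*||A_i x - b_i||^2.
# This is NOT the paper's algorithm; it only illustrates the loopless
# reference-point refresh that replaces the inner loop of SVRG.

rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    return A[i] * (A[i] @ x - b[i])        # gradient of the i-th component

def full_grad(x):
    return A.T @ (A @ x - b) / n           # full-batch gradient

x = np.zeros(d)
w = x.copy()                               # reference point
g_ref = full_grad(w)                       # full gradient at the reference point
step, p = 5e-3, 1.0 / n                    # stepsize and refresh probability

for _ in range(4000):
    i = rng.integers(n)
    g = grad_i(x, i) - grad_i(w, i) + g_ref   # variance-reduced estimator
    x -= step * g
    if rng.random() < p:                      # "loopless" refresh of the reference
        w = x.copy()
        g_ref = full_grad(w)

print("gradient norm:", np.linalg.norm(full_grad(x)))
```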

Budget-Constrained Maximization of “Cobb-Douglas with Linear Components” Utility Function

In what follows, we provide the demand analysis associated with budget-constrained linear utility maximization for each of several categories of goods, with the marginal rate of consumption expenditure (as a share of wealth) being a positive constant less than or equal to one. The marginal rate of consumption expenditure is endogenously determined by a budget-constrained “Cobb-Douglas … Read more
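
For context, the sketch below computes the textbook budget-constrained Cobb-Douglas demand (the plain case, without the paper's "linear components" extension); the function name and the numerical values are illustrative assumptions.

```python
import numpy as np

# Textbook budget-constrained Cobb-Douglas demand: maximizing prod_i x_i**a_i
# subject to p @ x <= w spends the fixed share a_i / sum(a) of wealth w on
# good i.  This is the standard closed form, not the paper's extended model.

def cobb_douglas_demand(a, p, w):
    a = np.asarray(a, dtype=float)      # Cobb-Douglas exponents
    p = np.asarray(p, dtype=float)      # prices
    shares = a / a.sum()                # expenditure share of each good
    return shares * w / p               # quantity demanded of each good

a = [0.5, 0.3, 0.2]                     # illustrative exponents
p = [2.0, 1.0, 4.0]                     # illustrative prices
w = 100.0                               # wealth
x = cobb_douglas_demand(a, p, w)
print("demands:", x, "expenditures:", x * np.asarray(p))
```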

DC programming approach for solving a class of bilevel partial facility interdiction problems

We propose a new approach based on DC programming for finding a solution of the partial facility interdiction problem, which belongs to the class of bilevel programming. This model was first considered in the work of Aksen et al. [1] with a heuristic algorithm named multi-start simplex search (MSS). However, because of the large number of … Read more
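
To recall the generic DC programming iteration (DCA) that such an approach builds on, and not the paper's bilevel interdiction model itself, here is a minimal sketch on a 1-D toy problem; all values are illustrative assumptions.

```python
import numpy as np

# Minimal DCA sketch: minimize g(x) - h(x) with g, h convex, by linearizing
# h at the current iterate and minimizing the convex surrogate
# g(x) - grad_h(x_k) * x.  Toy example: g(x) = x**4, h(x) = 2*x**2, so the
# DC objective is x**4 - 2*x**2 with local minimizers at x = +/- 1.

def grad_h(x):
    return 4.0 * x

def argmin_surrogate(s):
    # minimize x**4 - s*x  =>  4*x**3 = s  =>  x = sign(s) * (|s|/4)**(1/3)
    return np.sign(s) * (abs(s) / 4.0) ** (1.0 / 3.0)

x = 0.1
for _ in range(50):
    x = argmin_surrogate(grad_h(x))     # DCA update

print("DCA iterate:", x)                # converges to the critical point x = 1
```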

M-stationarity of Local Minimizers of MPCCs and Convergence of NCP-based Methods

This paper focuses on solving mathematical programs with complementarity constraints (MPCCs) without assuming MPCC-LICQ or lower-level strict complementarity at a solution. We show that a local minimizer of an MPCC is “piecewise M-stationary” under MPCC-GCQ; furthermore, every weakly stationary point of an MPCC is B-stationary if MPCC-ACQ holds. For the Bounding Algorithm proposed in … Read more
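
For readers less familiar with the terminology, the standard MPCC form (generic notation, not specific to this paper) is recalled below.

```latex
% Standard MPCC form (generic notation):
\begin{align*}
  \min_{x}\;\; & f(x) \\
  \text{s.t.}\;\; & g(x) \le 0, \quad h(x) = 0, \\
  & 0 \le G(x) \;\perp\; H(x) \ge 0,
\end{align*}
% where the complementarity constraint means G_i(x) \ge 0, H_i(x) \ge 0 and
% G_i(x) H_i(x) = 0 for every i.  Weak, M-, and B-stationarity differ in the
% sign conditions imposed on the multipliers of G and H on the biactive set
% \{ i : G_i(x) = H_i(x) = 0 \}.
```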

Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms

We present Zeroth-order Riemannian Averaging Stochastic Approximation (\texttt{Zo-RASA}) algorithms for stochastic optimization on Riemannian manifolds. We show that \texttt{Zo-RASA} achieves optimal sample complexities for generating $\epsilon$-approximate first-order stationary solutions using only one-sample or constant-order batches in each iteration. Our approach employs Riemannian moving-average stochastic gradient estimators and a novel Riemannian-Lyapunov analysis technique for convergence analysis. … Read more
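
A Euclidean sketch of the two ingredients named in the abstract, a zeroth-order gradient estimator and a moving average of past estimates, is given below; the paper's \texttt{Zo-RASA} works on Riemannian manifolds with retractions, which this toy example does not attempt to reproduce, and all parameter values are assumptions.

```python
import numpy as np

# Euclidean sketch only: a two-point zeroth-order gradient estimator combined
# with a moving average of past estimates, using one random direction per step.

rng = np.random.default_rng(0)
d = 20
f = lambda x: 0.5 * np.sum(x ** 2)       # toy smooth objective

x = rng.normal(size=d)
m = np.zeros(d)                          # moving-average gradient estimate
mu, step, beta = 1e-4, 0.05, 0.1         # smoothing radius, stepsize, averaging weight

for _ in range(2000):
    u = rng.normal(size=d)
    # forward-difference zeroth-order gradient estimator along direction u
    g = (f(x + mu * u) - f(x)) / mu * u
    m = (1.0 - beta) * m + beta * g      # averaging of stochastic estimates
    x -= step * m

print("f(x) after zeroth-order descent:", f(x))
```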

Goldstein Stationarity in Lipschitz Constrained Optimization

We prove the first convergence guarantees for a subgradient method minimizing a generic Lipschitz function over generic Lipschitz inequality constraints. No smoothness or convexity (or weak convexity) assumptions are made. Instead, we utilize a sequence of recent advances in Lipschitz unconstrained minimization, which showed convergence rates of $O(1/(\delta\epsilon^3))$ toward reaching a “Goldstein” stationary point, that … Read more
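
For context, the standard (unconstrained) notion referenced here is the following: a point $x$ is a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ if

```latex
\[
  \partial_{\delta} f(x) := \operatorname{conv}\Bigl(\textstyle\bigcup_{y \in B_{\delta}(x)} \partial f(y)\Bigr),
  \qquad
  \operatorname{dist}\bigl(0,\ \partial_{\delta} f(x)\bigr) \le \epsilon ,
\]
% i.e. some convex combination of Clarke subgradients taken at points within
% distance \delta of x has norm at most \epsilon.
```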

Almost-sure convergence of iterates and multipliers in stochastic sequential quadratic optimization

Stochastic sequential quadratic optimization (SQP) methods for solving continuous optimization problems with nonlinear equality constraints have attracted attention recently, such as for solving large-scale data-fitting problems subject to nonconvex constraints. However, for a recently proposed subclass of such methods that is built on the popular stochastic-gradient methodology from the unconstrained setting, convergence guarantees have been … Read more
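
To illustrate what a single step of a stochastic SQP scheme for equality-constrained problems looks like (a generic textbook step, not the specific method analyzed in the paper), here is a small sketch; the toy problem and all parameter choices are assumptions.

```python
import numpy as np

# One generic stochastic SQP step for min f(x) s.t. c(x) = 0: given a
# stochastic gradient g, constraint value c, Jacobian J, and Hessian model H,
# the step d and multiplier estimate y solve the KKT system
#   [H  J^T; J  0] [d; y] = [-g; -c].

rng = np.random.default_rng(0)

def sqp_step(g, c, J, H):
    n, m = len(g), len(c)
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]                   # step d, multipliers y

# Toy problem: min 0.5*||x||^2  s.t.  x[0] + x[1] - 1 = 0 (solution (0.5, 0.5)).
x = np.array([2.0, -1.0])
for _ in range(200):
    g = x + 0.1 * rng.normal(size=2)          # stochastic gradient of f
    c = np.array([x[0] + x[1] - 1.0])         # constraint value
    J = np.array([[1.0, 1.0]])                # constraint Jacobian
    d, y = sqp_step(g, c, J, np.eye(2))       # identity Hessian model
    x = x + 0.1 * d                           # damped step (stepsize 0.1)

print("iterate:", x)   # near (0.5, 0.5), up to stochastic-gradient noise
```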

On the Computation of Restricted Normal Cones

Restricted normal cones are of interest, for instance, in the theory of local error bounds, where they have recently been used to characterize the existence of a constrained Lipschitzian error bound. In this paper, we establish relations between two concepts for restricted normals. The first of these concepts was introduced in the late … Read more

Constraint qualifications and strong global convergence properties of an augmented Lagrangian method on Riemannian manifolds

In the past years, augmented Lagrangian methods have been successfully applied to several classes of non-convex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Much research has … Read more
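
As a reminder of the classical (Euclidean) augmented Lagrangian iteration that such methods extend to manifolds, here is a minimal sketch for a single equality constraint; the toy problem and all parameters are assumptions, and no Riemannian machinery is shown.

```python
import numpy as np

# Classical augmented Lagrangian for min f(x) s.t. c(x) = 0, shown only to
# recall the inner minimization and multiplier update.  Toy data:
# f(x) = (x0 - 2)^2 + (x1 - 1)^2, c(x) = x0 + x1 - 1; the solution is (1, 0).

grad_f = lambda x: 2.0 * (x - np.array([2.0, 1.0]))
c = lambda x: x[0] + x[1] - 1.0
grad_c = np.array([1.0, 1.0])

x = np.zeros(2)
lam, rho = 0.0, 10.0                  # multiplier estimate and penalty parameter
inner_step = 0.5 / (1.0 + rho)        # stable stepsize for the inner quadratic

for _ in range(10):                   # outer (multiplier) iterations
    for _ in range(500):              # inner: minimize L_rho(., lam) by gradient descent
        g = grad_f(x) + (lam + rho * c(x)) * grad_c
        x -= inner_step * g
    lam += rho * c(x)                 # first-order multiplier update

print("solution:", x, "multiplier:", lam, "constraint violation:", c(x))
```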

Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the … Read more
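
The underlying Monte Carlo ingredient that such a method adapts, a plain (non-adaptive) importance-sampling estimate of an intractable normalizer with a fixed proposal, is sketched below; the target, proposal, and sample sizes are illustrative assumptions and this is not the paper's adaptive surrogation scheme.

```python
import numpy as np

# Basic importance-sampling estimate of an intractable normalizer
# Z(theta) = \int exp(-U(x; theta)) dx, with a fixed Gaussian proposal.

rng = np.random.default_rng(0)

def U(x, theta):
    # toy 1-D unnormalized negative log-density, so that
    # exp(-U) = exp(-0.5 * theta * x**2) and Z(theta) = sqrt(2*pi/theta)
    return 0.5 * theta * x ** 2

def estimate_Z(theta, n_samples=100_000, proposal_std=2.0):
    x = rng.normal(scale=proposal_std, size=n_samples)
    log_q = -0.5 * (x / proposal_std) ** 2 - np.log(proposal_std * np.sqrt(2 * np.pi))
    log_w = -U(x, theta) - log_q          # log importance weights
    return np.exp(log_w).mean()           # Monte Carlo estimate of Z(theta)

theta = 1.5
print("IS estimate:", estimate_Z(theta), "exact:", np.sqrt(2 * np.pi / theta))
```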