Practicable Robust Stochastic Optimization under Divergence Measures

We seek to provide practicable approximations of the two-stage robust stochastic optimization (RSO) model when its ambiguity set is constructed with an f-divergence radius. These models are known to be numerically challenging to various degrees, depending on the choice of the f-divergence function. The numerical challenges are even more pronounced under mixed-integer first-stage decisions. In … Read more
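For the special case where the f-divergence is the Kullback-Leibler divergence, the worst-case expectation over the divergence ball admits a well-known one-dimensional dual reformulation. The sketch below illustrates that dual for a toy loss vector; the sample losses, radius, and crude grid search are illustrative choices, not anything from the paper above.

```python
import math

def kl_worst_case(losses, radius):
    """Worst-case expected loss over a KL-divergence ball of the given
    radius around the empirical distribution, via the dual formula
        sup_{KL(Q||P) <= r} E_Q[l] = inf_{a > 0} a*log E_P[exp(l/a)] + a*r.
    The 1-D dual is minimized by a crude log-spaced grid search (sketch only)."""
    m = max(losses)  # shift by the max loss for numerical stability
    n = len(losses)

    def dual(a):
        return a * math.log(sum(math.exp((l - m) / a) for l in losses) / n) + m + a * radius

    candidates = [10 ** (k / 20) for k in range(-60, 81)]  # a from 1e-3 to 1e4
    return min(dual(a) for a in candidates)

losses = [1.0, 2.0, 3.0, 4.0]
print(kl_worst_case(losses, 0.0))  # radius 0: recovers (roughly) the plain mean
print(kl_worst_case(losses, 0.5))  # positive radius: between the mean and the max loss
```

As the radius grows, the worst-case value moves from the empirical mean toward the maximum loss, which is the usual picture for divergence-ball ambiguity sets.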

The Rate of Convergence of Augmented Lagrange Method for a Composite Optimization Problem

In this paper we analyze the rate of local convergence of the augmented Lagrange method for solving optimization problems with equality constraints and the objective function expressed as the sum of a convex function and a twice continuously differentiable function. The presence of the non-smoothness of the convex function in the objective requires extensive tools … Read more
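The classical augmented Lagrange iteration for equality constraints alternates an (inexact) minimization of the augmented Lagrangian with a multiplier update. A minimal sketch on a toy smooth problem, not the composite problem analyzed in the paper, with illustrative penalty and step-size choices:

```python
def alm(rho=10.0, outer=20, inner=200, step=0.01):
    """Method of multipliers on the toy problem
        minimize  x1^2 + x2^2   subject to  x1 + x2 - 1 = 0,
    whose solution is x* = (0.5, 0.5) with multiplier lam* = -1.
    Augmented Lagrangian: L(x; lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2."""
    x = [0.0, 0.0]
    lam = 0.0
    for _ in range(outer):
        # inner loop: plain gradient descent on the augmented Lagrangian
        for _ in range(inner):
            c = x[0] + x[1] - 1.0
            g = [2 * x[0] + lam + rho * c, 2 * x[1] + lam + rho * c]
            x = [x[0] - step * g[0], x[1] - step * g[1]]
        lam += rho * (x[0] + x[1] - 1.0)  # first-order multiplier update
    return x, lam

x, lam = alm()
print(x, lam)  # approaches (0.5, 0.5) and -1
```

On this quadratic instance the multiplier error contracts by a fixed factor per outer iteration, a toy illustration of the kind of local linear rate the paper studies in the far more general composite setting.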

On the Moreau-Yosida regularization of the vector k-norm related functions

In this paper, we conduct a thorough study on the first and second order properties of the Moreau-Yosida regularization of the vector $k$-norm function, the indicator function of its epigraph, and the indicator function of the vector $k$-norm ball. We start with settling the vector $k$-norm case via applying the existing breakpoint searching algorithms to … Read more
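The vector k-norm is the sum of the k largest absolute entries, interpolating between the infinity-norm (k = 1) and the 1-norm (k = n). The general proximal point requires the breakpoint-searching machinery discussed above; the sketch below only illustrates the easy extreme k = n, where the k-norm is the l1-norm and the Moreau envelope has the closed-form Huber expression, with soft-thresholding as the proximal point.

```python
def knorm(x, k):
    """Vector k-norm: sum of the k largest absolute entries of x."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

def moreau_env_l1(x, mu):
    """Moreau envelope of the l1-norm (the k = len(x) case of the k-norm):
        min_y ||y||_1 + ||y - x||^2 / (2*mu),
    which evaluates coordinate-wise to the Huber function."""
    val = 0.0
    for v in x:
        if abs(v) <= mu:
            val += v * v / (2 * mu)   # quadratic zone (prox shrinks v to 0)
        else:
            val += abs(v) - mu / 2    # linear zone (prox soft-thresholds v)
    return val

print(knorm([3.0, -1.0, 2.0], 1))        # infinity-norm: 3.0
print(knorm([3.0, -1.0, 2.0], 2))        # 3 + 2 = 5.0
print(moreau_env_l1([3.0, -1.0, 2.0], 0.5))  # smooth underestimate of ||x||_1 = 6
```

The envelope is a smooth minorant of the norm that tightens as mu decreases, which is the basic first-order picture behind the regularization studied in the paper.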