Regret Minimization in Stochastic Non-Convex Learning via a Proximal-Gradient Approach

Motivated by applications in machine learning and operations research, we study regret minimization with stochastic first-order oracle feedback in online, constrained, and possibly non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach, so we focus on a local regret measure defined via a proximal-gradient residual mapping. To achieve no (local) …
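
The residual mapping the abstract refers to can be made concrete. Below is a minimal sketch (not the paper's method) of the standard proximal-gradient residual $G_\eta(x) = \frac{1}{\eta}\left(x - \mathrm{prox}_{\eta g}(x - \eta \nabla f(x))\right)$, whose norm vanishes exactly at stationary points; the function names and the particular choices of $f$ and $g$ are illustrative assumptions.

    import numpy as np

    def prox_grad_residual(x, grad_f, prox_g, eta=0.1):
        # G_eta(x) = (x - prox_{eta g}(x - eta * grad_f(x))) / eta;
        # ||G_eta(x)|| near 0 marks an (approximately) stationary point,
        # and a local regret can sum these residual norms over time.
        return (x - prox_g(x - eta * grad_f(x), eta)) / eta

    # Illustrative choices (assumptions): f(x) = 0.5 ||x||^2, and g the
    # indicator of the unit box, whose prox is a simple clip/projection.
    grad_f = lambda x: x
    prox_g = lambda v, eta: np.clip(v, -1.0, 1.0)
    print(prox_grad_residual(np.array([0.5, -2.0]), grad_f, prox_g))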

On the Convergence to Stationary Points of Deterministic and Randomized Feasible Descent Directions Methods

This paper studies the class of nonsmooth nonconvex problems in which the difference between a continuously differentiable function and a convex nonsmooth function is minimized over linear constraints. Our goal is to attain a point satisfying the necessary optimality condition of stationarity, defined as the lack of feasible descent directions. Although elementary in smooth optimization, this …
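
To make the stationarity condition tangible: over linear constraints $Ax \le b$, the search for a feasible descent direction can be posed as a small linear program. The sketch below is one plausible such check, not the paper's algorithm; the helper name and the box bounds on $d$ are assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def feasible_descent_direction(x, grad, A, b, tol=1e-8):
        # Solve min_d grad^T d  s.t.  A(x + d) <= b,  -1 <= d_i <= 1.
        # An optimal value of (numerically) zero means no feasible descent
        # direction exists, i.e. x is stationary in the sense above.
        res = linprog(grad, A_ub=A, b_ub=b - A @ x,
                      bounds=[(-1.0, 1.0)] * len(x))
        return res.x if res.fun < -tol else None

    A = np.array([[1.0, 0.0]])          # feasible set: x_1 <= 1
    x = np.array([1.0, 0.0])            # constraint active at x
    d = feasible_descent_direction(x, np.array([1.0, 1.0]), A, np.array([1.0]))
    print(d)                            # e.g. [-1, -1]: a descent direction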

Finding Second-Order Stationary Points in Constrained Minimization: A Feasible Direction Approach

This paper introduces a method for computing points satisfying the second-order necessary optimality conditions in constrained nonconvex minimization. The method comprises two independent steps corresponding to the first- and second-order conditions. The first-order step is a generic closed-map algorithm, which can be chosen from a variety of first-order algorithms, making it … The second-order …
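
As a rough illustration of the second-order step (a sketch under simplifying assumptions that ignores the constraints the paper's feasible-direction machinery handles): one can test the second-order necessary condition by eigendecomposition of the Hessian and move along a negative-curvature direction when one exists.

    import numpy as np

    def negative_curvature_step(x, hess, alpha=0.1, tol=1e-8):
        # If the Hessian has an eigenvalue below -tol, its eigenvector is
        # a negative-curvature direction (either sign decreases a quadratic
        # model at a critical point); otherwise x already satisfies the
        # unconstrained second-order necessary condition.
        vals, vecs = np.linalg.eigh(hess(x))
        return x + alpha * vecs[:, 0] if vals[0] < -tol else x

    # Illustrative saddle: f(x) = x_1^2 - x_2^2 has Hessian diag(2, -2).
    hess = lambda x: np.diag([2.0, -2.0])
    print(negative_curvature_step(np.zeros(2), hess))   # escapes the saddle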

On the Convergence to Stationary Points of Deterministic and Randomized Feasible Descent Directions Methods

We study the class of nonsmooth nonconvex problems in which the objective is to minimize the difference between a continuously differentiable function (possibly nonconvex) and a convex (possibly nonsmooth) function over a convex polytope. This general class contains many types of problems, including difference of convex functions (DC) problems, and as such, can be used …
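
For intuition on how such DC structure is exploited, here is a minimal sketch (an illustrative toy, not the paper's deterministic or randomized methods) of the classical convexification step: linearize the concave part with a subgradient and minimize the resulting convex model over the polytope. The polytope, the objective, and the closed-form solve are all assumed choices.

    import numpy as np

    def dc_step(x, a):
        # Toy DC program: min 0.5||x - a||^2 - ||x||_1 over the box [0,1]^n.
        # Linearizing the concave part -||x||_1 with a subgradient
        # s in sign(x) leaves the convex model 0.5||y - a||^2 - s^T y,
        # whose box-constrained minimizer is clip(a + s, 0, 1).
        s = np.sign(x)
        return np.clip(a + s, 0.0, 1.0)

    x = np.array([0.5, 0.0, 0.5])
    for _ in range(5):                       # iterate to a stationary point
        x = dc_step(x, a=np.array([0.3, -0.2, 0.9]))
    print(x)                                 # -> [1, 0, 1]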

Proximal Mapping for Symmetric Penalty and Sparsity

This paper studies a class of problems consisting of minimizing a continuously differentiable function penalized with the so-called $\ell_0$-norm over a symmetric set. These problems are hard to solve, yet prominent in many fields and applications. We first study the proximal mapping with respect to the $\ell_0$-norm over symmetric sets, and provide an efficient method …
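
For reference, the unconstrained case of this proximal mapping has a simple closed form (hard thresholding); the sketch below covers only that special case, as an assumption, since the paper's contribution is the harder prox over general symmetric sets.

    import numpy as np

    def prox_l0(y, lam):
        # argmin_x 0.5||x - y||^2 + lam * ||x||_0: keeping y_i costs lam,
        # zeroing it costs 0.5 * y_i^2, so keep y_i iff 0.5 * y_i^2 > lam.
        x = y.copy()
        x[0.5 * y**2 <= lam] = 0.0
        return x

    print(prox_l0(np.array([0.3, -1.5, 0.8]), lam=0.5))   # -> [0, -1.5, 0]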

On the Minimization Over Sparse Symmetric Sets: Projections, Optimality Conditions and Algorithms

We consider the problem of minimizing a general continuously differentiable function over symmetric sets under sparsity constraints. These types of problems are generally hard to solve, as the sparsity constraint introduces a combinatorial structure into the problem, rendering the feasible set nonconvex. We begin with a study of the properties of the orthogonal …
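
To illustrate the simplest instance of the orthogonal projection in question: when the symmetric set is all of $\mathbb{R}^n$, projecting onto the sparsity constraint $\{x : \|x\|_0 \le s\}$ amounts to keeping the $s$ largest-magnitude entries. The sketch below covers only that special case; for symmetric sets such as boxes or spheres the support selection interacts with the set geometry, which is what calls for the paper's analysis.

    import numpy as np

    def project_sparse(y, s):
        # Euclidean projection onto {x : ||x||_0 <= s} over all of R^n:
        # zero everything except the s largest-magnitude coordinates.
        x = np.zeros_like(y)
        keep = np.argsort(np.abs(y))[-s:]
        x[keep] = y[keep]
        return x

    print(project_sparse(np.array([0.3, -1.5, 0.8, 0.1]), s=2))   # -> [0, -1.5, 0.8, 0]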