Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions

Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values or a composition of two expected-value functions, i.e., problems of the form $\min_x \mathbb{E}_v\big[f_v\big(\mathbb{E}_w[g_w(x)]\big)\big]$. In order to solve this stochastic composition problem, we propose a class … Read more
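To see why the composition structure needs more than a plain stochastic gradient, here is a minimal two-timescale sketch in the spirit of compositional methods: a running average tracks the inner expectation $\mathbb{E}_w[g_w(x)]$ while $x$ takes quasi-gradient steps. The specific functions, step-size schedules, and noise model below are illustrative assumptions, not the algorithm from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_sample(x):
    # noisy inner mapping: g_w(x) = x + noise, so E_w[g_w(x)] = x
    return x + 0.1 * rng.standard_normal(x.shape)

def g_jac_sample(x):
    # stochastic Jacobian of g_w at x (identity for this toy g_w)
    return np.eye(x.size)

def f_grad_sample(y):
    # stochastic gradient of the outer f_v at y; here f(y) = 0.5 * ||y||^2
    return y

x = np.ones(3)
y = g_sample(x)              # running estimate of the inner expectation E[g(x)]
for k in range(1, 2001):
    alpha = 1.0 / k          # step size for the decision variable x
    beta = 1.0 / k ** 0.75   # slower-decaying step for the inner estimate
    y = (1 - beta) * y + beta * g_sample(x)               # track E[g(x)]
    x = x - alpha * g_jac_sample(x).T @ f_grad_sample(y)  # quasi-gradient step

print(np.round(x, 2))  # approaches the minimizer 0 of 0.5 * ||x||^2
```

The key point the sketch isolates is that $\mathbb{E}[f'(\hat y)] \neq f'(\mathbb{E}[g(x)])$ in general, so a separate estimate $\hat y$ of the inner expectation is maintained alongside the iterate.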

Simple examples for the failure of Newton’s method with line search for strictly convex minimization

In this paper two simple examples of a twice continuously differentiable strictly convex function $f$ are presented for which Newton’s method with line search converges to a point where the gradient of $f$ is not zero. The first example uses a line search based on the Wolfe conditions. For the second example, some strictly convex … Read more

On the Global Optimality for Linear Constrained Rank Minimization Problem

The rank minimization with linear equality constraints has two closely related models, the low rank approximation model, that is to find the best rank-k approximation of a matrix satisfying the linear constraints, and its corresponding factorization model. The latter one is an unconstrained nonlinear least squares problem and hence enjoys a few fast first-order methods … Read more

Strong Inequalities for Chance-Constrained Program

As an essential substructure underlying a large class of chance-constrained programming problems with finite discrete distributions, the mixing set with $0-1$ knapsack has received considerable attention in recent literature. In this study, we present a family of strong inequalities that subsume existing ones for this set. We also find many other inequalities that can be … Read more

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, the step computation in each iteration requires the evaluation of a gradient of the objective function and the calculation of a scalar step size only. When employed to solve certain convex problems, our method reduces to … Read more
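As a rough illustration of a steepest descent scheme whose per-iteration work is one gradient evaluation plus a scalar step size, here is a Barzilai–Borwein-type iteration on a convex quadratic, with a simple fallback step when the curvature estimate along the last step is nonpositive. This is a sketch under our own assumptions (the test problem, the BB1 formula, and the fallback value), not the LMSD method or the safeguard proposed in the paper.

```python
import numpy as np

A = np.diag([1.0, 5.0, 25.0])      # SPD Hessian of f(x) = 0.5 * x^T A x
grad = lambda x: A @ x

x = np.array([1.0, 1.0, 1.0])
g = grad(x)
step = 1.0                          # initial scalar step size
for _ in range(100):
    x_new = x - step * g            # steepest descent step
    g_new = grad(x_new)
    s, yv = x_new - x, g_new - g
    curv = s @ yv                   # curvature estimate along the step
    # BB1 step size; fall back to a fixed step if curvature is nonpositive
    step = (s @ s) / curv if curv > 0 else 1.0
    x, g = x_new, g_new

print(np.round(x, 6))  # converges toward the minimizer [0, 0, 0]
```

On a strictly convex quadratic the curvature estimate is always positive, so the fallback never fires here; handling the nonpositive case matters precisely in the nonconvex setting the abstract addresses.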

On the Information-Adaptive Variants of the ADMM: an Iteration Complexity Perspective

Designing algorithms for an optimization model often amounts to maintaining a balance between the degree of information to request from the model on the one hand, and the computational speed to expect on the other hand. Naturally, the more information is available, the faster one can expect the algorithm to converge. The popular algorithm of … Read more

Second order analysis of state-constrained control-affine problems

In this article we establish new second order necessary and sufficient optimality conditions for a class of control-affine problems with a scalar control and a scalar state constraint. These optimality conditions extend to the constrained state framework the Goh transform, which is the classical tool for obtaining an extension of the Legendre condition. We propose … Read more

Linear conic optimization for inverse optimal control

We address the inverse problem of Lagrangian identification based on trajectories in the context of nonlinear optimal control. We propose a general formulation of the inverse problem based on occupation measures and complementarity in linear programming. The use of occupation measures in this context offers several advantages from the theoretical, numerical and statistical points of … Read more

A new bottom-up search method for determining all maximal efficient faces in multiple objective linear programming

Bottom-up search methods for determining the efficient set of a multiple objective linear programming (MOLP) problem have a valuable advantage that they can quickly give efficient subsets of the MOLP problem to the decision makers. Main difficulties of the previously appeared bottom-up search methods are finding all efficient extreme points adjacent to and enumerating all … Read more

Directional Hölder metric subregularity and application to tangent cones

In this work, we study directional versions of the Hölderian/Lipschitzian metric subregularity of multifunctions. Firstly, we establish variational characterizations of the Hölderian/Lipschitzian directional metric subregularity by means of the strong slopes and next of mixed tangency-coderivative objects. As a by-product, we give second-order conditions for the directional Lipschitzian metric subregularity and for the directional metric … Read more