Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization

We introduce a hybrid stochastic estimator for designing stochastic gradient algorithms to solve stochastic optimization problems. The hybrid estimator is a convex combination of two existing estimators, one biased and one unbiased, which gives it a useful variance property. We limit our consideration to a hybrid SARAH-SGD method for nonconvex expectation problems. However, our idea …
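As a rough illustration of the construction described above, the following Python sketch mixes a SARAH-style recursive (biased) term with a plain (unbiased) SGD sample through a weight beta; the function names, weight, step size, and initialization are placeholders, not the paper's notation or parameter choices.

```python
import numpy as np

def hybrid_sgd(grad_i, x0, n, step=0.05, beta=0.9, iters=200, seed=0):
    """Sketch of the hybrid estimator idea: a convex combination of a
    biased SARAH-style recursive term and an unbiased SGD sample.
    grad_i(x, i) returns the gradient of the i-th component function."""
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    v = grad_i(x0, rng.integers(n))       # crude single-sample initialization
    x = x0 - step * v
    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(n)
        sarah = grad_i(x, i) - grad_i(x_prev, i) + v    # biased recursion
        v = beta * sarah + (1.0 - beta) * grad_i(x, j)  # mix in unbiased sample
        x_prev, x = x, x - step * v
    return x
```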

Multiphase Mixed-Integer Nonlinear Optimal Control of Hybrid Electric Vehicles

This paper considers the problem of computing the non-causal minimum-fuel energy management strategy of a hybrid electric vehicle on a given driving cycle. Specifically, we address the multiphase mixed-integer nonlinear optimal control problem arising when optimal gear choice, torque split and engine on/off controls are sought in off-line evaluations. We propose an efficient model by …

Using interior point solvers for optimizing progressive lens models with spherical coordinates

Designing progressive lenses is a complex problem that has previously been solved by formulating an optimization model based on Cartesian coordinates. In this work, a new progressive lens model using spherical coordinates is presented, and interior point solvers are used to solve this new optimization model. Although this results in a highly nonlinear, nonconvex, continuous …

Hybrid methods for nonlinear least squares problems

This contribution contains a description and analysis of effective methods for minimizing the nonlinear least squares function $F(x) = \frac{1}{2} f(x)^T f(x)$, where $x \in R^n$ and $f \colon R^n \to R^m$, together with extensive computational tests and comparisons of the introduced methods. All hybrid methods are described in detail and their global convergence is proved …
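The hybrid methods themselves are only named in this truncated abstract, so as a point of reference the sketch below shows a textbook Levenberg-Marquardt iteration for the same objective $F(x) = \frac{1}{2} f(x)^T f(x)$, the damped Gauss-Newton baseline that such hybrids typically build on; it is not the paper's method.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, lam=1e-3, iters=50, tol=1e-10):
    """Textbook damped Gauss-Newton (Levenberg-Marquardt) for
    F(x) = 0.5 * f(x)^T f(x); f maps R^n to R^m and jac(x) is its
    m-by-n Jacobian."""
    x = x0.copy()
    for _ in range(iters):
        r, J = f(x), jac(x)
        g = J.T @ r                                  # gradient of F
        if np.linalg.norm(g) < tol:
            break
        # damped normal equations: (J^T J + lam * I) d = -g
        d = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        trial = f(x + d)
        if trial @ trial < r @ r:                    # did the step decrease F?
            x, lam = x + d, lam * 0.5                # accept, relax damping
        else:
            lam *= 10.0                              # reject, add damping
    return x
```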

Numerical solution of generalized minimax problems

This contribution contains the description and investigation of four numerical methods for solving generalized minimax problems, which consist in the minimization of functions that are compositions of special smooth convex functions with maxima of smooth functions (the most important problem of this type is the minimization of a sum of maxima of smooth functions). Section 1 is introductory. In …
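For concreteness, the sum-of-maxima instance mentioned above can be written as follows; the index sets and symbols are assumed for illustration, not taken from the paper.

```latex
% Sum-of-maxima instance of a generalized minimax problem:
\min_{x \in R^n} \; F(x) = \sum_{k=1}^{K} \, \max_{1 \le i \le m_k} f_{ki}(x),
% where each f_{ki} is smooth; the general class composes a smooth convex
% outer function with the vector of inner maxima instead of the plain sum.
```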

A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems

In this paper, we describe two accelerated composite gradient (ACG) variants for solving a smooth nonconvex composite optimization problem, whose objective function is the sum of a nonconvex differentiable function f with a Lipschitz continuous gradient and a simple nonsmooth closed convex function h, and we establish their iteration complexity. When f is convex, the first ACG …
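For orientation, the following is a minimal sketch of the classical FISTA iteration for min f(x) + h(x); the paper's ACG variants refine this template and add guarantees for nonconvex f, which this convex textbook scheme does not provide.

```python
import numpy as np

def fista(grad_f, prox_h, x0, L, iters=300):
    """Classical FISTA for min f(x) + h(x), with f L-smooth and h simple.
    grad_f(x) is the gradient of f; prox_h(y, t) is the proximal map of
    h with step t."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = prox_h(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x
```

For example, with h(x) = mu * ||x||_1 the proximal map is soft thresholding, prox_h(y, t) = np.sign(y) * np.maximum(np.abs(y) - mu * t, 0.0).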

Solving Chance-Constrained Problems via a Smooth Sample-Based Nonlinear Approximation

We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation, and illustrate empirically that the reformulation can …
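One common way to make the indicator inside a sample average differentiable is a sigmoid with a temperature parameter; the sketch below uses that device purely for illustration, whereas the paper smooths the quantile function itself.

```python
import numpy as np

def smooth_chance_surrogate(g, x, samples, alpha=0.05, tau=0.1):
    """Smooth sample-based surrogate for P[g(x, xi) <= 0] >= 1 - alpha.
    A sigmoid with temperature tau stands in for the indicator
    1{g(x, xi) <= 0}. A nonnegative return value means the smoothed
    constraint is satisfied."""
    vals = np.array([g(x, xi) for xi in samples])
    probs = 1.0 / (1.0 + np.exp(vals / tau))   # smooth proxy for 1{g <= 0}
    return probs.mean() - (1.0 - alpha)
```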

Projections onto the canonical simplex with additional linear inequalities

We consider distributionally robust optimization and show that computing the distributional worst case is equivalent to computing the projection onto the canonical simplex with additional linear inequalities. We consider several distance functions to measure the distance between distributions. We write the projections as optimization problems and show that they are equivalent to finding a zero …
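The base case of this reduction, projecting onto the canonical simplex alone, already has the advertised structure: the projection is recovered from the zero of a monotone scalar function, found here by bisection. The extra linear inequalities from the paper are not handled in this sketch.

```python
import numpy as np

def project_onto_simplex(y, tol=1e-12):
    """Projection of y onto {x : x >= 0, sum(x) = 1}. The KKT conditions
    give x_i = max(y_i - lam, 0), where lam is the unique zero of the
    monotone scalar function phi(lam) = sum_i max(y_i - lam, 0) - 1."""
    lo, hi = y.min() - 1.0, y.max()        # phi(lo) >= 0 >= phi(hi)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.maximum(y - lam, 0.0).sum() > 1.0:
            lo = lam                        # phi(lam) > 0: move right
        else:
            hi = lam                        # phi(lam) <= 0: move left
    return np.maximum(y - 0.5 * (lo + hi), 0.0)
```

For instance, project_onto_simplex(np.array([0.4, 0.9, -0.3])) returns approximately [0.25, 0.75, 0.0], a nonnegative vector summing to one.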

A Proximal Interior Point Algorithm with Applications to Image Processing

In this article, we introduce a new proximal interior point algorithm (PIPA). The algorithm handles convex optimization problems with various constraints in which the objective function is the sum of a Lipschitz differentiable term and a possibly nonsmooth one. Each iteration of PIPA involves the minimization of a merit function evaluated for decaying …
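Since the abstract is truncated, the following is only a loose sketch of what a proximal interior point iteration can look like: a forward-backward step on a log-barrier merit function with a decaying barrier parameter. All names, step rules, and the merit function itself are assumptions here, not PIPA's actual construction.

```python
import numpy as np

def proximal_interior_point_sketch(grad_f, prox_h, c, jac_c, x0, mu=1.0,
                                   step=1e-2, outer=20, inner=50, decay=0.5):
    """Loose sketch for min f(x) + h(x) s.t. c_i(x) <= 0: forward-backward
    steps on the barrier merit f(x) - mu * sum_i log(-c_i(x)) plus h, with
    mu decaying across outer iterations. No safeguard keeping the iterates
    strictly feasible is shown."""
    x = x0.copy()
    for _ in range(outer):
        for _ in range(inner):
            cx = c(x)                               # all entries < 0 inside
            # gradient of the smooth part: grad f - mu * J_c^T (1 / c)
            g = grad_f(x) - mu * jac_c(x).T @ (1.0 / cx)
            x = prox_h(x - step * g, step)          # forward-backward step
        mu *= decay                                 # shrink barrier parameter
    return x
```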

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

In this paper, we analyze several methods for approximating gradients of noisy functions using only function values. These methods include finite differences, linear interpolation, Gaussian smoothing and smoothing on a unit sphere. The methods differ in the number of functions sampled, the choice of the sample points, and the way in which the gradient approximations …
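Two of the estimator families mentioned above are easy to state directly; the sketch below implements forward finite differences and Monte Carlo Gaussian smoothing, with sample sizes and step lengths chosen for illustration only (the paper analyzes how such choices trade off against noise).

```python
import numpy as np

def forward_difference_grad(f, x, h=1e-6):
    """Forward finite differences: one extra evaluation of f per coordinate."""
    fx, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def gaussian_smoothing_grad(f, x, sigma=1e-2, n_samples=100, seed=0):
    """Monte Carlo Gaussian smoothing: estimates the gradient of the
    smoothed function E[f(x + sigma * u)], u ~ N(0, I), via
    (1/N) * sum (f(x + sigma * u) - f(x)) * u / sigma."""
    rng = np.random.default_rng(seed)
    fx, g = f(x), np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + sigma * u) - fx) * u
    return g / (sigma * n_samples)
```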