An Infeasible Active Set Method with Combinatorial Line Search for Convex Quadratic Problems with Bound Constraints

The minimization of a convex quadratic function under bound constraints is a fundamental building block for more complicated optimization problems. The active-set method introduced by [M. Bergounioux, K. Ito, and K. Kunisch. Primal-Dual Strategy for Constrained Optimal Control Problems. SIAM Journal on Control and Optimization, 37:1176–1194, 1999.] and [M. Bergounioux, M. Haddou, M. Hintermüller, and … Read more
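
The cited Bergounioux–Ito–Kunisch method is the primal-dual active set strategy. Below is a minimal sketch for the simplified case of upper bounds only ($x \le u$), assuming $Q$ is symmetric positive definite; the function name and the parameter sigma are illustrative, and iterates may violate the bounds before the active set settles, which is the sense in which such methods are infeasible.

    import numpy as np

    def pdas_upper_bounds(Q, c, u, sigma=1.0, max_iter=100):
        # Primal-dual active set sketch for: min 0.5 x'Qx + c'x  s.t.  x <= u.
        n = len(c)
        x, lam = np.zeros(n), np.zeros(n)
        prev = None
        for _ in range(max_iter):
            active = lam + sigma * (x - u) > 0   # predicted active bounds
            if prev is not None and np.array_equal(active, prev):
                break                            # active set stabilized: KKT point
            inactive = ~active
            x = np.empty(n)
            x[active] = u[active]                # clamp predicted-active variables
            if inactive.any():                   # solve the reduced unconstrained QP
                rhs = -c[inactive] - Q[np.ix_(inactive, active)] @ u[active]
                x[inactive] = np.linalg.solve(Q[np.ix_(inactive, inactive)], rhs)
            lam = -(Q @ x + c)                   # multipliers from stationarity
            lam[inactive] = 0.0
            prev = active
        return x, lam

The full bound-constrained case carries a second active set for the lower bounds; the combinatorial line search contributed by the paper is not reproduced in this sketch.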

A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning

Machine learning theory typically assumes that training data is unbiased and not adversarially generated. When real training data deviates from these assumptions, trained models make erroneous predictions, sometimes with disastrous effects. Robust losses, such as the Huber norm, are designed to mitigate the effects of such contaminated data, but they are limited to the regression … Read more
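
For reference, the Huber loss is quadratic for small residuals and linear for large ones, which caps the influence of outliers. A minimal sketch, with the threshold delta as a free parameter:

    import numpy as np

    def huber(r, delta=1.0):
        # Quadratic inside |r| <= delta, linear outside; the two pieces
        # meet with matching value and slope at |r| = delta.
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))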

Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show … Read more
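
For orientation, the separation that sliding schemes of this kind target can be written as follows; this is a representative bound from the gradient sliding literature, not a quotation from the truncated text. For $\min_x f(x)+h(x)$ with gradient Lipschitz constants $L_f$ and $L_h$, an $\epsilon$-solution is obtained with
\[
O\big(\sqrt{L_f/\epsilon}\big) \ \text{evaluations of } \nabla f
\qquad\text{and}\qquad
O\big(\sqrt{L_h/\epsilon}\big) \ \text{evaluations of } \nabla h,
\]
so when $L_f \ll L_h$ the gradient of $f$ is computed far less often than a method treating $f+h$ as a single smooth function would require.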

Regularized nonlinear acceleration

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the … Read more
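
The weight computation described above can be sketched in a few lines; this follows the regularized scheme in spirit, with the regularization weight lam as an illustrative free parameter:

    import numpy as np

    def rna_extrapolate(iterates, lam=1e-8):
        # iterates: list of k+1 points produced by any optimization method.
        X = np.stack(iterates, axis=1)           # d x (k+1) matrix of iterates
        R = np.diff(X, axis=1)                   # residuals r_i = x_{i+1} - x_i
        RtR = R.T @ R
        RtR = RtR / np.linalg.norm(RtR)          # scale so lam is dimensionless
        k = RtR.shape[0]
        z = np.linalg.solve(RtR + lam * np.eye(k), np.ones(k))
        c = z / z.sum()                          # weights summing to one
        return X[:, :k] @ c                      # nonlinear average of the iterates

Because only this small linear system changes as new iterates arrive, the estimate can be refreshed online while the base method runs unmodified.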

Low-complexity method for hybrid MPC with local guarantees

Model predictive control problems for constrained hybrid systems are usually cast as mixed-integer programs (MIPs). However, commercial MIP solvers are designed to run on desktop computing platforms and are not suited for embedded applications, which are typically restricted by limited computational power and memory. To alleviate these restrictions, we develop a novel low-complexity, iterative … Read more
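
For context, the usual casting encodes the mode logic with binary variables; a textbook big-M sketch (not the paper's formulation) for a piecewise affine system with modes $i=1,\dots,s$ reads
\[
x_{k+1} = \sum_{i=1}^{s} z_k^i, \qquad \sum_{i=1}^{s} \delta_k^i = 1, \qquad \delta_k^i \in \{0,1\},
\]
where each auxiliary variable $z_k^i$ stands for the product $\delta_k^i (A_i x_k + B_i u_k)$ and is linearized with big-M inequalities; the binaries $\delta_k^i$ are what make the resulting control problem a MIP.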

Convergence rates of moment-sum-of-squares hierarchies for optimal control problems

We study the convergence rate of moment-sum-of-squares hierarchies of semidefinite programs for optimal control problems with polynomial data. It is known that these hierarchies generate polynomial under-approximations to the value function of the optimal control problem and that these under-approximations converge in the $L^1$ norm to the value function as their degree $d$ tends to … Read more
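
In the notation of the abstract, the hierarchy produces polynomials $v_d$ of degree $d$ satisfying
\[
v_d \le V \ \text{pointwise} \qquad\text{and}\qquad \|V - v_d\|_{L^1} \to 0 \ \text{as } d \to \infty,
\]
where $V$ is the value function of the optimal control problem; the contribution of the paper is to quantify how fast this $L^1$ gap closes as a function of the degree $d$.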

A Riemannian conjugate gradient method for optimization on the Stiefel manifold

In this paper we propose a new Riemannian conjugate gradient method for optimization on the Stiefel manifold. We introduce two novel vector transports associated with the retraction constructed by the Cayley transform. Both of them satisfy the Ring-Wirth nonexpansive condition, which is fundamental for convergence analysis of Riemannian conjugate gradient methods, and one of them … Read more
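
A minimal sketch of the Cayley-transform retraction referenced above, in the common Wen–Yin form (assuming the standard setup; the paper's two vector transports are not reproduced here):

    import numpy as np

    def cayley_step(X, G, tau):
        # X: n x p with X.T @ X = I (a Stiefel point); G: Euclidean gradient at X.
        n = X.shape[0]
        A = G @ X.T - X @ G.T                    # skew-symmetric generator
        I = np.eye(n)
        # The Cayley transform of a skew-symmetric matrix is orthogonal, so
        # Y(tau) stays on the manifold; Y'(0) = -A @ X is a descent direction.
        return np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ X)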

A Study of the Difference-of-Convex Approach for Solving Linear Programs with Complementarity Constraints

This paper studies the difference-of-convex (DC) penalty formulations and the associated difference-of-convex algorithm (DCA) for computing stationary solutions of linear programs with complementarity constraints (LPCCs). We focus on three such formulations and establish connections between their stationary solutions and those of the LPCC. Improvements of the DCA are proposed to remedy some drawbacks in a … Read more
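
One standard way to see the DC structure is the following textbook decomposition (whether it coincides with one of the three formulations studied here cannot be read off the truncated abstract): the complementarity condition $0 \le x \perp y \ge 0$ can be penalized by
\[
\phi(x,y) \;=\; \sum_i \min(x_i, y_i) \;=\; \sum_i (x_i + y_i) \;-\; \sum_i \max(x_i, y_i),
\]
a difference of two convex functions; the DCA then linearizes the concave part $-\sum_i \max(x_i, y_i)$ at the current iterate and solves the resulting convex program.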

A Sequential Algorithm for Solving Nonlinear Optimization Problems with Chance Constraints

An algorithm is presented for solving nonlinear optimization problems with chance constraints, i.e., those in which a constraint involving an uncertain parameter must be satisfied with at least a minimum probability. In particular, the algorithm is designed to solve cardinality-constrained nonlinear optimization problems that arise in sample average approximations of chance-constrained problems, as well as … Read more
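
Concretely, a chance constraint and its sample average approximation take the form (standard notation, not quoted from the paper)
\[
\mathbb{P}\big[c(x,\xi) \le 0\big] \ \ge\ 1-\alpha
\qquad\leadsto\qquad
\big|\{\, i \le N : c(x,\xi^i) > 0 \,\}\big| \ \le\ \lfloor \alpha N \rfloor,
\]
where $\xi^1,\dots,\xi^N$ are sampled scenarios; the right-hand side is exactly the cardinality constraint the algorithm is designed to handle.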

New analysis of linear convergence of gradient-type methods via unifying error bound conditions

The subject of linear convergence of gradient-type methods on non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, the restricted strong convexity property, the quadratic growth property, and the Kurdyka-Łojasiewicz property. In this paper, we first define a group of error bound conditions … Read more
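
Representative forms of the conditions named above, for a smooth convex $f$ with solution set $X^*$ and optimal value $f^*$ (standard statements, included for orientation):
\[
\operatorname{dist}(x, X^*) \le \kappa\, \|\nabla f(x)\| \ \ \text{(error bound)},
\qquad
f(x) - f^* \ge \tfrac{\mu}{2}\operatorname{dist}(x, X^*)^2 \ \ \text{(quadratic growth)},
\]
\[
\tfrac{1}{2}\|\nabla f(x)\|^2 \ge \mu\big(f(x) - f^*\big) \ \ \text{(Polyak-Łojasiewicz, a smooth Kurdyka-Łojasiewicz instance)};
\]
each of these can stand in for strong convexity in linear convergence arguments for gradient-type methods.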