Block Structured Quadratic Programming for the Direct Multiple Shooting Method for Optimal Control

In this contribution we address the efficient solution of optimal control problems of dynamic processes with many controls. Such problems arise, e.g., from the outer convexification of integer control decisions. We treat this problem class using the direct multiple shooting method to discretize the optimal control problem. The resulting nonlinear problems are solved …
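
To fix ideas, the sketch below sets up a toy direct multiple shooting discretization in Python (the dynamics, dimensions, and solver are illustrative assumptions, not the paper's setting). The horizon is split into shooting intervals, each integrated from its own node value, with continuity enforced by matching constraints; the block structure that tailored QP solvers exploit comes precisely from these interval-local variables and constraints.

    # Minimal direct multiple shooting sketch (illustrative toy problem):
    # steer the scalar dynamics x' = -x + u toward the origin.
    import numpy as np
    from scipy.optimize import minimize

    N, T = 10, 1.0                   # shooting intervals, horizon length
    h = T / N

    def integrate(x0, u):
        # One explicit Euler step per interval (a real code would use an ODE solver).
        return x0 + h * (-x0 + u)

    def unpack(z):
        return z[:N + 1], z[N + 1:]  # node states s_0..s_N, controls u_0..u_{N-1}

    def objective(z):
        s, u = unpack(z)
        return h * np.sum(u**2) + 10.0 * s[-1]**2    # control effort + terminal cost

    def matching(z):
        # Continuity conditions s_{i+1} = x(t_{i+1}; s_i, u_i) coupling the intervals.
        s, u = unpack(z)
        return np.array([s[i + 1] - integrate(s[i], u[i]) for i in range(N)])

    cons = [{"type": "eq", "fun": matching},
            {"type": "eq", "fun": lambda z: unpack(z)[0][0] - 1.0}]  # x(0) = 1
    res = minimize(objective, np.zeros(2 * N + 1), constraints=cons, method="SLSQP")
    print(res.x[:N + 1])             # optimized node states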

All roads lead to Newton: Feasible second-order methods for equality-constrained optimization

This paper considers the connection between the intrinsic Riemannian Newton method and other more classically inspired optimization algorithms for equality-constrained optimization problems. We consider the feasibly-projected sequential quadratic programming (FP-SQP) method and show that it yields the same update step as the Riemannian Newton method, subject to a minor assumption on the choice of the multiplier vector. …
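
In generic notation (a sketch under standard smoothness assumptions; the paper's precise condition on the multiplier may differ), for min f(x) subject to c(x) = 0 the FP-SQP iteration at a feasible point x_k solves the KKT system of the quadratic subproblem and maps the step back onto the feasible set:

    \[
    \begin{bmatrix}
      \nabla_{xx}^{2} L(x_k,\lambda_k) & \nabla c(x_k) \\
      \nabla c(x_k)^{\top} & 0
    \end{bmatrix}
    \begin{bmatrix} d_k \\ \mu_k \end{bmatrix}
    = -\begin{bmatrix} \nabla f(x_k) \\ 0 \end{bmatrix},
    \qquad
    x_{k+1} = \mathcal{R}_{x_k}(d_k),
    \]

where \( L(x,\lambda) = f(x) + \lambda^{\top} c(x) \) and \(\mathcal{R}\) restores feasibility. Because \( c(x_k) = 0 \), the step \( d_k \) lies in the tangent space of the constraint manifold, and with a suitable (e.g., least-squares) choice of \(\lambda_k\) the iteration reproduces the intrinsic Riemannian Newton update.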

Local and superlinear convergence of a primal-dual interior point method for nonlinear semidefinite programming

In this paper, we consider a primal-dual interior point method for solving nonlinear semidefinite programming problems. We propose primal-dual interior point methods based on the unscaled and scaled Newton methods, which correspond to the AHO, HRVW/KSH/M and NT search directions in linear SDP problems. We analyze the local behavior of our proposed methods and show their …
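
For reference, in linear SDP these directions are members of the Monteiro-Zhang family obtained by symmetrizing the perturbed complementarity condition (a standard rendering of the linear case, not the paper's nonlinear formulation):

    \[
    H_P(XZ) = \mu I,
    \qquad
    H_P(M) := \tfrac{1}{2}\bigl(PMP^{-1} + (PMP^{-1})^{\top}\bigr),
    \]

where X and Z are the primal and dual matrix variables and \(\mu > 0\) is the barrier parameter. The choice \( P = I \) gives the (unscaled) AHO direction, \( P = Z^{1/2} \) gives the HRVW/KSH/M direction, and the NT direction takes P with \( P^{\top} P = W^{-1} \), W being the metric geometric mean of X and \( Z^{-1} \).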

Standard Bi-Quadratic Optimization Problems and Unconstrained Polynomial Reformulations

A so-called Standard Bi-Quadratic Optimization Problem (StBQP) consists in minimizing a bi-quadratic form over the Cartesian product of two simplices (so this is different from a Bi-Standard QP, where a quadratic function is minimized over the same set). An application example arises in portfolio selection. In this paper we present a bi-quartic formulation of StBQP, …
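
Concretely, with standard notation assumed, an StBQP reads

    \[
    \min_{x \in \Delta_n,\; y \in \Delta_m} \;
    \sum_{i,j=1}^{n} \sum_{k,l=1}^{m} a_{ijkl}\, x_i x_j y_k y_l,
    \qquad
    \Delta_n := \Bigl\{ x \in \mathbb{R}^n : x \ge 0,\ \textstyle\sum_{i=1}^{n} x_i = 1 \Bigr\}.
    \]

One way to see where a bi-quartic formulation comes from (a sketch of the idea, not necessarily the paper's exact construction): the substitution \( x_i = u_i^2 \), \( y_k = v_k^2 \) with u and v on unit spheres enforces nonnegativity and normalization automatically and turns the bi-quadratic form in (x, y) into a bi-quartic form in (u, v); penalizing the norm constraints then yields unconstrained polynomial reformulations.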

Quasi-Newton methods on Grassmannians and multilinear approximations of tensors

In this paper we propose quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmannians or a product of Grassmannians. Specifically, we define BFGS and L-BFGS updates in local and global coordinates on Grassmannians or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmannians share …
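
A generic Riemannian BFGS template conveys the flavor (notation assumed; the paper works the updates out concretely in local and global coordinates on Grassmannians):

    \[
    B_{k+1} = \tilde{B}_k
      - \frac{\tilde{B}_k s_k s_k^{\top} \tilde{B}_k}{s_k^{\top} \tilde{B}_k s_k}
      + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
    \qquad
    s_k = \mathcal{T}_k(\alpha_k \eta_k),
    \quad
    y_k = \operatorname{grad} f(X_{k+1}) - \mathcal{T}_k\bigl(\operatorname{grad} f(X_k)\bigr),
    \]

where \(\eta_k\) is the search direction at \(X_k\), \(\alpha_k\) the step size, \(\mathcal{T}_k\) transports tangent vectors from \(X_k\) to \(X_{k+1}\), and \(\tilde{B}_k\) is \(B_k\) conjugated by the same transport so that all quantities live in a single tangent space.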

On String-Averaging for Sparse Problems and On the Split Common Fixed Point Problem

We review the common fixed point problem for the class of directed operators. This class is important because many commonly used nonlinear operators in convex optimization belong to it. We present our recent definition of sparseness of a family of operators and discuss a string-averaging algorithmic scheme that favorably handles the common fixed point problem …
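
As a minimal illustration of a string-averaging scheme (the sets, strings, and weights below are arbitrary choices, and orthogonal projections onto hyperplanes stand in for general directed operators):

    # String-averaging sketch for a common fixed point problem: within each
    # string the operators are applied successively; the endpoints are then
    # combined by a convex average.
    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 1.0]])
    b = np.array([3.0, 2.0, 1.5])       # consistent system; common point (1, 1)

    def project(x, i):
        # Orthogonal projection onto the hyperplane A[i] . x = b[i].
        a = A[i]
        return x - (a @ x - b[i]) / (a @ a) * a

    strings = [[0, 1], [2]]             # index strings (arbitrary partition)
    weights = [0.5, 0.5]                # convex weights for the averaging step

    x = np.zeros(2)
    for _ in range(50):
        ends = []
        for string in strings:
            y = x.copy()
            for i in string:            # sequential sweep along the string
                y = project(y, i)
            ends.append(y)
        x = sum(w * e for w, e in zip(weights, ends))
    print(x, A @ x - b)                 # x near (1, 1), residuals near zero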

Seminorm-induced oblique projections for sparse nonlinear convex feasibility problems

Simultaneous subgradient projection algorithms for the convex feasibility problem use subgradient calculations and sometimes converge even in the inconsistent case. We devise an algorithm that uses seminorm-induced oblique projections onto super half-spaces of the convex sets, which is advantageous when the subgradient-Jacobian is a sparse matrix at many iteration points of the algorithm. Using generalized …
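
The plain Euclidean variant is easy to sketch (the sets and weights below are illustrative, and these are ordinary subgradient projections; the paper replaces them with seminorm-induced oblique projections to exploit sparsity of the subgradient-Jacobian):

    # Simultaneous subgradient projection sketch for the convex feasibility
    # problem: find x with g_i(x) <= 0 for all i (here a disk and two
    # half-planes, which intersect, e.g., at the origin).
    import numpy as np

    def g(x):
        return np.array([x @ x - 4.0, x[0] - 1.0, -x[1] - 1.0])

    def subgrad(x):
        return np.array([2.0 * x, [1.0, 0.0], [0.0, -1.0]])

    w = np.ones(3) / 3.0                 # simultaneous (averaging) weights
    x = np.array([5.0, 5.0])
    for _ in range(200):
        gx, gr = g(x), subgrad(x)
        step = np.zeros_like(x)
        for i in range(3):
            if gx[i] > 0:                # subgradient projection onto g_i <= 0
                step += w[i] * gx[i] / (gr[i] @ gr[i]) * gr[i]
        x = x - step
    print(x, g(x))                       # residuals approach nonpositive values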

An Augmented Lagrangian Approach for Sparse Principal Component Analysis

Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, which often makes the PCs difficult to interpret. To …
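
To see what sparsity buys, here is a soft-thresholded power iteration for a single sparse loading vector (a common heuristic shown purely for illustration; it is not the augmented Lagrangian approach of the paper, and the penalty level lam is an arbitrary choice):

    # Sparse PCA heuristic: power iteration with soft thresholding, which
    # zeroes out small loadings of the leading principal component.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    X[:, :3] += 3.0 * rng.standard_normal((100, 1))  # 3 correlated variables
    S = X.T @ X / len(X)                             # sample covariance

    lam = 0.5                                        # sparsity level (assumed)
    v = np.linalg.eigh(S)[1][:, -1]                  # dense leading PC as start
    for _ in range(100):
        u = S @ v
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        if not u.any():
            break
        v = u / np.linalg.norm(u)
    print(np.round(v, 3))   # most loadings are exactly zero, easing interpretation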

Sample Average Approximation for Stochastic Dominance Constrained Programs

In this paper we study optimization problems with second-order stochastic dominance constraints. This class of problems has been receiving increasing attention in the literature as it allows for the modeling of optimization problems where a risk-averse decision maker wants to ensure that the solution produced by the model dominates certain benchmarks. Here we deal with …
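
For a benchmark Y with finitely many realizations \( y_1, \dots, y_M \) (probabilities \( p_k \)), the second-order dominance constraint on an outcome \( g(x,\xi) \) is commonly expressed through expected shortfalls, and the sample average approximation with scenarios \( \xi^1, \dots, \xi^N \) replaces the expectation by an empirical mean (a standard rendering; the paper's exact scheme and convergence analysis may differ):

    \[
    \frac{1}{N} \sum_{i=1}^{N} \bigl( y_j - g(x, \xi^i) \bigr)_+
    \;\le\;
    \sum_{k=1}^{M} p_k \, ( y_j - y_k )_+ ,
    \qquad j = 1, \dots, M,
    \]

where \( (t)_+ = \max(t, 0) \); for a discrete benchmark it suffices to impose the shortfall inequality at the benchmark realizations \( y_j \).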

A Combined Class of Self-Scaling and Modified Quasi-Newton Methods

Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain 'better' curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, has global and superlinear convergence for convex functions. Numerical experiments with this class, using the …
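
A baseline self-scaling BFGS update shows the mechanism such classes build on (the Oren-Luenberger-style scaling below is a simple representative, not the paper's combined self-scaling/modified update):

    # Self-scaling BFGS: rescale B so its curvature along s matches the
    # observed curvature s.y, then apply the ordinary BFGS update.
    import numpy as np

    def ss_bfgs_update(B, s, y):
        sy = s @ y
        if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
            return B                     # skip update: curvature test fails
        Bs = B @ s
        tau = sy / (s @ Bs)              # self-scaling factor
        B, Bs = tau * B, tau * Bs
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

    # quick check on the quadratic f(x) = 0.5 x.T A x with exact line search
    A = np.diag([1.0, 10.0])
    x, B = np.array([1.0, 1.0]), np.eye(2)
    for _ in range(10):
        gr = A @ x
        d = -np.linalg.solve(B, gr)        # quasi-Newton direction
        s = (-(gr @ d) / (d @ A @ d)) * d  # exact step length on a quadratic
        B = ss_bfgs_update(B, s, A @ s)    # gradient difference y = A s here
        x = x + s
    print(x)                               # approaches the minimizer at the origin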