Stochastic Optimization Approach to Water Management in Cooling-Constrained Power Plants

We propose a stochastic optimization framework for water management in cooling-constrained power plants. The approach determines optimal set-points that maximize power output in the presence of uncertain weather conditions and water-intake constraints. Weather uncertainty is quantified in the form of ensembles using the state-of-the-art numerical weather prediction model WRF. The framework enables us …
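
Since the abstract is truncated before the model details, the toy scenario program below is only a minimal sketch of the general idea, not the paper's formulation: a single first-stage water-intake set-point `x` is chosen to maximize expected power over an ensemble of weather scenarios, with made-up per-scenario efficiencies `eff` and an illustrative intake cap `cap`.

```python
import numpy as np
from scipy.optimize import linprog

probs = np.array([0.3, 0.5, 0.2])   # ensemble member weights (illustrative)
eff   = np.array([1.8, 1.5, 1.1])   # MW per unit water, per scenario (made up)
cap   = 100.0                       # illustrative water-intake limit

# Variables: x (intake set-point) and y_s (power in scenario s).
# Maximize sum_s probs[s] * y_s  s.t.  y_s <= eff[s] * x  and  x <= cap.
c = np.concatenate(([0.0], -probs))          # linprog minimizes, so negate
A_ub = np.zeros((len(probs) + 1, 1 + len(probs)))
for s in range(len(probs)):
    A_ub[s, 0] = -eff[s]                     # y_s - eff[s] * x <= 0
    A_ub[s, 1 + s] = 1.0
A_ub[-1, 0] = 1.0                            # x <= cap
b_ub = np.concatenate((np.zeros(len(probs)), [cap]))

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (1 + len(probs)))
print("set-point:", res.x[0], "expected power:", -res.fun)
```

In the actual framework the scenarios would come from WRF ensemble members and the plant model would be far richer; the LP only shows how ensemble weights enter the objective.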

A Low-Memory Approach For Best-State Estimation Of Hidden Markov Models With Model Error

We present a low-memory approach for the best-state estimate (data assimilation) of hidden Markov models in which model error is considered. In particular, our findings apply to the 4D-Var framework. The novelty of our approach resides in the fact that the storage needed by our estimation framework, while including model error, is dramatically reduced from …
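
For context, the standard weak-constraint 4D-Var objective (textbook form, not anything specific to this paper) augments the usual background and observation terms with a model-error penalty:

\[
J(x_0,\dots,x_N) = \tfrac12 (x_0 - x_b)^T B^{-1} (x_0 - x_b)
+ \tfrac12 \sum_{k=0}^{N} \big(y_k - H_k(x_k)\big)^T R_k^{-1} \big(y_k - H_k(x_k)\big)
+ \tfrac12 \sum_{k=1}^{N} \big(x_k - M_k(x_{k-1})\big)^T Q_k^{-1} \big(x_k - M_k(x_{k-1})\big),
\]

where $x_b$ is the background state, $H_k$ are the observation operators, $M_k$ is the model, and $B$, $R_k$, $Q_k$ are the background, observation, and model-error covariances. Because every state $x_0,\dots,x_N$ becomes an optimization variable once model error is included, the memory footprint grows with the window length, which is the cost a low-memory approach targets.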

On optimizing the sum of the Rayleigh quotient and the generalized Rayleigh quotient on the unit sphere

Given symmetric matrices $B, D \in \mathbb{R}^{n\times n}$ and a symmetric positive definite matrix $W \in \mathbb{R}^{n\times n}$, maximizing the sum of the Rayleigh quotient $x^T D x$ and the generalized Rayleigh quotient $x^T B x / x^T W x$ on the unit sphere is not only of mathematical interest in its own right, but also finds applications in practice. In this paper, we …
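
A hedged illustration of the problem (not the paper's method): projected gradient ascent on the unit sphere for $f(x) = x^T D x + x^T B x / x^T W x$, with random symmetric test matrices; the step size and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
D = rng.standard_normal((n, n)); D = (D + D.T) / 2            # symmetric
B = rng.standard_normal((n, n)); B = (B + B.T) / 2            # symmetric
M = rng.standard_normal((n, n)); W = M @ M.T + n * np.eye(n)  # SPD

def f(x):
    return x @ D @ x + (x @ B @ x) / (x @ W @ x)

def grad(x):
    w, b = x @ W @ x, x @ B @ x
    # d/dx [x'Dx] = 2Dx; quotient rule for x'Bx / x'Wx
    return 2 * D @ x + (2 * w * (B @ x) - 2 * b * (W @ x)) / w**2

x = rng.standard_normal(n); x /= np.linalg.norm(x)
for _ in range(5000):
    x += 0.01 * grad(x)
    x /= np.linalg.norm(x)       # project back onto the unit sphere
print("objective at the computed stationary point:", f(x))
```

Note that such a local scheme only finds a stationary point; certifying the global maximizer is presumably where the paper's analysis comes in.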

On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization

We propose a new termination criterion suitable for potentially singular, zero or non-zero residual, least-squares problems, with which cubic regularization variants take at most $\mathcal{O}(\epsilon^{-3/2})$ residual and Jacobian evaluations to drive either the Euclidean norm of the residual or its gradient below $\epsilon$; this is the best-known bound for potentially singular nonlinear least-squares problems. We then …
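
Reading "its gradient" as the gradient of the least-squares objective $\tfrac12\|r(x)\|^2$, the stated target is an iterate $x_k$ with

\[
\|r(x_k)\| \le \epsilon \qquad \text{or} \qquad \big\|J(x_k)^T r(x_k)\big\| \le \epsilon,
\]

where $r$ is the residual and $J$ its Jacobian; the first case covers zero-residual problems and the second covers non-zero-residual (stationary-point) termination. The paper's actual criterion may be a scaled variant of this.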

On Relaxing the Mangasarian-Fromovitz Constraint Qualification

For the classical nonlinear program, two new relaxations of the Mangasarian-Fromovitz constraint qualification are discussed and their relationship with some standard constraint qualifications is examined. In particular, we establish the equivalence of one of these constraint qualifications with the constant rank of the subspace component constraint qualification recently suggested by Andreani et al. As an …
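
For reference, the standard condition being relaxed: at a feasible point $x$ of $\min f(x)$ subject to $g_i(x) \le 0$, $h_j(x) = 0$, the Mangasarian-Fromovitz constraint qualification requires the gradients $\nabla h_j(x)$ to be linearly independent and the existence of a direction $d$ with

\[
\nabla h_j(x)^T d = 0 \ \text{ for all } j, \qquad \nabla g_i(x)^T d < 0 \ \text{ for all active } i.
\]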

Global convergence and the Powell singular function

The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least-squares problem and as a system of nonlinear equations. It is a classic test function included in collections of test problems in optimization, as well as an example problem in textbooks. In …
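
For concreteness, the function itself as it appears in the standard test collections, checked from the classic starting point $(3, -1, 0, 1)$ with an off-the-shelf solver:

```python
import numpy as np
from scipy.optimize import minimize

def powell_singular(x):
    # Powell (1962): minimum value 0 at the origin, where the Hessian
    # is singular -- which is what makes this a delicate test problem.
    x1, x2, x3, x4 = x
    return ((x1 + 10 * x2) ** 2 + 5 * (x3 - x4) ** 2
            + (x2 - 2 * x3) ** 4 + 10 * (x1 - x4) ** 4)

x0 = np.array([3.0, -1.0, 0.0, 1.0])   # classic starting point
res = minimize(powell_singular, x0, method="BFGS")
print(res.x, res.fun)                  # x approaches the origin, f -> 0
```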

Solving trajectory optimization problems via nonlinear programming: the brachistochrone case study

This note discusses reformulations of the brachistochrone problem suitable for solution via nonlinear programming (NLP). The availability of solvers and modeling languages such as AMPL makes it tempting to formulate discretized trajectory optimization problems and obtain solutions to their discretized versions. We use the famous brachistochrone problem to warn that the resulting solutions may be …
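
As a minimal sketch of the kind of direct transcription the note studies (a toy discretization of my own, not the note's AMPL models): with $y$ measured downward and speed $v = \sqrt{2gy}$ from energy conservation, the travel time is summed over straight segments and the interior heights are the NLP variables.

```python
import numpy as np
from scipy.optimize import minimize

g, N = 9.81, 40
xs = np.linspace(0.0, 1.0, N + 1)
y_end = 1.0                             # depth of the terminal point

def travel_time(y_free):
    y = np.concatenate(([0.0], y_free, [y_end]))
    t = 0.0
    for i in range(N):
        ds = np.hypot(xs[i + 1] - xs[i], y[i + 1] - y[i])
        # average of endpoint speeds, guarded at the singular start
        v = 0.5 * (np.sqrt(2 * g * max(y[i], 1e-9))
                   + np.sqrt(2 * g * max(y[i + 1], 1e-9)))
        t += ds / v
    return t

y0 = np.interp(xs[1:-1], [0.0, 1.0], [0.0, y_end])  # straight-line guess
res = minimize(travel_time, y0, method="L-BFGS-B",
               bounds=[(1e-6, None)] * (N - 1))
print("discretized travel time:", res.fun)
```

Consistent with the note's warning, the reported time depends on the mesh and on how the singular integrand near the start is handled, so the discretized optimum should be checked against mesh refinement and the known cycloid solution.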

Numerical Optimization of Eigenvalues of Hermitian Matrix Functions

The eigenvalues of a Hermitian matrix function that depends on one parameter analytically can be ordered so that each eigenvalue is an analytic function of the parameter. Ordering these analytic eigenvalues from the largest to the smallest yields continuous and piecewise analytic functions. For multivariate Hermitian matrix functions that depend on $d$ parameters analytically, the …
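
A quick way to see the one-parameter picture (an illustration, not the paper's algorithm): sampling the sorted eigenvalues of a Hermitian family $A(t) = A_0 + tA_1$ traces exactly the continuous, piecewise-analytic ordered curves described above, with non-smooth points where analytic eigenvalue curves cross.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def rand_herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2          # Hermitian by construction

A0, A1 = rand_herm(n), rand_herm(n)
ts = np.linspace(-2.0, 2.0, 401)
# eigvalsh returns eigenvalues in ascending order, i.e. the sorted
# (continuous, piecewise-analytic) branches rather than analytic ones
curves = np.array([np.linalg.eigvalsh(A0 + t * A1) for t in ts])
print("largest-eigenvalue curve sampled at t = 0:", curves[200, -1])
```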

Beneath the valley of the noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences

Randomized algorithms that base iteration-level decisions on samples from some pool are ubiquitous in machine learning and optimization. Examples include stochastic gradient descent and randomized coordinate descent. This paper makes progress toward theoretically evaluating the difference in performance between sampling with and without replacement in such algorithms. Focusing on least-mean-squares optimization, we formulate a …
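
A small empirical version of that comparison, assuming a toy least-mean-squares setup (illustrative only; the paper's contribution is theoretical): one pass of SGD on $\tfrac{1}{2m}\|Ax - b\|^2$, sampling rows with replacement versus a without-replacement shuffle.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)          # consistent linear system

def sgd_pass(order, lr=0.01):
    x = np.zeros(n)
    for i in order:
        x -= lr * (A[i] @ x - b[i]) * A[i]   # gradient of one row's loss
    return np.linalg.norm(A @ x - b) ** 2 / m

with_rep = np.mean([sgd_pass(rng.integers(0, m, m)) for _ in range(50)])
no_rep = np.mean([sgd_pass(rng.permutation(m)) for _ in range(50)])
print("with replacement:", with_rep, "  without replacement:", no_rep)
```

In runs like this the shuffled (without-replacement) pass typically ends with the smaller residual, which is the behavior a noncommutative arithmetic-geometric mean inequality would explain.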

Interior-Point Methods for Nonconvex Nonlinear Programming: Cubic Regularization

In this paper, we present a barrier method for solving nonlinear programming problems. It employs a Levenberg-Marquardt perturbation to the Karush-Kuhn-Tucker (KKT) matrix to handle indefinite Hessians and a line search to obtain sufficient descent at each iteration. We show that the Levenberg-Marquardt perturbation is equivalent to replacing the Newton step by a cubic regularization …
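
The equivalence asserted here is standard for cubic models: any global minimizer $d$ of

\[
m(d) = g^T d + \tfrac12\, d^T H d + \tfrac{\sigma}{3}\|d\|^3
\]

satisfies $(H + \sigma\|d\|\, I)\, d = -g$ with $H + \sigma\|d\|\, I \succeq 0$, so the cubic-regularized step is exactly a Newton step with a Levenberg-Marquardt shift $\delta = \sigma\|d\|$ chosen implicitly by the model.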