On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and Karush-Kuhn-Tucker (KKT) conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn and Tucker, the FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an enhancement was initially proposed by Hestenes in the 1970s … Read more
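
For orientation, the classical statements are as follows (the extra enhanced conditions of this work are not visible in the excerpt). For a smooth problem $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, the KKT conditions at a feasible $x^*$ read

\[
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0, \qquad \lambda_i \ge 0, \qquad \lambda_i\, g_i(x^*) = 0,
\]

while the FJ conditions attach a multiplier $\lambda_0 \ge 0$ to $\nabla f(x^*)$ and require only that $(\lambda_0, \lambda, \mu) \ne 0$; a constraint qualification is what allows taking $\lambda_0 = 1$ and recovering KKT.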

CDOpt: A Python Package for a Class of Riemannian Optimization

Optimization over the embedded submanifold defined by constraints $c(x) = 0$ has attracted much interest over the past few decades due to its wide applications in various areas, including computer vision, signal processing, numerical linear algebra, and deep learning. Numerous related optimization packages have been developed based on Riemannian optimization approaches, which rely on … Read more
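
As a concrete instance of this problem class (a minimal sketch, not CDOpt's API; the function names are ours), Riemannian gradient descent on the unit sphere $\{x : c(x) = x^\top x - 1 = 0\}$ can be written with a tangent-space projection and a normalization retraction:

```python
import numpy as np

def sphere_gradient_descent(grad_f, x0, step=0.05, iters=200):
    """Riemannian gradient descent on the unit sphere (illustrative sketch)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_f(x)
        rg = g - (x @ g) * x           # project Euclidean gradient onto tangent space
        x = x - step * rg              # step in the tangent direction
        x = x / np.linalg.norm(x)      # retract back onto the sphere
    return x

# Example: minimize x^T A x on the sphere, i.e. approximate the smallest eigenvector of A.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T
x = sphere_gradient_descent(lambda x: 2 * A @ x, rng.standard_normal(5))
```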

Optimization of the first Dirichlet Laplacian eigenvalue with respect to a union of balls

The problem of minimizing the first eigenvalue of the Dirichlet Laplacian with respect to a union of m balls with fixed identical radii and variable centers in the plane is investigated in the present work. The existence of a minimizer is shown and the shape sensitivity analysis of the eigenvalue with respect to the centers’ … Read more
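
In symbols (notation ours, not from the excerpt): writing $B(c_i, r)$ for the ball of radius $r$ centered at $c_i$ and $\lambda_1(\Omega)$ for the first Dirichlet eigenvalue, i.e. the smallest $\lambda$ with $-\Delta u = \lambda u$ in $\Omega$ and $u = 0$ on $\partial \Omega$, the problem is

\[
\min_{c_1, \dots, c_m \in \mathbb{R}^2} \; \lambda_1\Big( \bigcup_{i=1}^m B(c_i, r) \Big).
\]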

Computing the Completely Positive Factorization via Alternating Minimization

In this article, we propose a novel alternating minimization scheme for finding completely positive factorizations. In each iteration, our method splits the original factorization problem into two optimization subproblems: the first is an orthogonal Procrustes problem, taken over the orthogonal group, and the second is over the set of entrywise positive matrices. … Read more
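
A minimal numpy sketch of such an alternating scheme, under assumptions not in the excerpt (the factorization rank is fixed by an eigenvalue-based root $L$ of $A$, and both subproblems are solved in closed form: the Procrustes step by an SVD, the nonnegativity step by clipping). It is a heuristic; convergence guarantees are the article's subject, not claimed here.

```python
import numpy as np

def cp_factorize(A, max_iter=500, tol=1e-10, seed=0):
    """Alternating-minimization heuristic for A = B B^T with B >= 0 entrywise."""
    w, V = np.linalg.eigh(A)                      # A assumed symmetric PSD
    L = V * np.sqrt(np.clip(w, 0.0, None))        # root L with L @ L.T == A
    k = L.shape[1]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((k, k)))   # random orthogonal start
    for _ in range(max_iter):
        P = np.clip(L @ Q, 0.0, None)             # nonnegativity subproblem
        U, _, Vt = np.linalg.svd(L.T @ P)         # Procrustes subproblem:
        Q_new = U @ Vt                            #   argmin ||L Q - P||_F, Q orthogonal
        if np.linalg.norm(Q_new - Q) < tol:
            Q = Q_new
            break
        Q = Q_new
    return np.clip(L @ Q, 0.0, None)

# Example: factor a matrix that is completely positive by construction.
rng = np.random.default_rng(1)
B0 = rng.random((5, 5))
A = B0 @ B0.T
B = cp_factorize(A)
print(np.linalg.norm(B @ B.T - A))                # small residual on success
```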

Expected Value of Matrix Quadratic Forms with Wishart distributed Random Matrices

To explore the limits of a stochastic gradient method, it may be useful to consider an example consisting of an infinite number of quadratic functions. In this context, it is appropriate to determine the expected value and the covariance matrix of the stochastic noise, i.e., the difference between the true gradient and the approximated gradient … Read more
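
For the scalar special case, which is standard (the article's general matrix quadratic forms go beyond it): if $W \sim \mathcal{W}_p(n, \Sigma)$ is Wishart with $n$ degrees of freedom and scale $\Sigma$, then $\mathbb{E}[W] = n \Sigma$, and since $x^\top W x / (x^\top \Sigma x) \sim \chi^2_n$ for fixed $x \neq 0$,

\[
\mathbb{E}\big[x^\top W x\big] = n\, x^\top \Sigma x, \qquad \operatorname{Var}\big(x^\top W x\big) = 2n\, \big(x^\top \Sigma x\big)^2.
\]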

A Voronoi-Based Mixed-Integer Gauss-Newton Algorithm for MINLP Arising in Optimal Control

We present a new algorithm for addressing nonconvex Mixed-Integer Nonlinear Programs (MINLPs) where the cost function is of nonlinear least squares form. We exploit this structure by leveraging a Gauss-Newton quadratic approximation of the original MINLP, leading to the formulation of a Mixed-Integer Quadratic Program (MIQP), which can be solved efficiently. The integer solution of the … Read more
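
Schematically (notation ours): for $\min_{x,\, z \in \mathbb{Z}^{n_z}} \frac{1}{2}\|R(x,z)\|_2^2$ with residual $R$ and Jacobian $J$, the Gauss-Newton model at an iterate $(\bar{x}, \bar{z})$ is

\[
\min_{x,\, z \in \mathbb{Z}^{n_z}} \; \frac{1}{2}\, \Big\| R(\bar{x}, \bar{z}) + J(\bar{x}, \bar{z}) \begin{pmatrix} x - \bar{x} \\ z - \bar{z} \end{pmatrix} \Big\|_2^2,
\]

whose objective is convex quadratic, so the subproblem is indeed an MIQP.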

Improvements for Decomposition Based Methods Utilized in the Development of Multi-Scale Energy Systems

The optimal design of large-scale energy systems can be found by posing the problem as an integrated multi-period planning and scheduling mathematical programming problem. Due to the complexity of the accompanying mathematical programming problem, decomposition techniques are often required, but they too are plagued with convergence issues. To address these issues, we have derived a … Read more

A Note on Semidefinite Representable Reformulations for Two Variants of the Trust-Region Subproblem

Motivated by encouraging numerical results in the literature, in this note we consider two specific variants of the trust-region subproblem and provide exact semidefinite representable reformulations. The first is over the intersection of two balls; the second is over the intersection of a ball and a special second-order conic representable set. Different from the technique … Read more
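
For concreteness (the data below are placeholders, not the note's): in the two-balls variant

\[
\min_{x} \; x^\top A x + 2 b^\top x \quad \text{s.t.} \quad \|x - c_1\|_2 \le r_1, \quad \|x - c_2\|_2 \le r_2,
\]

the standard lifting replaces $x x^\top$ by a matrix variable $X$ constrained by $\begin{pmatrix} X & x \\ x^\top & 1 \end{pmatrix} \succeq 0$, which makes every quadratic linear in $(X, x)$; what a reformulation result of this kind establishes is that the resulting semidefinite program is exact, not merely a relaxation.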

A Brief Introduction to Robust Bilevel Optimization

Bilevel optimization is a powerful tool for modeling hierarchical decision-making processes. However, the resulting problems are challenging to solve – both in theory and practice. Fortunately, there have been significant algorithmic advances in the field so that we can solve much larger and also more complicated problems today compared to what was possible to … Read more
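
In its generic (optimistic) form, a bilevel problem reads

\[
\min_{x \in X,\, y} \; F(x, y) \quad \text{s.t.} \quad y \in \operatorname*{arg\,min}_{y' \in Y(x)} \, f(x, y'),
\]

where the upper-level decision $x$ anticipates the lower level's best response $y$; robust variants additionally protect against uncertainty in the problem data or in the follower's reaction.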

An improvement of the Goldstein line search

This paper introduces CLS, a new line search along an arbitrary smooth search path that starts at the current iterate tangentially to a descent direction. Like the Goldstein line search and unlike the Wolfe line search, the new line search uses, beyond the gradient at the current iterate, only function values. Using this line search … Read more
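
For context, a compact sketch of the classical Goldstein line search that CLS improves on (CLS itself is not specified in the excerpt; parameter names are ours). It brackets a step size $t$ satisfying $f(x) + (1-c)\, t\, g^\top d \le f(x + t d) \le f(x) + c\, t\, g^\top d$ with $0 < c < 1/2$:

```python
import numpy as np

def goldstein_line_search(f, x, d, g, c=0.25, t0=1.0, max_iter=50):
    """Classical Goldstein line search along direction d (g = grad f(x), g.d < 0)."""
    fx, slope = f(x), float(g @ d)
    lo, hi, t = 0.0, np.inf, t0
    for _ in range(max_iter):
        ft = f(x + t * d)
        if ft > fx + c * t * slope:               # insufficient decrease: shrink
            hi = t
            t = 0.5 * (lo + hi)
        elif ft < fx + (1.0 - c) * t * slope:     # step too short: expand
            lo = t
            t = 2.0 * t if np.isinf(hi) else 0.5 * (lo + hi)
        else:                                     # both Goldstein bounds hold
            return t
    return t

# Example: quadratic f along a descent direction.
f = lambda x: float(x @ x)
x = np.array([1.0, 1.0])
g = 2.0 * x                                       # gradient of ||x||^2
t = goldstein_line_search(f, x, d=-g, g=g)
```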