Sparse Approximations with Interior Point Methods

Large-scale optimization problems that seek sparse solutions have become ubiquitous. They are routinely solved with various specialized first-order methods. Although such methods are often fast, they usually struggle with not-so-well-conditioned problems. In this paper, specialized variants of an interior point-proximal method of multipliers are proposed and analyzed for problems of this class. Computational experience … Read more
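As a hedged illustration of the problem class and the outer iteration involved (the notation below is generic, not taken from the paper), a representative sparse-approximation problem is

\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_1,

and, for an equivalent constrained reformulation \min_x f(x) \text{ s.t. } c(x) = 0 with augmented Lagrangian \mathcal{L}_{\beta}(x; y) = f(x) + y^{\top} c(x) + \tfrac{\beta}{2}\|c(x)\|_2^2, a proximal method of multipliers loop reads

x_{k+1} \approx \arg\min_x \; \mathcal{L}_{\beta_k}(x; y_k) + \tfrac{\rho_k}{2}\|x - x_k\|_2^2, \qquad y_{k+1} = y_k + \beta_k\, c(x_{k+1}),

with each regularized subproblem handled inexactly by an interior point solver.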

A General Framework for Optimal Control of Fractional Nonlinear Delay Systems by Wavelets

An iterative procedure to find the optimal solutions of general fractional nonlinear delay systems with quadratic performance indices is introduced. The derivatives of state equations are understood in the Caputo sense. By presenting and applying a general framework, we use the Chebyshev wavelet method developed for fractional linear optimal control to convert fractional nonlinear optimal control … Read more
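For context, a hedged sketch of the problem class in standard notation (not taken verbatim from the paper): the Caputo derivative of order \alpha, with n-1 < \alpha \le n, is

{}^{C}D^{\alpha} x(t) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, x^{(n)}(s)\, ds,

and a fractional nonlinear delay optimal control problem with a quadratic performance index takes the form

\min_{u} \; J = \tfrac{1}{2}\int_{0}^{t_f} \big( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \big)\, dt \quad \text{subject to} \quad {}^{C}D^{\alpha} x(t) = f\big(x(t),\, x(t-\tau),\, u(t),\, t\big),

with x(t) prescribed on [-\tau, 0]. Chebyshev wavelet methods of this kind typically expand the state and control in a truncated wavelet basis, turning the dynamics and cost into algebraic conditions on the expansion coefficients.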

Beyond local optimality conditions: the case of maximizing a convex function

In this paper, we design an algorithm for maximizing a convex function over a convex feasible set. The algorithm consists of two phases: in phase 1, a feasible solution is obtained and used as the starting point for phase 2. In the latter, a biconvex problem equivalent to the original problem is solved … Read more
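One standard identity behind two-phase schemes of this kind, given here as a hedged illustration rather than the paper's exact reformulation: for a convex function f and subgradients s(y) \in \partial f(y),

\max_{x \in C} f(x) \;=\; \max_{x,\, y \in C} \; f(y) + \langle s(y),\, x - y \rangle,

since the right-hand side never exceeds \max_{x \in C} f(x) by convexity and attains it at y = x. Maximizing the linearization from the current iterate, x_{k+1} \in \arg\max_{x \in C} \langle s(x_k), x \rangle, produces a nondecreasing sequence f(x_k), and any fixed point x^* satisfies the first-order local optimality condition \langle s(x^*), x - x^* \rangle \le 0 for all x \in C.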

On the Numerical Performance of Derivative-Free Optimization Methods Based on Finite-Difference Approximations

The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature, yet is one of the simplest to implement and parallelize. It consists of computing gradients of a smoothed approximation of the objective function (and constraints), and employing them within established codes. These … Read more
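A minimal sketch of the basic recipe, assuming forward-difference gradients plugged into an off-the-shelf quasi-Newton code; the test function, difference interval, and solver choice below are illustrative and not the codes or settings studied in the paper.

import numpy as np
from scipy.optimize import minimize

def fd_grad(f, x, h=1e-7):
    # Forward-difference estimate of the gradient of f at x (n extra evaluations).
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def rosenbrock(x):
    # Illustrative objective; in the derivative-free setting only function
    # values (possibly noisy) would be available, not analytic gradients.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x0 = np.zeros(6)
res = minimize(rosenbrock, x0,
               jac=lambda x: fd_grad(rosenbrock, x),
               method="L-BFGS-B")
print(res.x, res.fun)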

An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems

In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems, whose objective is the sum of a smooth function and a possibly non-prox-friendly DC function. The method sequentially solves subproblems constructed from a quadratic approximation of the smooth function and a linear majorization of the concave part of the … Read more
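A hedged sketch of one such subproblem in generic notation: writing the objective as F(x) = f(x) + g_1(x) - g_2(x) with f smooth and g_1, g_2 convex, an iteration replaces f by a quadratic model and majorizes the concave part -g_2 by its linearization at x_k,

x_{k+1} \approx \arg\min_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \tfrac{1}{2}(x - x_k)^{\top} H_k (x - x_k) + g_1(x) - g_2(x_k) - \langle v_k,\, x - x_k \rangle, \qquad v_k \in \partial g_2(x_k),

with H_k \succeq 0 a curvature model; since g_1 need not be prox-friendly, this subproblem is itself solved only inexactly, to a controlled accuracy.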

Direct-Search for a Class of Stochastic Min-Max Problems

Recent applications in machine learning have renewed the interest of the community in min-max optimization problems. While gradient-based optimization methods are widely used to solve such problems, there are, however, many scenarios in which these techniques are not well suited, or even not applicable, for instance when the gradient is not accessible. We investigate the use of direct-search methods … Read more
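A minimal sketch of a direct-search loop on the outer minimization, assuming the inner maximum is only accessible through function evaluations at sampled candidates; all names, tolerances, and the toy instance below are illustrative.

import numpy as np

def worst_case(f, x, y_candidates):
    # Zeroth-order estimate of max_y f(x, y) over a fixed candidate set.
    return max(f(x, y) for y in y_candidates)

def direct_search_minmax(f, x0, y_candidates, alpha=1.0, tol=1e-6, max_iter=500):
    x = x0.copy()
    n = x.size
    fx = worst_case(f, x, y_candidates)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # poll +/- coordinate directions
            trial = x + alpha * d
            ft = worst_case(f, trial, y_candidates)
            if ft < fx - 1e-8 * alpha ** 2:            # sufficient-decrease test
                x, fx, improved = trial, ft, True
                break
        if not improved:
            alpha *= 0.5                               # unsuccessful poll: shrink step
            if alpha < tol:
                break
    return x, fx

# Toy instance: f(x, y) = x'y - ||y||^2 / 2, whose exact worst case over y is ||x||^2 / 2.
rng = np.random.default_rng(0)
f = lambda x, y: x @ y - 0.5 * y @ y
ys = [rng.standard_normal(3) for _ in range(200)]
print(direct_search_minmax(f, rng.standard_normal(3), ys))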

How do exponential size solutions arise in semidefinite programming?

Semidefinite programs (SDPs) are some of the most popular and broadly applicable optimization problems to emerge in the last thirty years. A curious pathology of SDPs, illustrated by a classical example of Khachiyan, is that their solutions may need exponential space to even write down. Exponential-size solutions are the main obstacle to solving a … Read more
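The classical construction behind this pathology fits in a few lines (a standard example, sketched here for context): require x_1 \ge 2 and, for i = 1, \dots, n-1, impose the 2 \times 2 semidefinite constraints

\begin{pmatrix} 1 & x_i \\ x_i & x_{i+1} \end{pmatrix} \succeq 0, \quad \text{i.e.} \quad x_{i+1} \ge x_i^2,

so every feasible point satisfies x_n \ge 2^{2^{n-1}}, and writing x_n down in binary already takes a number of bits exponential in the size of the SDP description.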

Scalable Subspace Methods for Derivative-Free Nonlinear Least-Squares Optimization

We introduce a general framework for large-scale model-based derivative-free optimization based on iterative minimization within random subspaces. We present a probabilistic worst-case complexity analysis for our method, where in particular we prove high-probability bounds on the number of iterations before a given level of optimality is achieved. This framework is specialized to nonlinear least-squares problems, with a … Read more
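A hedged sketch of the subspace mechanism for the least-squares specialization, in generic notation: given residuals r : \mathbb{R}^n \to \mathbb{R}^m and a random sketching matrix P_k \in \mathbb{R}^{n \times p} with p \ll n, a local model is built and minimized only over x_k + \mathrm{range}(P_k),

m_k(s) = \tfrac{1}{2}\, \| r(x_k) + \hat{J}_k s \|_2^2, \qquad s \in \mathbb{R}^p,

where \hat{J}_k \in \mathbb{R}^{m \times p} approximates the Jacobian of r along the columns of P_k from sampled function values; the tentative step x_k + P_k s_k is accepted or rejected by a trust-region test, and the per-iteration cost scales with the subspace dimension p rather than with n.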

Cutting Plane Generation Through Sparse Principal Component Analysis

Quadratically constrained quadratic programs (QCQPs) are optimization models whose remarkable expressiveness has made them a cornerstone of methodological research for nonconvex optimization problems. However, modern methods to solve a general QCQP fail to scale, encountering computational challenges even with just a few hundred variables. Specifically, a semidefinite programming (SDP) relaxation is typically employed, which provides strong … Read more
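One plausible form of such cuts, sketched under standard notation rather than taken verbatim from the paper: the SDP relaxation replaces the rank-one matrix xx^{\top} by a matrix variable X constrained by

\begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0,

and for any vector v the implied inequality v^{\top} X v \ge (v^{\top} x)^2 is a valid convex (rotated second-order cone) cut in (x, X); choosing sparse vectors v, for instance via sparse principal component analysis applied at the current relaxation point, keeps the generated cuts sparse and cheap to add to a more scalable relaxation.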

Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers

It is well-known that simple short-sighted algorithms, such as gradient descent, generalize well in over-parameterized learning tasks, due to their implicit regularization. However, it is unknown whether the implicit regularization of these algorithms can be extended to robust learning tasks, where a subset of samples may be grossly corrupted with noise. In this work, … Read more
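A minimal sketch of the kind of method and loss involved, assuming an \ell_1 measurement loss over a factorized, over-parameterized matrix variable; the problem sizes, corruption level, step size, and initialization scale below are illustrative choices and not the paper's.

import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400                    # matrix size, true rank, number of measurements
U_true = rng.standard_normal((n, r))
X_true = U_true @ U_true.T              # ground-truth low-rank PSD matrix

A = rng.standard_normal((m, n, n))      # random linear measurement matrices
y = np.einsum('kij,ij->k', A, X_true)
mask = rng.random(m) < 0.2              # grossly corrupt 20% of the measurements
y[mask] += 10.0 * rng.standard_normal(mask.sum())

# Sub-gradient method on the non-smooth ell_1 loss, over-parameterized factor U (n columns).
U = 1e-3 * rng.standard_normal((n, n))  # small random initialization
for k in range(1500):
    resid = np.einsum('kij,ij->k', A, U @ U.T) - y
    s = np.sign(resid)                  # subgradient of the ell_1 loss w.r.t. the residuals
    G = np.einsum('k,kij->ij', s, A)
    U -= 0.01 * ((G + G.T) @ U) / m     # fixed step for simplicity; decaying steps are common
print(np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true))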