The Second Order Directional Derivative of Symmetric Matrix-valued Functions

This paper studies the second-order directional derivative of a symmetric matrix-valued function of the form $F(X)=P\mbox{diag}[f(\lambda_1(X)),\cdots,f(\lambda_n(X))]P^T$. For this purpose, we first give a direct derivation of the formula, due to Torki \cite{Tor01}, for the second-order directional derivative of any eigenvalue of a matrix; second, we establish a formula for the (parabolic) … Read more
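The displayed formula already determines how $F(X)$ is evaluated numerically. A minimal NumPy sketch (illustrative only, not taken from the paper), applying a scalar function through the spectral decomposition:

```python
import numpy as np

def matrix_function(X, f):
    """Evaluate F(X) = P diag[f(lambda_1(X)), ..., f(lambda_n(X))] P^T
    for a symmetric matrix X and a scalar function f."""
    lam, P = np.linalg.eigh(X)      # eigenvalues and orthonormal eigenvectors
    return P @ np.diag(f(lam)) @ P.T

# Example: the matrix exponential of a random symmetric matrix.
A = np.random.randn(4, 4)
X = (A + A.T) / 2
F = matrix_function(X, np.exp)
```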

A Sparsity Preserving Stochastic Gradient Method for Composite Optimization

We propose new stochastic gradient algorithms for solving convex composite optimization problems. In each iteration, our algorithms utilize a stochastic oracle of the gradient of the smooth component in the objective function. Our algorithms are based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic … Read more
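The paper's estimate-sequence algorithms are not reproduced here. As a hedged illustration of why a proximal step preserves sparsity in composite problems, below is a generic proximal stochastic gradient loop for $\min_x \mathbb{E}[g(x;\xi)]+\lambda\|x\|_1$; the function names and the $\ell_1$ choice of the nonsmooth term are assumptions made for the example:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1; small entries are set exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(grad_oracle, x0, lam, step, n_iters):
    """Generic proximal stochastic gradient loop for min_x E[g(x; xi)] + lam*||x||_1.
    grad_oracle(x) returns a stochastic gradient of the smooth component."""
    x = x0.copy()
    for _ in range(n_iters):
        x = soft_threshold(x - step * grad_oracle(x), step * lam)
    return x

# Example: sparse least squares with a single random row sampled per iteration.
A, b, lam = np.random.randn(200, 50), np.random.randn(200), 0.1
def grad_oracle(x):
    i = np.random.randint(A.shape[0])
    return A.shape[0] * A[i] * (A[i] @ x - b[i])   # unbiased estimate of the full gradient
x = prox_sgd(grad_oracle, np.zeros(50), lam, step=1e-3, n_iters=1000)
```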

Optimal Sensitivity Based on IPOPT

We introduce a flexible, open-source implementation that provides the optimal sensitivity of solutions of nonlinear programming (NLP) problems and is adapted to a fast solver based on a barrier NLP method. The program, called sIPOPT, evaluates the sensitivity of the KKT system with respect to model parameters. It is paired with the open-source IPOPT … Read more
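sIPOPT's actual interface is not shown here; the toy NumPy sketch below only illustrates the underlying idea of differentiating the KKT system with respect to a model parameter (the problem, names, and dimensions are invented for the example):

```python
import numpy as np

# Toy NLP:  min_x 0.5*||x||^2  s.t.  x1 + x2 = p,  at the nominal value p = 1.
# KKT conditions:  x + nu*a = 0,  a^T x - p = 0,  with a = (1, 1).
a = np.array([1.0, 1.0])
p = 1.0
x_opt = p * a / (a @ a)      # primal solution
nu_opt = -p / (a @ a)        # equality multiplier

# Sensitivity of (x, nu) w.r.t. p: differentiate the KKT system and solve
#   [H  A^T] [dx/dp ]   [0]
#   [A   0 ] [dnu/dp] = [1]      (here H = I and A = a^T)
K = np.block([[np.eye(2), a.reshape(2, 1)],
              [a.reshape(1, 2), np.zeros((1, 1))]])
dz_dp = np.linalg.solve(K, np.array([0.0, 0.0, 1.0]))
print(dz_dp)   # [0.5, 0.5, -0.5]: how x and nu move per unit change in p
```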

Explicit Solutions for Root Optimization of a Polynomial Family with One Affine Constraint

Given a family of real or complex monic polynomials of fixed degree with one affine constraint on their coefficients, consider the problem of minimizing the root radius (largest modulus of the roots) or root abscissa (largest real part of the roots). We give constructive methods for efficiently computing the globally optimal value as well as … Read more
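For concreteness, a small sketch (a hypothetical helper, not the paper's constructive method) that evaluates the two objectives, root radius and root abscissa, for a given monic polynomial:

```python
import numpy as np

def root_radius_and_abscissa(coeffs):
    """Root radius (largest modulus) and root abscissa (largest real part)
    of the monic polynomial z^n + coeffs[0]*z^(n-1) + ... + coeffs[-1]."""
    roots = np.roots(np.concatenate(([1.0], coeffs)))
    return np.max(np.abs(roots)), np.max(roots.real)

# Example: z^3 - z has roots 0, 1, -1, so radius 1 and abscissa 1.
print(root_radius_and_abscissa([0.0, -1.0, 0.0]))
```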

Group Sparse Optimization by Alternating Direction Method

This paper proposes efficient algorithms for group sparse optimization with mixed $\ell_{2,1}$-regularization, which arises from the reconstruction of group sparse signals in compressive sensing and from the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The $\ell_{2,1}$-regularization … Read more
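The paper's alternating direction algorithms are not reproduced here; as a hedged sketch of the standard building block such methods rely on, the proximal operator of the $\ell_{2,1}$ penalty is group-wise soft-thresholding (groups are assumed to be given as index lists):

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (the l_{2,1} penalty): each group
    is shrunk toward zero and set exactly to zero when its norm is at most t."""
    out = x.copy()
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * x[g]
    return out

x = np.array([3.0, 4.0, 0.5, -0.2])
print(group_soft_threshold(x, [[0, 1], [2, 3]], t=1.0))
# the first group (norm 5) is shrunk; the second group (norm ~0.54) vanishes entirely
```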

Exact Low-rank Matrix Recovery via Nonconvex $M_p$-Minimization

Low-rank matrix recovery (LMR) arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control, and it is NP-hard. It is known that under certain restricted isometry property (RIP) conditions the exact low-rank matrix solution can be obtained by solving its convex relaxation, nuclear norm minimization. In … Read more
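As a hedged illustration of the convex relaxation mentioned above (not the paper's nonconvex $M_p$ scheme), the proximal operator of the nuclear norm is singular value soft-thresholding, the core step of many convex LMR solvers:

```python
import numpy as np

def singular_value_threshold(Y, tau):
    """Proximal operator of tau*||.||_* (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Y = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, -1.0]) + 0.1 * np.random.randn(3, 3)
print(np.round(singular_value_threshold(Y, tau=0.5), 3))  # approximately rank one
```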

Fast First-Order Methods for Composite Convex Optimization with Backtracking

We propose new versions of accelerated first-order methods for convex composite optimization, where the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA \cite{Beck-Teboulle-2009} and FALM \cite{Goldfarb-Ma-Scheinberg-2010} algorithms while preserving their worst-case iteration complexities of $O(\sqrt{L(f)/\epsilon})$. … Read more
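For context, a sketch of the standard FISTA backtracking of Beck and Teboulle, in which the Lipschitz estimate $L$ only increases (i.e. the prox parameter only decreases); the full backtracking strategy analyzed in the paper, which also lets the prox parameter move the other way, is not reproduced here:

```python
import numpy as np

def fista_backtracking(grad_f, f, prox_g, x0, L0=1.0, eta=2.0, n_iters=100):
    """FISTA with standard backtracking (Beck & Teboulle 2009) for min f(x) + g(x).
    prox_g(v, s) is the proximal operator of s*g at v."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(n_iters):
        while True:   # increase L until the quadratic model upper-bounds f at the prox point
            x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + grad_f(y) @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example: LASSO  min 0.5*||Ax - b||^2 + lam*||x||_1
A, b, lam = np.random.randn(20, 10), np.random.randn(20), 0.1
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
x = fista_backtracking(lambda x: A.T @ (A @ x - b),
                       lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                       lambda v, s: soft(v, lam * s),
                       np.zeros(10))
```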

An Implementation of an Algorithm for Nonlinear Programming Based on Piecewise Linear Models

This is a progress report on an implementation of the active-set method for nonlinear programming proposed in [6] that employs piecewise linear models in the active-set prediction phase. The motivation for this work is to develop an algorithm that is capable of solving large-scale problems, including those with a large reduced space. Unlike SQP methods, … Read more

Design and Verify: A New Scheme for Generating Cutting-Planes

A cutting-plane procedure for integer programming (IP) problems usually involves invoking a black-box procedure (such as the Gomory-Chvatal (GC) procedure) to compute a cutting-plane. In this paper, we describe an alternative paradigm for using the same cutting-plane black-box. This involves two steps. In the first step, we design an inequality $cx \leq d$ … Read more
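A toy illustration of the verify step, assuming (as the truncated formula suggests) that the designed inequality is $cx \le d$ with integer data, so that validity over the integer points of $P$ amounts to no such point satisfying $cx \ge d+1$; brute-force enumeration stands in for the cutting-plane black-box:

```python
import itertools
import numpy as np

# Designed cut: x1 + x2 <= 1 for P = {x in [0,1]^2 : x1 + x2 <= 1.5}.
c, d = np.array([1, 1]), 1
violating = [x for x in itertools.product([0, 1], repeat=2)
             if x[0] + x[1] <= 1.5 and c @ np.array(x) >= d + 1]
print("valid" if not violating else f"invalid, e.g. {violating[0]}")
```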

How bad is a gradient algorithm for linear programming?

In their 1972 paper ‘How good is the simplex algorithm?’, Klee and Minty present a class of problems that the simplex algorithm for linear programming (LP) cannot solve in polynomial time. Later developments resulted in algorithms by Khachiyan and Karmarkar that do solve LP in polynomial time, although the … Read more
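For reference, one common statement of the Klee-Minty construction (several equivalent variants exist; this particular one is an assumption, not taken from the paper under review):

```python
import numpy as np

def klee_minty(n):
    """Klee-Minty cube (one common variant):
        max  sum_j 10^(n-j) * x_j
        s.t. 2 * sum_{j<i} 10^(i-j) * x_j + x_i <= 100^(i-1),   x >= 0.
    The feasible region is a deformed n-cube with 2^n vertices, and the simplex
    method with Dantzig's pivot rule visits all of them."""
    c = np.array([10.0 ** (n - j) for j in range(1, n + 1)])
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        A[i - 1, i - 1] = 1.0
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 * 10.0 ** (i - j)
    return c, A, b   # maximize c @ x subject to A @ x <= b, x >= 0

c, A, b = klee_minty(3)
```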