Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm

We propose a Newton-CG primal proximal point algorithm for solving large-scale log-determinant optimization problems. Our algorithm employs the essential ideas of the proximal point algorithm, the Newton method, and the preconditioned conjugate gradient solver. When applying the Newton method to solve the inner sub-problem, we find that the log-determinant term plays the role of … Read more
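
For concreteness, a representative instance of this problem class (a sketch of one common standard form; the exact constraints treated in the paper may differ) is

```latex
\min_{X \succ 0} \; \langle C, X \rangle - \mu \log\det X
\quad \text{subject to} \quad \mathcal{A}(X) = b,
```

where $\mathcal{A}$ is a linear map on symmetric matrices and $\mu > 0$. The proximal point algorithm then generates a sequence of regularized inner sub-problems, each solved inexactly by the Newton method with CG as the linear-system solver.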

The Legendre-Fenchel Conjugate of the Product of Two Positive Definite Quadratic Forms

It is well-known that the Legendre-Fenchel conjugate of a positive definite quadratic form can be explicitly expressed as another positive definite quadratic form, and that the conjugate of the sum of several positive definite quadratic forms can be expressed via inf-convolution. However, the Legendre-Fenchel conjugate of the product of two positive definite quadratic forms is … Read more
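
For reference, the classical fact mentioned above: if $f(x) = \tfrac{1}{2}x^{T}Ax$ with $A \succ 0$, its Legendre-Fenchel conjugate is again a positive definite quadratic form,

```latex
f^{*}(y) = \sup_{x} \left\{ y^{T}x - \tfrac{1}{2}x^{T}Ax \right\}
         = \tfrac{1}{2}\, y^{T}A^{-1}y,
```

attained at $x = A^{-1}y$. It is the conjugate of the pointwise product of two such forms that admits no comparably simple closed form.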

Necessary Optimality Conditions for Two-Stage Stochastic Programs with Equilibrium Constraints

Developing first-order optimality conditions for a two-stage stochastic mathematical program with equilibrium constraints (SMPEC) whose second-stage problem has multiple equilibria/solutions is a challenging open problem. In this paper we take up this challenge by considering a general class of two-stage SMPECs whose equilibrium constraints are represented by a parametric variational inequality (where the first stage … Read more

Sparse Signal Reconstruction via Iterative Support Detection

We present a novel sparse signal reconstruction method “ISD”, aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical l_1 minimization approach. ISD addresses failed reconstructions of l_1 minimization due to insufficient measurements. It estimates a support set I from a current reconstruction and obtains a new … Read more
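
The support-then-reweight loop described above can be illustrated as follows. This is a minimal sketch under our own assumptions, not the authors' implementation: the LP reformulation via scipy.optimize.linprog, the relative detection threshold, and all function names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min_x sum_i w_i |x_i| s.t. A x = b as an LP in (x, t):
    minimize w^T t subject to A x = b and -t <= x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])        # objective acts on t only
    A_eq = np.hstack([A, np.zeros((m, n))])     # encodes A x = b
    A_ub = np.block([[np.eye(n), -np.eye(n)],   # x - t <= 0
                     [-np.eye(n), -np.eye(n)]]) # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

def isd(A, b, iters=4, thresh=0.1):
    """ISD sketch: alternate a truncated l1 minimization (zero weight on the
    detected support I) with re-estimation of I from the new reconstruction."""
    n = A.shape[1]
    w = np.ones(n)                                # first pass is plain l1
    x = np.zeros(n)
    for _ in range(iters):
        x = weighted_l1_min(A, b, w)
        I = np.abs(x) > thresh * np.abs(x).max()  # detected support
        w = np.where(I, 0.0, 1.0)                 # drop penalty on I
    return x
```

Because the penalty is removed on the detected support, each pass solves a truncated $l_1$ problem, which is what allows the method to get by with fewer measurements than plain $l_1$ minimization when the detection is reliable.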

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions

In this paper, we apply the quadratic penalization technique to derive a strong Lagrangian duality property for an inequality constrained invex program. Our results extend and improve the corresponding results in the literature. Citation Bazaraa, M. S. and Shetty, C. M., Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, 1979. Ben-Israel, A. and … Read more

Alternating Direction Methods for Sparse Covariance Selection

The mathematical model of the widely used sparse covariance selection problem (SCSP) is an NP-hard combinatorial problem, but it can be well approximated by a convex relaxation in which the maximum likelihood estimation is penalized by the $L_1$ norm. This convex relaxation, however, is still numerically challenging, especially for large-scale cases. Recently, some efficient first-order methods … Read more
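
The convex relaxation referred to above is commonly written as the $L_1$-penalized maximum likelihood problem

```latex
\max_{X \succ 0} \; \log\det X - \langle S, X \rangle - \rho \|X\|_{1},
```

where $S$ is the sample covariance matrix, $\|X\|_{1} = \sum_{i,j}|X_{ij}|$, and $\rho > 0$ controls the sparsity of the estimated inverse covariance.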

Stability of error bounds for semi-infinite convex constraint systems

In this paper, we are concerned with the stability of the error bounds for semi-infinite convex constraint systems. Roughly speaking, the error bound of a system of inequalities is said to be stable if all its “small” perturbations admit a (local or global) error bound. We first establish subdifferential characterizations of the stability of error … Read more
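
To fix ideas, one standard formulation: for a semi-infinite convex system $S := \{x : f_t(x) \le 0,\ t \in T\}$, a (global) error bound with constant $\tau > 0$ means

```latex
d(x, S) \;\le\; \tau \, \sup_{t \in T} \big[f_t(x)\big]_{+} \quad \text{for all } x,
```

where $[\alpha]_{+} = \max\{\alpha, 0\}$ and $d(x,S)$ denotes the distance from $x$ to $S$; stability asks that an inequality of this type survive small perturbations of the functions $f_t$.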

Estimate sequence methods: extensions and approximations

The estimate sequence approach offers an interesting rereading of a number of acceleration schemes proposed by Nesterov. It seems to us that this is the most appropriate descriptive framework in which to analyze the sensitivity of these schemes to approximations. We develop in this work a simple, self-contained, and unified framework for … Read more
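
For readers unfamiliar with the device: Nesterov calls a pair of sequences $\{\phi_k(x)\}$, $\{\lambda_k\}$ with $\lambda_k \to 0$ an estimate sequence of $f$ if

```latex
\phi_k(x) \;\le\; (1 - \lambda_k)\, f(x) + \lambda_k\, \phi_0(x) \quad \text{for all } x,
```

for then any iterates satisfying $f(x_k) \le \min_x \phi_k(x)$ obey $f(x_k) - f^{*} \le \lambda_k \big(\phi_0(x^{*}) - f^{*}\big)$, so the convergence rate of a scheme is read off from how fast $\lambda_k$ vanishes.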

Alternating Direction Augmented Lagrangian Methods for semidefinite programming

We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables … Read more
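
For reference, the standard-form primal SDP and its dual, which supply the blocks the scheme alternates over, are

```latex
\min_{X \succeq 0} \; \langle C, X \rangle \;\; \text{s.t.} \;\; \mathcal{A}(X) = b
\qquad \text{and} \qquad
\max_{y,\; S \succeq 0} \; b^{T}y \;\; \text{s.t.} \;\; \mathcal{A}^{*}(y) + S = C,
```

where $y$ collects the Lagrange multipliers of the linear constraints and $S$ is the dual slack matrix, the two groups of variables over which the dual augmented Lagrangian is minimized in turn.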

SINCO – a greedy coordinate ascent method for sparse inverse covariance selection problem

In this paper, we consider the sparse inverse covariance selection problem, which is equivalent to structure recovery of a Markov Network over Gaussian variables. We introduce a simple but efficient greedy algorithm, called SINCO, for solving the Sparse INverse COvariance problem. Our approach is based on a coordinate ascent method, which naturally preserves the sparsity … Read more
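
A naive version of greedy coordinate ascent on the penalized log-likelihood can be sketched as follows. This is our illustrative simplification, not the SINCO implementation: SINCO exploits closed-form determinant updates, whereas this sketch simply re-evaluates the objective with a scalar search, and the function names and step bounds are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def penalized_loglik(X, S, lam):
    """l1-penalized Gaussian log-likelihood: log det X - tr(S X) - lam*||X||_1."""
    sign, logdet = np.linalg.slogdet(X)
    if sign <= 0:
        return -np.inf                      # X must stay positive definite
    return logdet - np.trace(S @ X) - lam * np.abs(X).sum()

def greedy_coordinate_step(X, S, lam):
    """One greedy step: for each symmetric coordinate pair (i, j), optimize the
    objective over X + t*(E_ij + E_ji) in the scalar t, then apply the single
    best move. Steps that lose positive definiteness are rejected via a large
    penalty value."""
    n = X.shape[0]
    f0 = penalized_loglik(X, S, lam)
    best_gain, best_X = 0.0, None
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0         # symmetric coordinate direction
            def neg_f(t):
                val = penalized_loglik(X + t * E, S, lam)
                return 1e12 if not np.isfinite(val) else -val
            res = minimize_scalar(neg_f, bounds=(-1.0, 1.0), method="bounded")
            gain = -res.fun - f0            # objective improvement of this move
            if gain > best_gain:
                best_gain, best_X = gain, X + res.x * E
    return best_X if best_X is not None else X
```

Touching one symmetric coordinate pair per step, and only when it improves the objective, is what lets this style of ascent preserve sparsity in the iterates.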