Scalable Nonlinear Programming Via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. As opposed to existing algorithmic approaches to NLP, the approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and …
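The abstract does not specify the penalty, so as a rough illustration here is a minimal sketch of one classical exact differentiable penalty, Fletcher's augmented Lagrangian penalty for equality constraints, minimized with a SciPy trust-region method. The toy objective, constraint, and penalty parameter mu are assumptions for illustration, not the paper's construction.

```python
# Sketch (not the paper's method): Fletcher's exact differentiable penalty
#   phi(x) = f(x) - lambda(x)' c(x) + (mu/2) ||c(x)||^2,
# where lambda(x) solves the least-squares system A A' lambda = A grad f.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + x[1]**2 + x[0] * x[1]

def grad_f(x):
    return np.array([2 * x[0] + x[1], 2 * x[1] + x[0]])

def c(x):                      # equality constraint c(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):                  # constraint Jacobian A(x)
    return np.array([[1.0, 1.0]])

mu = 10.0                      # penalty parameter (assumed large enough)

def lam(x):                    # least-squares multiplier estimate
    A = jac_c(x)
    return np.linalg.solve(A @ A.T, A @ grad_f(x))

def phi(x):                    # exact differentiable penalty
    return f(x) - lam(x) @ c(x) + 0.5 * mu * (c(x) @ c(x))

# Minimize phi with a trust-region method (derivatives by finite
# differences here; a scalable implementation would supply them).
res = minimize(phi, np.array([2.0, -1.0]), method='trust-constr')
print(res.x)                   # approx. (0.5, 0.5), the constrained minimizer
```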

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

The formulation min f(x) + g(y) subject to Ax + By = b, where f and g are extended-value convex functions, arises in many application areas such as signal and image processing, statistics, and machine learning, either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous …
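For readers unfamiliar with the template, here is a minimal sketch of the classic (non-generalized) ADMM on a lasso instance of this formulation, with A = I, B = -I, b = 0; the data M, d and the parameters tau, rho are illustrative assumptions.

```python
# Sketch: ADMM for min 0.5*||Mx - d||^2 + tau*||y||_1  s.t.  x - y = 0.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((40, 100))
d = rng.standard_normal(40)
tau, rho = 0.1, 1.0

n = M.shape[1]
x = np.zeros(n); y = np.zeros(n); u = np.zeros(n)   # u: scaled dual variable
MtM = M.T @ M + rho * np.eye(n)                     # cached x-update matrix
Mtd = M.T @ d

def shrink(v, k):   # soft-thresholding, the prox of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for it in range(200):
    x = np.linalg.solve(MtM, Mtd + rho * (y - u))   # x-minimization step
    y = shrink(x + u, tau / rho)                    # y-minimization step
    u = u + x - y                                   # dual update
print(np.linalg.norm(x - y))   # primal residual, should be near zero
```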

Approximation of Matrix Rank Function and Its Application to Matrix Rank Minimization

The matrix rank minimization problem arises in many fields such as control, signal processing, and system identification. However, the problem is NP-hard in general and computationally difficult to solve directly in practice. In this paper, we provide a new class of approximation functions for the matrix rank, and the corresponding approximation …
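The paper's approximation family is truncated here; a common smooth surrogate of the same flavor is r_eps(X) = sum_i sigma_i(X)^2 / (sigma_i(X)^2 + eps), which tends to rank(X) as eps -> 0. The sketch below uses this surrogate purely as an illustrative assumption, not necessarily the family proposed in the paper.

```python
# Sketch: a smooth surrogate for the rank function based on singular values.
import numpy as np

def rank_approx(X, eps=1e-3):
    s = np.linalg.svd(X, compute_uv=False)   # singular values of X
    return np.sum(s**2 / (s**2 + eps))       # ~ number of "large" sigma_i

X = np.outer(np.arange(1, 5), np.arange(1, 4)).astype(float)  # rank-1 matrix
print(np.linalg.matrix_rank(X), rank_approx(X))               # 1 vs ~1.0
```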

On the Exhaustivity of Simplicial Partitioning

During the last 40 years, simplicial partitioning has proven highly useful, including in the field of Nonlinear Optimisation. In this article, we consider results on the exhaustivity of simplicial partitioning schemes. We consider conjectures on this exhaustivity which seem at first glance to be true (two of which have been stated as …
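As a toy illustration of the exhaustivity property in question (nested subsimplices whose diameters tend to zero), here is a sketch of longest-edge bisection, the canonical exhaustive rule; it is not a reconstruction of the schemes or conjectures studied in the article.

```python
# Sketch: longest-edge bisection of a simplex stored as a vertex array.
import itertools
import numpy as np

def bisect_longest_edge(V):
    """Split simplex V (rows = vertices) at the midpoint of its longest edge."""
    i, j = max(itertools.combinations(range(len(V)), 2),
               key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
    mid = 0.5 * (V[i] + V[j])
    A, B = V.copy(), V.copy()
    A[j] = mid   # child 1: replace vertex j by the edge midpoint
    B[i] = mid   # child 2: replace vertex i by the edge midpoint
    return A, B

def diameter(V):
    return max(np.linalg.norm(a - b)
               for a, b in itertools.combinations(V, 2))

S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
for _ in range(12):              # repeatedly refine the first child
    S, _ = bisect_longest_edge(S)
print(diameter(S))               # shrinks toward 0: exhaustive refinement
```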

A Fair, Sequential Multiple Objective Optimization Algorithm

In multi-objective optimization the goal is to reach a point which is Pareto efficient. However, we usually encounter many such points, and choosing a point among them poses another problem. In many applications we are required to choose a point having a good spread over all objective functions, which is a direct consequence of the …

A Probabilistic-Driven Search Algorithm for Solving a Class of Optimization Problems

In this paper we introduce a new numerical optimization technique, a Probabilistic-Driven Search Algorithm. The algorithm has the following characteristics: 1) in each iteration of the loop, the algorithm changes the values of only k variables to find a new solution better than the current one; 2) for each variable of the solution of the problem, …
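A minimal sketch of the loop structure just described: each iteration perturbs exactly k randomly chosen variables and keeps the trial point only if it improves the objective. The sphere objective, Gaussian step, and greedy acceptance rule are assumptions, since the abstract's probabilistic rule is truncated.

```python
# Sketch: change k variables per iteration, accept only improvements.
import numpy as np

def pds_minimize(f, x0, k=2, step=0.5, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), f(x0)
    for _ in range(iters):
        idx = rng.choice(len(x), size=k, replace=False)  # pick k variables
        trial = x.copy()
        trial[idx] += step * rng.standard_normal(k)      # perturb only those
        ft = f(trial)
        if ft < fx:                                      # greedy acceptance
            x, fx = trial, ft
    return x, fx

x, fx = pds_minimize(lambda v: np.sum(v**2), np.full(10, 3.0))
print(fx)   # close to 0 for this convex test function
```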

A Newton’s method for the continuous quadratic knapsack problem

We introduce a new efficient method to solve the continuous quadratic knapsack problem. This is a highly structured quadratic program that appears in different contexts. The method converges after O(n) iterations with overall arithmetic complexity O(n²). Numerical experiments show that in practice the method converges in a small number of iterations with overall linear complexity, …
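The one-dimensional reduction behind such methods can be sketched: for min 0.5*sum_i d_i x_i^2 - a_i x_i subject to sum_i b_i x_i = r and l <= x <= u, the KKT conditions give x_i(lam) = clip((a_i + lam*b_i)/d_i, l_i, u_i), and a Newton iteration is applied to the scalar equation b'x(lam) = r. The random data and the safeguard-free iteration below are simplifying assumptions, not the paper's exact algorithm.

```python
# Sketch: semismooth Newton on the multiplier of the knapsack constraint.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
d = rng.uniform(1, 2, n); a = rng.standard_normal(n)
b = rng.uniform(0.5, 1.5, n); l = np.zeros(n); u = np.ones(n)
r = 0.3 * (b @ u)                    # a feasible right-hand side

def x_of(lam):                       # componentwise minimizer for fixed lam
    return np.clip((a + lam * b) / d, l, u)

lam = 0.0
for _ in range(50):
    x = x_of(lam)
    g = b @ x - r                    # residual of the knapsack constraint
    if abs(g) < 1e-10:
        break
    free = (x > l) & (x < u)         # only free variables contribute slope
    slope = np.sum(b[free]**2 / d[free])
    if slope == 0.0:                 # all variables at bounds: stop
        break
    lam -= g / slope                 # Newton step on g(lam) = 0
print(abs(b @ x_of(lam) - r))        # constraint residual, near zero
```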

On valid inequalities for quadratic programming with continuous variables and binary indicators

In this paper we study valid inequalities for a fundamental set that involves a continuous vector variable x in [0,1]^n, its associated quadratic form xx', and its binary indicators. This structure appears when deriving strong relaxations for mixed-integer quadratic programs (MIQPs). We treat valid inequalities for this set as lifted from QPB, which …
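One plausible formalization of the "fundamental set" described above, written out for concreteness (the paper's exact definition may differ):

```latex
% Assumed formalization: x is the continuous vector, X stands in for the
% quadratic form xx', and z_i is the indicator forcing x_i = 0 when z_i = 0.
\[
  S = \left\{ (x, X, z) \in [0,1]^n \times \mathbb{S}^n \times \{0,1\}^n
      \;:\; X = x x^{\top},\; 0 \le x_i \le z_i,\ i = 1,\dots,n \right\}.
\]
```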

Computational aspects of risk-averse optimisation in two-stage stochastic models

In this paper we argue for aggregated models in decomposition schemes for two-stage stochastic programming problems. We observe that analogous schemes have proved effective for single-stage risk-averse problems and for general linear programming problems. A major drawback of the aggregated approach for two-stage problems is that an aggregated master problem cannot contain all the information …

An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each …
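A minimal sketch of the alternating-direction idea on 1-D total-variation denoising, min_x 0.5*||x - d||^2 + tau*||Dx||_1 with the splitting Dx = z; the nonmonotone line search used by the paper's algorithm is omitted here, and the signal, tau, and rho are illustrative assumptions.

```python
# Sketch: alternating minimization of an augmented Lagrangian for 1-D TV.
import numpy as np

n = 200
t = np.linspace(0, 1, n)
signal = (t > 0.3).astype(float) - (t > 0.7)          # piecewise constant
rng = np.random.default_rng(2)
d = signal + 0.1 * rng.standard_normal(n)             # noisy observation

D = np.diff(np.eye(n), axis=0)                        # difference operator
tau, rho = 0.1, 2.0
x = d.copy(); z = D @ x; w = np.zeros(n - 1)          # w: scaled multiplier
I_rhoDtD = np.eye(n) + rho * D.T @ D                  # x-update matrix

def shrink(v, k):                                     # prox of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(300):
    x = np.linalg.solve(I_rhoDtD, d + rho * D.T @ (z - w))  # x-step
    z = shrink(D @ x + w, tau / rho)                         # z-step
    w = w + D @ x - z                                        # multiplier step
print(np.linalg.norm(D @ x - z))   # splitting residual, near zero
```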