A Probabilistic-Driven Search Algorithm for solving a Class of Optimization Problems

In this paper we introduce a new numerical optimization technique, a Probabilistic-Driven Search Algorithm. This algorithm has the following characteristics: 1) In each iteration of the loop, the algorithm changes the values of only k variables to find a new solution better than the current one; 2) For each variable of the solution of the problem, … Read more
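The improve-only, k-variable update described above can be sketched as a simple local search. This is an illustrative reading of the abstract, not the paper's exact algorithm; the names `probabilistic_search`, `step`, and `iters` are assumptions.

```python
import random

def probabilistic_search(f, x0, k=2, step=0.2, iters=2000, seed=0):
    """Sketch of a k-variable improvement search: each iteration perturbs
    k randomly chosen coordinates and keeps the candidate only if it
    improves the objective. Illustrative, not the paper's method."""
    rng = random.Random(seed)
    x = list(x0)
    best = f(x)
    for _ in range(iters):
        idx = rng.sample(range(len(x)), k)       # pick k variables to change
        cand = list(x)
        for i in idx:
            cand[i] += rng.uniform(-step, step)  # random perturbation
        val = f(cand)
        if val < best:                           # accept only improvements
            x, best = cand, val
    return x, best
```

Because only k coordinates change per iteration, each candidate evaluation touches a small part of the solution, which is the point of the scheme.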

A Newton’s method for the continuous quadratic knapsack problem

We introduce a new efficient method to solve the continuous quadratic knapsack problem. This is a highly structured quadratic program that appears in different contexts. The method converges after O(n) iterations with overall arithmetic complexity O(n²). Numerical experiments show that in practice the method converges in a small number of iterations with overall linear complexity, … Read more
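The paper's contribution is a Newton iteration; as an illustrative baseline for the same problem class, the standard approach of searching for the equality multiplier also works. The sketch below uses bisection (not the paper's Newton method) on min ½ xᵀdiag(d)x − aᵀx subject to bᵀx = r and lo ≤ x ≤ hi, assuming d, b > 0; all names are assumptions.

```python
def cqk_bisection(d, a, b, r, lo, hi, tol=1e-10):
    """Baseline solver for the continuous quadratic knapsack problem via
    bisection on the multiplier of the equality constraint. The optimal x
    is a clipped linear function of the multiplier lam."""
    def x_of(lam):
        return [min(max((ai + lam * bi) / di, li), ui)
                for ai, bi, di, li, ui in zip(a, b, d, lo, hi)]

    def phi(lam):  # monotone in lam since b, d > 0
        return sum(bi * xi for bi, xi in zip(b, x_of(lam))) - r

    left, right = -1e6, 1e6
    while right - left > tol:
        mid = 0.5 * (left + right)
        if phi(mid) < 0:
            left = mid
        else:
            right = mid
    return x_of(0.5 * (left + right))
```

Replacing the bisection step with a semismooth Newton step on phi is what gives the O(n) iteration bound the abstract refers to.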

Computational aspects of risk-averse optimisation in two-stage stochastic models

In this paper we argue for aggregated models in decomposition schemes for two-stage stochastic programming problems. We observe that analogous schemes proved effective for single-stage risk-averse problems, and for general linear programming problems. A major drawback of the aggregated approach for two-stage problems is that an aggregated master problem cannot contain all the information … Read more

On valid inequalities for quadratic programming with continuous variables and binary indicators

In this paper we study valid inequalities for a fundamental set that involves a continuous vector variable x in [0,1]^n, its associated quadratic form x x’ and its binary indicators. This structure appears when deriving strong relaxations for mixed integer quadratic programs (MIQPs). We treat valid inequalities for this set as lifted from QPB, which … Read more

On solving multistage stochastic programs with coherent risk measures

We consider a class of multistage stochastic linear programs in which at each stage a coherent risk measure of future costs is to be minimized. A general computational approach based on dynamic programming is derived that can be shown to converge to an optimal policy. By computing an inner approximation to future cost functions, we … Read more
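The abstract does not fix a particular coherent risk measure; a standard example is Conditional Value-at-Risk (CVaR). The sketch below evaluates CVaR of a discrete loss distribution, purely as an illustration of the risk measures involved, not of the paper's dynamic programming scheme.

```python
def cvar(losses, probs, alpha=0.9):
    """CVaR_alpha of a discrete loss distribution: the expected loss in the
    worst (1 - alpha) probability tail, with fractional weight on the
    boundary atom. CVaR is a coherent risk measure."""
    pairs = sorted(zip(losses, probs), key=lambda p: -p[0])  # worst losses first
    tail = 1.0 - alpha
    acc, rem = 0.0, tail
    for loss, p in pairs:
        w = min(p, rem)        # take at most the remaining tail mass
        acc += w * loss
        rem -= w
        if rem <= 1e-15:
            break
    return acc / tail
```

In the multistage setting, a measure like this is applied stage-wise to the distribution of future costs, which is what makes dynamic programming recursions available.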

An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs) with a particular structure. The algorithm effectively combines an alternating direction technique with a nonmonotone line search to minimize the augmented Lagrangian function at each … Read more
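A minimal alternating-direction augmented Lagrangian sketch for one instance of this problem class, 1-D total variation denoising, is given below. This is ADMM with the splitting z = Dx, not the paper's exact algorithm (which adds a nonmonotone line search); all parameter names are illustrative.

```python
import numpy as np

def tv_denoise_admm(y, mu=1.0, rho=1.0, iters=300):
    """ADMM sketch for 1-D TV denoising:
        min_x  0.5 * ||x - y||^2 + mu * ||D x||_1,
    split as z = Dx. The augmented Lagrangian is minimized alternately
    over x (a linear solve) and z (soft-thresholding)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference operator
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                     # scaled dual variable
    A = np.eye(n) + rho * D.T @ D           # x-subproblem normal matrix
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - mu / rho, 0.0)  # soft-threshold
        u += D @ x - z
    return x
```

The x-subproblem stays cheap because its system matrix is fixed across iterations and could be factored once.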

Optimal Execution Under Jump Models For Uncertain Price Impact

In the execution cost problem, an investor wants to minimize the total expected cost and risk in the execution of a portfolio of risky assets to achieve desired positions. A major source of the execution cost comes from price impacts of both the investor’s own trades and other concurrent institutional trades. Indeed price impact of … Read more

Learning Circulant Sensing Kernels

In signal acquisition, Toeplitz and circulant matrices are widely used as sensing operators. They correspond to discrete convolutions and are easily or even naturally realized in various applications. For compressive sensing, recent work has used random Toeplitz and circulant sensing matrices and proved their efficiency in theory, in computer simulations, and through physical … Read more
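The practical appeal of circulant sensing operators is that applying one is a circular convolution, computable with the FFT in O(n log n) without ever forming the n×n matrix. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def circulant_apply(c, x):
    """Apply the circulant matrix whose first column is c to the vector x.
    Multiplication by a circulant matrix equals circular convolution, so it
    diagonalizes under the DFT: C x = ifft(fft(c) * fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```

Compressive-sensing measurements are then typically a random subset of the entries of `circulant_apply(c, x)` for a random (or learned) kernel c.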

A Block Coordinate Descent Method for Regularized Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review some of its interesting examples and propose a generalized block coordinate descent method. (Using proximal updates, we further allow non-convexity over some blocks.) Under certain conditions, we show that … Read more
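Nonnegative matrix factorization is a canonical multi-convex example: ½‖M − WH‖² is nonconvex jointly but convex in W for fixed H and vice versa, so each block can be updated by a projected (proximal-style) gradient step. The sketch below is an illustrative block coordinate descent scheme, not the paper's exact method; all names are assumptions.

```python
import numpy as np

def nmf_bcd(M, r=2, iters=500, seed=0):
    """Block coordinate descent for min 0.5 * ||M - W H||_F^2, W, H >= 0.
    Each block takes one projected gradient step with stepsize 1/L, where
    L is that block's Lipschitz constant, so the objective never increases."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        LW = np.linalg.norm(H @ H.T, 2) + 1e-12            # Lipschitz const of grad_W
        W = np.maximum(W - ((W @ H - M) @ H.T) / LW, 0.0)  # projected step on W
        LH = np.linalg.norm(W.T @ W, 2) + 1e-12            # Lipschitz const of grad_H
        H = np.maximum(H - (W.T @ (W @ H - M)) / LH, 0.0)  # projected step on H
    return W, H
```

Replacing the projection step with a more general proximal operator is what lets schemes of this kind handle non-convex regularizers on some blocks.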

Characterizations of Full Stability in Constrained Optimization

This paper is mainly devoted to the study of the so-called full Lipschitzian stability of local solutions to finite-dimensional parameterized problems of constrained optimization, which has been well recognized as a very important property from the viewpoints of both optimization theory and its applications. Based on second-order generalized differential tools of variational analysis, we obtain … Read more