A dual Newton strategy for scenario decomposition in robust multi-stage MPC

This paper considers the solution of tree-structured Quadratic Programs (QPs) as they may arise in multi-stage Model Predictive Control (MPC). In this context, sampling the uncertainty at prescribed decision points gives rise to different scenarios that are linked to each other via the so-called non-anticipativity constraints. Previous work suggests dualizing these constraints and … Read more
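
As a generic illustration of the setting (not the paper's exact formulation), per-scenario variables $z^i$ are coupled only through linear non-anticipativity constraints, and dualizing that coupling yields a dual function that separates across scenarios:
$$
\min_{z^1,\dots,z^S}\ \sum_{i=1}^{S}\Big(\tfrac12 (z^i)^\top H_i z^i + g_i^\top z^i\Big)\quad\text{s.t.}\quad E_i z^i = e_i,\ \ \sum_{i=1}^{S} C_i z^i = 0,
$$
$$
d(\lambda) \;=\; \sum_{i=1}^{S}\ \min_{E_i z^i = e_i}\Big\{\tfrac12 (z^i)^\top H_i z^i + g_i^\top z^i + \lambda^\top C_i z^i\Big\},
$$
so evaluating $d(\lambda)$ amounts to solving $S$ independent QPs, and the outer maximization of $d$ over $\lambda$ can be attacked with a Newton-type method, which is the flavor of strategy the title refers to.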

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

A distributionally robust joint chance constraint involves a set of uncertain linear inequalities which can be violated up to a given probability threshold $\epsilon$, over a given family of probability distributions of the uncertain parameters. A conservative approximation of a joint chance constraint, often referred to as a Bonferroni approximation, uses the union bound to … Read more
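
In its standard (non-optimized) form, sketched here only for orientation, the Bonferroni approximation splits the risk level $\epsilon$ across the $m$ individual constraints:
$$
\sup_{P\in\mathcal{P}}\ P\big(a_i(\xi)^\top x > b_i(\xi)\big) \;\le\; \epsilon_i\ \ (i=1,\dots,m),\quad \sum_{i=1}^{m}\epsilon_i \;\le\; \epsilon
\quad\Longrightarrow\quad
\inf_{P\in\mathcal{P}}\ P\big(a_i(\xi)^\top x \le b_i(\xi),\ i=1,\dots,m\big) \;\ge\; 1-\epsilon,
$$
since by the union bound the probability that at least one inequality is violated is at most $\sum_i \epsilon_i \le \epsilon$. Optimizing how $\epsilon$ is split, rather than fixing $\epsilon_i = \epsilon/m$, is what tightens the approximation.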

Computing Weighted Analytic Center for Linear Matrix Inequalities Using Infeasible Newton’s Method

We study the problem of computing the weighted analytic center for a system of linear matrix inequality constraints. The problem can be solved using the standard Newton's method. However, this approach requires that a starting point in the interior of the feasible region be given, or that a Phase I problem be solved. We address the problem … Read more
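
For orientation, a common definition of the object in question (the paper's precise setup may differ in details): given affine matrix functions $A^{(j)}(x) = A_0^{(j)} + x_1 A_1^{(j)} + \cdots + x_n A_n^{(j)}$ and positive weights $\omega_j$, the weighted analytic center is
$$
x_{\mathrm{ac}}(\omega) \;=\; \arg\max_{x}\ \sum_{j=1}^{q} \omega_j \log\det A^{(j)}(x)\quad\text{s.t.}\quad A^{(j)}(x) \succ 0,\ \ j=1,\dots,q.
$$
A feasible-start Newton method for this concave maximization needs a strictly feasible initial point (or a Phase I computation to find one), which is precisely what an infeasible-start variant avoids.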

Convergence Study on the Proximal Alternating Direction Method with Larger Step Size

The alternating direction method of multipliers (ADMM) is a popular method for separable convex programming with linear constraints, and the proximal ADMM is an important variant of it. Previous studies show that the relaxation factor $\gamma\in (0, \frac{1+\sqrt{5}}{2})$ proposed by Fortin and Glowinski for the ADMM remains valid for the proximal ADMM. In this paper, we … Read more
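
A generic form of the iteration (sketched for the two-block problem $\min_{x,y}\, f(x)+g(y)$ s.t. $Ax+By=b$; notation is illustrative, not taken from the paper), with penalty $\beta>0$, proximal matrices $R\succeq 0$, $S\succeq 0$, and relaxation factor $\gamma$:
$$
\begin{aligned}
x^{k+1} &= \arg\min_x\ f(x) - (\lambda^k)^\top(Ax+By^k-b) + \tfrac{\beta}{2}\|Ax+By^k-b\|^2 + \tfrac12\|x-x^k\|_R^2,\\
y^{k+1} &= \arg\min_y\ g(y) - (\lambda^k)^\top(Ax^{k+1}+By-b) + \tfrac{\beta}{2}\|Ax^{k+1}+By-b\|^2 + \tfrac12\|y-y^k\|_S^2,\\
\lambda^{k+1} &= \lambda^k - \gamma\beta\,(Ax^{k+1}+By^{k+1}-b).
\end{aligned}
$$
The question is how large the relaxation factor $\gamma$ in the multiplier update may be taken while convergence is retained.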

On Nonconvex Decentralized Gradient Descent

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for {convex} consensus optimization. However, our understanding of their behavior for \emph{nonconvex} consensus optimization is more limited. When we lose convexity, we cannot expect our algorithms to always return global solutions, though they sometimes still do. Somewhat surprisingly, … Read more
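
For reference, the classical decentralized gradient descent (DGD) iteration for $\min_x \sum_{i=1}^{n} f_i(x)$ over a network with a doubly stochastic mixing matrix $W$ reads (a generic sketch; the step-size rules and assumptions analyzed in the paper may differ):
$$
x_i^{k+1} \;=\; \sum_{j=1}^{n} W_{ij}\, x_j^{k} \;-\; \alpha\, \nabla f_i\big(x_i^{k}\big),\qquad i=1,\dots,n,
$$
where each agent $i$ keeps a local copy $x_i^k$, averages it with its neighbors' copies, and takes a gradient step on its own objective $f_i$; consensus and convergence then hinge on $\alpha$, $W$, and the (possibly nonconvex) structure of the $f_i$.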

A Parameterized Proximal Point Algorithm for Separable Convex Optimization

In this paper, we develop a Parameterized Proximal Point Algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case $O(1/t)$ convergence rate, where $t$ is the iteration number. By properly choosing the algorithm parameters, numerical experiments … Read more
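
As generic background (the parameterization that defines the P-PPA is not reproduced here), the classical proximal point step for minimizing a closed convex function $\theta$ over a convex set $\Omega$ is
$$
w^{k+1} \;=\; \arg\min_{w\in\Omega}\ \Big\{\theta(w) + \tfrac12\|w - w^{k}\|_{M}^{2}\Big\},\qquad M \succ 0;
$$
for linearly constrained separable problems the step is typically applied to a primal-dual (saddle-point) reformulation, and the choice of $M$ determines how easily the resulting subproblems decouple.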

Scenario grouping and decomposition algorithms for chance-constrained programs

For a chance-constrained program with finitely many scenarios, a lower bound is given by a quantile of the sorted optimal objective values of the scenario subproblems. This quantile bound can be improved by grouping scenarios into subsets, at the expense of solving larger subproblems. The quality of the bound depends on how the scenarios are grouped. In this paper, … Read more
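
Concretely, in a standard form of the bound (the grouping refinement builds on it): for a minimization problem with $N$ equiprobable scenarios, scenario-subproblem optimal values sorted as $v_{(1)}\le\cdots\le v_{(N)}$, and risk level $\epsilon$, the optimal value $z^*$ satisfies
$$
z^{*} \;\ge\; v_{(\lceil (1-\epsilon)N\rceil)},
$$
because any feasible solution must satisfy the constraints of at least $\lceil (1-\epsilon)N\rceil$ scenarios and costs at least the subproblem value of every scenario it satisfies. Grouping scenarios and solving one subproblem per group enforces more constraints per subproblem, so the resulting values can only be larger and the bound can only improve.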

A MIQCP formulation for B-spline constraints

This paper presents a mixed-integer quadratically constrained programming (MIQCP) formulation for B-spline constraints. The formulation can be used to obtain an exact MIQCP reformulation of any spline-constrained optimization problem, provided that the polynomial spline functions are continuous. This reformulation allows practitioners to use a general-purpose MIQCP solver, instead of a special-purpose spline solver, when solving … Read more
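
One generic way to see why binaries plus quadratic constraints suffice (an illustrative big-M sketch, not necessarily the paper's formulation): a continuous spline $s$ with knots $t_0<\cdots<t_n$ and polynomial pieces $p_j$ of degree at most two, appearing in a constraint $y = s(x)$, can be encoded as
$$
\sum_{j} z_j = 1,\ \ z_j\in\{0,1\},\qquad t_j - M(1-z_j) \;\le\; x \;\le\; t_{j+1} + M(1-z_j),\qquad -M(1-z_j) \;\le\; y - p_j(x) \;\le\; M(1-z_j),
$$
so that $z_j=1$ forces $x\in[t_j,t_{j+1}]$ and $y=p_j(x)$, and every constraint is linear or quadratic in the continuous variables; higher-degree pieces require auxiliary variables (e.g. $w=x^2$) to stay within the quadratic format.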

A TVSCAD approach for image deblurring with impulsive noise

We consider the image deblurring problem in the presence of impulsive noise. It is known that \emph{total variation} (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For a high level of impulsive noise, TVL1 performs poorly. The reason is that all data, both … Read more
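
For reference, the TVL1 model mentioned here (with $K$ the blurring operator, $f$ the observed image, and $\mu>0$ a fidelity weight) is
$$
\min_{u}\ \mathrm{TV}(u) \;+\; \mu\,\|Ku - f\|_{1},
$$
whose $\ell_1$ data-fitting term applies the same convex penalty to every pixel residual; as the acronym TVSCAD suggests, the idea is to replace this term with a nonconvex SCAD-type penalty so that large residuals, which are likely outliers caused by impulsive noise, are penalized less aggressively.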

On High-order Model Regularization for Constrained Optimization

In two recent papers, regularization methods for minimization based on Taylor polynomial models were proposed that rely only on Hölder conditions on the higher-order derivatives employed. Grapiglia and Nesterov considered cubic regularization with a sufficient descent condition that uses the current gradient and resembles the classical Armijo criterion. Cartis, Gould, and Toint used Taylor … Read more
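
As background (the constrained setting of this paper adds further ingredients), a $p$-th order regularization method minimizes at each iteration a regularized Taylor model of the objective $f$ around the current iterate $x_k$,
$$
m_k(s) \;=\; f(x_k) + \sum_{j=1}^{p} \tfrac{1}{j!}\,\nabla^j f(x_k)[s]^j \;+\; \tfrac{\sigma_k}{p+1}\,\|s\|^{p+1},
$$
with $\sigma_k>0$ adjusted adaptively; a Hölder condition on $\nabla^p f$ (rather than Lipschitz continuity) is enough to control the gap between $f(x_k+s)$ and $m_k(s)$ and hence to derive worst-case complexity bounds.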