A framework for simultaneous aerodynamic design optimization in the presence of chaos

Integrating existing solvers for unsteady partial differential equations (PDEs) into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady PDE constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence …
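
To illustrate the one-shot idea, here is a minimal Python sketch of a simultaneous iteration: one fixed-point sweep of the state equation, one adjoint sweep, and one design step are interleaved, instead of nesting a fully converged simulation inside every design update. All names (G, Gu, Gp, Ju, Jp, tau) are hypothetical placeholders, not the paper's interface.

    import numpy as np

    def one_shot(G, Gu, Gp, Ju, Jp, u, p, steps=200, tau=0.1):
        # G:  fixed-point map of the (reformulated) PDE, u_new = G(u, p)
        # Gu, Gp: Jacobians of G w.r.t. state u and design p
        # Ju, Jp: partial gradients of the objective J
        lam = np.zeros_like(u)                  # adjoint variable
        for _ in range(steps):
            u = G(u, p)                         # one state update
            lam = Ju(u, p) + Gu(u, p).T @ lam   # one adjoint update
            grad = Jp(u, p) + Gp(u, p).T @ lam  # approximate reduced gradient
            p = p - tau * grad                  # one design step
        return u, p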

Backward Step Control for Global Newton-type Methods

We present and analyze a new damping approach called backward step control for the globalization of the convergence of Newton-type methods for the numerical solution of nonlinear root-finding problems. We provide and discuss reasonable assumptions that imply convergence of backward step control on the basis of generalized Newton paths in conjunction with a backward analysis …
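
For context, here is a generic damped Newton-type iteration in Python. The damping rule below is a plain residual-based backtracking used only as a stand-in; it is not the backward step control rule analyzed in the paper.

    import numpy as np

    def damped_newton(F, J, x, tol=1e-10, max_iter=50):
        # Newton-type iteration x <- x + t*dx for F(x) = 0.
        for _ in range(max_iter):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(J(x), -r)  # Newton direction
            t = 1.0
            # Backtrack until the residual norm decreases sufficiently.
            while t > 1e-8 and np.linalg.norm(F(x + t * dx)) > (1 - 0.5 * t) * np.linalg.norm(r):
                t *= 0.5
            x = x + t * dx
        return x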

Global Convergence of ADMM in Nonconvex Nonsmooth Optimization

In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, $\phi(x_1,\ldots,x_p,y)$, subject to linear equality constraints that couple $x_1,\ldots,x_p,y$, where $p\ge 1$ is an integer. Our ADMM sequentially updates the primal variables in the order $x_1,\ldots,x_p,y$, followed by updating the dual …
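
Writing the coupling constraint as $A_1 x_1 + \cdots + A_p x_p + B y = b$ (matrix names assumed here for illustration), with augmented Lagrangian $L_\beta$, penalty parameter $\beta > 0$, and dual variable $w$, the update order described above reads
\[
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\, L_\beta\big(x_1^{k+1},\ldots,x_{i-1}^{k+1}, x_i, x_{i+1}^{k},\ldots,x_p^{k}, y^{k}, w^{k}\big), \quad i = 1,\ldots,p,\\
y^{k+1} &= \arg\min_{y}\, L_\beta\big(x_1^{k+1},\ldots,x_p^{k+1}, y, w^{k}\big),\\
w^{k+1} &= w^{k} + \beta\big(A_1 x_1^{k+1} + \cdots + A_p x_p^{k+1} + B y^{k+1} - b\big).
\end{aligned}
\]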

ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks

In this note, we extend the existing algorithms Extra and subgradient-push to a new algorithm ExtraPush for convex consensus optimization over a directed network. When the network is stationary, we propose a simplified algorithm called Normalized ExtraPush. These algorithms use a fixed step size, as in Extra, and accept column-stochastic mixing matrices, as in …
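
For orientation, the push-sum mechanism that subgradient-push (and hence ExtraPush) builds on can be stated roughly as follows: with a column-stochastic mixing matrix $A$, each agent propagates a weight sequence alongside its iterate and divides the two,
\[
x^{t+1} = A\,x^{t} - \alpha\,g^{t}, \qquad w^{t+1} = A\,w^{t}, \qquad z^{t+1} = x^{t+1} \oslash w^{t+1},
\]
where $g^{t}$ stacks the local (sub)gradients evaluated at $z^{t}$, $\alpha$ is the step size, and $\oslash$ denotes entrywise division; dividing by $w^{t+1}$ corrects the imbalance that a merely column-stochastic $A$ introduces.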

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first …
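
As a reference point, here is the classical CG iteration for the strongly convex case $\min_x \frac{1}{2} x^\top Q x - b^\top x$, which the generalized methods extend; this is a standard textbook sketch, not the GCG method itself.

    import numpy as np

    def cg(Q, b, tol=1e-10, max_iter=1000):
        # Classical conjugate gradient for Q x = b with Q symmetric
        # positive definite (equivalently, a strongly convex QP).
        x = np.zeros_like(b, dtype=float)
        r = b - Q @ x            # residual = negative gradient
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            if np.sqrt(rs) < tol:
                break
            Qp = Q @ p
            alpha = rs / (p @ Qp)
            x = x + alpha * p
            r = r - alpha * Qp
            rs_new = r @ r
            p = r + (rs_new / rs) * p   # next conjugate direction
            rs = rs_new
        return x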

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers, such as $\ell_1$, suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We …
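
One natural instance of a partial regularizer leaves the $r$ largest entries unpenalized. Writing $|x|_{[i]}$ for the $i$-th largest entry of $|x|$ (notation assumed here for illustration), a model of this flavor is
\[
\min_{x}\ \tfrac{1}{2}\,\|Ax - b\|_2^2 \;+\; \lambda \sum_{i=r+1}^{n} |x|_{[i]},
\]
so the leading entries incur no bias while the tail of the vector is still driven to zero.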

Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

We present an improved evaluation complexity bound for nonlinear least squares problems using higher-order regularization methods.

Citation: Technical Report NA 15-17, Numerical Analysis Group, University of Oxford, 2015

Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization

In a recent paper we introduced a trust-region method with variable norms for unconstrained minimization and proved standard asymptotic convergence results. Here we show that, with a simple modification of the sufficient descent condition and with the trust-region approach replaced by a suitable cubic regularization, the complexity of this method for finding …
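
For reference, the cubic-regularized model that replaces the trust-region subproblem: at the iterate $x_k$, one approximately minimizes over steps $s$
\[
m_k(s) \;=\; f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top B_k\, s + \tfrac{\sigma_k}{3}\,\|s\|^3,
\]
where $B_k$ is a Hessian approximation and the regularization parameter $\sigma_k > 0$ takes over the role of the trust-region radius; the notation here is the standard one and may differ from the paper's.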

Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization

Sequential optimality conditions for constrained optimization are necessarily satisfied by local minimizers, independently of the fulfillment of constraint qualifications. These conditions support the employment of different stopping criteria for practical optimization algorithms. On the other hand, when an appropriate strict constraint qualification associated with some sequential optimality condition holds at a point that satisfies the …
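
A prototypical example is the Approximate-KKT (AKKT) condition: for $\min f(x)$ subject to $g(x) \le 0$ and $h(x) = 0$, a feasible point $x^*$ satisfies AKKT when there exist sequences $x^k \to x^*$, $\lambda^k \ge 0$, and $\mu^k$ such that, in one common formulation,
\[
\nabla f(x^k) + \sum_i \lambda_i^k\,\nabla g_i(x^k) + \sum_j \mu_j^k\,\nabla h_j(x^k) \;\to\; 0,
\qquad
\min\{-g_i(x^k),\, \lambda_i^k\} \;\to\; 0 \ \text{ for all } i.
\]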

$L_p$-norm regularization algorithms for optimization over permutation matrices

Optimization problems over permutation matrices appear widely in facility layout, chip design, scheduling, pattern recognition, computer vision, graph matching, etc. Since such problems are NP-hard due to the combinatorial nature of permutation matrices, we relax the variable to the more tractable set of doubly stochastic matrices and add an $L_p$-norm ($0 < p < 1$) regularization …
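
The relaxed model then takes the following form, a sketch in which $D_n$ denotes the set of $n \times n$ doubly stochastic matrices and $\sigma > 0$ is an assumed regularization weight:
\[
\min_{X \in D_n}\ f(X) \;+\; \sigma \sum_{i,j} X_{ij}^{\,p}, \qquad 0 < p < 1,
\]
where the concave $L_p$ penalty pushes the entries of $X$ toward $\{0, 1\}$, i.e., back toward a permutation matrix.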