ExtraPush for Convex Smooth Decentralized Optimization over Directed Networks

In this note, we extend the existing algorithms Extra and subgradient-push to a new algorithm, ExtraPush, for convex consensus optimization over a directed network. When the network is stationary, we propose a simplified algorithm called Normalized ExtraPush. These algorithms use a fixed step size as in Extra and accept column-stochastic mixing matrices as in … Read more
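As a rough illustration of the push-sum mechanism that column-stochastic mixing enables (a minimal sketch of the underlying consensus step, not the ExtraPush recursion itself; the 3-node network and matrix A below are invented for the example):

```python
import numpy as np

# Push-sum consensus over a directed network (illustrative sketch only).
# A is column-stochastic: each column sums to 1, so a node only needs to
# know how it splits its own value among its out-neighbors.
A = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.7]])   # hypothetical 3-node directed graph

x = np.array([1.0, 5.0, 9.0])    # local values; their average is 5
w = np.ones(3)                   # push-sum weights

for _ in range(100):
    x = A @ x                    # push values along out-edges
    w = A @ w                    # push weights identically
print(x / w)                     # de-biased ratios -> [5. 5. 5.]
```

The ratio x/w removes the bias that a merely column-stochastic (rather than doubly stochastic) matrix would otherwise introduce; ExtraPush couples a correction of this kind with Extra-style gradient steps.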

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first … Read more
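For orientation, the problem class in question can be written as follows (our symbols, since the excerpt does not fix notation):

\[
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x + \lambda \|x\|_1,
\qquad Q \succeq 0,\; \lambda > 0,
\]

where $Q$ is only assumed positive semidefinite, so the quadratic part may fail to be strongly convex.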

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers, such as $\ell_1$, suffer from a bias incurred by the leading (in magnitude) entries of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We … Read more
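One natural instance of such a partial regularizer, under our reading of the excerpt (the paper's exact models may differ): penalize only the tail of the vector, leaving the $r$ largest-magnitude entries free.

\[
\min_{x \in \mathbb{R}^n} \; \sum_{i = r+1}^{n} |x|_{[i]} \quad \text{subject to} \quad Ax = b,
\]

where $|x|_{[i]}$ denotes the $i$-th largest entry of $x$ in magnitude, so the $r$ leading entries incur no penalty and hence no shrinkage bias.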

Improved worst-case evaluation complexity for potentially rank-deficient nonlinear least-Euclidean-norm problems using higher-order regularized models

We present an improved evaluation complexity bound for nonlinear least-squares problems using higher-order regularization methods. Citation: Technical Report NA 15-17, Numerical Analysis Group, University of Oxford, 2015.
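For context, regularization methods of this type minimize, at each iteration, a model of the residual norm built from a $p$-th order Taylor expansion plus a higher-order penalty; schematically (a standard template, not necessarily the report's exact model):

\[
s_k \in \arg\min_{s} \; \big\| t_p(x_k, s) \big\|_2 + \sigma_k \|s\|_2^{\,p+1},
\]

where $t_p(x_k, s)$ is the $p$-th order Taylor approximation of the residual $r(x_k + s)$ and $\sigma_k > 0$ is the regularization weight; $p = 1$ recovers a regularized Gauss–Newton model.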

Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization

In a recent paper we introduced a trust-region method with variable norms for unconstrained minimization and proved standard asymptotic convergence results. Here we show that, with a simple modification of the sufficient descent condition and with the trust-region step replaced by a suitable cubic regularization, the complexity of this method for finding … Read more
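For orientation, the generic cubic-regularization subproblem that replaces the trust-region step has the standard form (the variable-norm ingredient of the paper is not reproduced here):

\[
\min_{s \in \mathbb{R}^n} \; f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^3,
\]

where $B_k$ approximates the Hessian and $\sigma_k > 0$ plays the role that the trust-region radius plays in the original method.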

Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization

Sequential optimality conditions for constrained optimization are necessarily satisfied by local minimizers, independently of the fulfillment of constraint qualifications. These conditions justify the use of different stopping criteria in practical optimization algorithms. On the other hand, when an appropriate strict constraint qualification associated with some sequential optimality condition holds at a point that satisfies the … Read more
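As a concrete example, the approximate-KKT (AKKT) condition is one such sequential condition: for $\min f(x)$ subject to $g(x) \le 0$, a point $x^*$ satisfies AKKT if there exist sequences $x^k \to x^*$ and multipliers $\lambda^k \ge 0$ with (one common formulation):

\[
\Big\| \nabla f(x^k) + \sum_{i} \lambda_i^k \nabla g_i(x^k) \Big\| \to 0
\qquad \text{and} \qquad
\min\{-g_i(x^k),\, \lambda_i^k\} \to 0 \ \text{for all } i,
\]

which holds at every local minimizer even when no constraint qualification does.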

$L_p$-norm regularization algorithms for optimization over permutation matrices

Optimization problems over permutation matrices appear widely in facility layout, chip design, scheduling, pattern recognition, computer vision, graph matching, etc. Since this problem is NP-hard due to the combinatorial nature of permutation matrices, we relax the variable to the more tractable set of doubly stochastic matrices and add an $L_p$-norm ($0 < p < 1$) regularization ... Read more
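A hedged sketch of the resulting relaxed model (our notation; the paper's exact formulation may differ): minimize the original objective over the doubly stochastic matrices, with an $L_p$ penalty that pushes entries toward $\{0, 1\}$.

\[
\min_{X \in \mathbb{R}^{n \times n}} \; f(X) + \lambda \sum_{i,j} X_{ij}^{\,p}
\quad \text{s.t.} \quad X \mathbf{1} = \mathbf{1},\ X^{\top} \mathbf{1} = \mathbf{1},\ X \ge 0,
\]

with $0 < p < 1$ and $\lambda > 0$; since $t \mapsto t^p$ is concave and steep near zero, the penalty favors sparse, and ultimately permutation, matrices.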

First and second order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic (automatic) differentiation, one can then compute certain first and second order derivative vectors and matrices that represent a … Read more
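For readers unfamiliar with it, the abs-normal form collects all absolute-value arguments into switching variables $z$ (standard shape, up to notation; min and max are first rewritten via $|\cdot|$):

\[
z = F(x, |z|), \qquad y = \varphi(x, |z|),
\]

with $F$ and $\varphi$ smooth and $\partial F / \partial |z|$ strictly lower triangular, so that $z_1, \dots, z_s$ can be evaluated one after another. For instance, $\max(u, v) = \tfrac{1}{2}\,(u + v + |u - v|)$.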

A note on robust descent in differentiable optimization

In this note, we recall two solutions to alleviate the catastrophic cancellations that occur when comparing function values in descent algorithms. The automatic finite-differencing approach (Dussault and Hamelin) was shown to be useful for trust-region and line-search variants. The main original contribution is to successfully adapt the line-search strategy (Hager and Zhang) for … Read more
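A minimal Python demonstration of the cancellation problem in question (toy objective and numbers of our own choosing): when the true decrease falls below the rounding granularity of the function values, the naive comparison $f(x_{\text{new}}) < f(x_{\text{old}})$ sees nothing, while evaluating the difference in a cancellation-free form recovers it.

```python
BIG = 1.0e8                       # large constant offset in the objective

def f(x):
    """Toy objective with minimizer at x = 0; f carries a big offset."""
    return BIG + x**2

x_old, x_new = 1e-5, 0.5e-5       # a genuine descent step toward 0

naive = f(x_new) - f(x_old)       # both values round to exactly 1.0e8
exact = (x_new - x_old) * (x_new + x_old)   # x_new**2 - x_old**2, factored

print(naive)   # 0.0: the decrease is lost to rounding
print(exact)   # -7.5e-11: the true decrease
```

Both remedies recalled in the note address this effect, each avoiding the raw subtraction of two nearly equal function values.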

Relationships between constrained and unconstrained multi-objective optimization and application in location theory

This article deals with constrained multi-objective optimization problems. Its main purpose is to investigate relationships between constrained and unconstrained multi-objective optimization problems. Under suitable assumptions (e.g., generalized convexity assumptions) we derive a characterization of the set of (strictly, weakly) efficient solutions of a constrained multi-objective optimization problem using characterizations of the sets … Read more
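For context, the standard efficiency notions the abstract refers to, for $\min_{x \in S} f(x)$ with $f : \mathbb{R}^n \to \mathbb{R}^m$ (standard definitions):

\[
\begin{aligned}
x^* \ \text{is efficient} \quad &\Longleftrightarrow \quad \nexists\, x \in S:\ f(x) \le f(x^*),\ f(x) \ne f(x^*),\\
x^* \ \text{is weakly efficient} \quad &\Longleftrightarrow \quad \nexists\, x \in S:\ f_i(x) < f_i(x^*)\ \text{for all } i,
\end{aligned}
\]

with the inequalities read componentwise; strict efficiency strengthens the first notion by forbidding any $x \ne x^*$ with $f(x) \le f(x^*)$.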