Some Strongly Polynomially Solvable Convex Quadratic Programs with Bounded Variables

This paper begins with a class of convex quadratic programs (QPs) with bounded variables solvable by the parametric principal pivoting algorithm with $O(n^3)$ strongly polynomial complexity, where $n$ is the number of variables in the problem. An extension of the Hessian class is also discussed. Our research is motivated by a recent reference [7] wherein the …
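
For reference only (the abstract does not reproduce the specific Hessian class studied in the paper), a bounded-variable convex QP of the general kind mentioned above can be written as

$$\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2} x^\top Q x + c^\top x \quad \text{subject to} \quad \ell \le x \le u,$$

with $Q$ symmetric positive semidefinite. A strongly polynomial $O(n^3)$ bound means the number of arithmetic operations depends only on $n$, not on the magnitudes of the data defining $Q$, $c$, $\ell$, and $u$.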

Convergence Analysis of Block Majorize-Minimize Subspace Approaches

Majorization-Minimization (MM) is a class of efficient and effective optimization algorithms that benefit from solid theoretical foundations. MM methods have proven well suited to tackling challenging optimization problems in signal processing, image processing, inverse problems, and machine learning. When processing large amounts of data or variables, as may happen in 3D image processing, …
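
For orientation (this is the generic MM scheme, not the block subspace variant analyzed in the paper), MM minimizes $f$ by building at each iterate $x^k$ a surrogate $Q(\cdot\,; x^k)$ satisfying $Q(x; x^k) \ge f(x)$ for all $x$ and $Q(x^k; x^k) = f(x^k)$, and updating

$$x^{k+1} \in \operatorname*{arg\,min}_{x}\ Q(x; x^k),$$

which yields the monotone decrease $f(x^{k+1}) \le Q(x^{k+1}; x^k) \le Q(x^k; x^k) = f(x^k)$.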

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we … Read more
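
For comparison, here is a minimal sketch of one step of the baseline mentioned above, proximal SGD with (heavy-ball) momentum for an $\ell_1$ regularizer; the function names, update order, and choice of regularizer are illustrative assumptions, and this is not the authors' RMDA.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sgd_momentum_step(w, v, stoch_grad, lr=0.1, beta=0.9, lam=1e-3):
    """One proximal SGD step with heavy-ball momentum for loss(w) + lam * ||w||_1."""
    v = beta * v + stoch_grad         # accumulate momentum from the stochastic gradient
    w = w - lr * v                    # gradient step on the smooth part of the objective
    w = soft_threshold(w, lr * lam)   # proximal step enforcing the sparsity-promoting regularizer
    return w, v
```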

Analysis of non-sparse recovery for non-convex relaxed $\ell_q$ minimization

This paper studies the recovery of signals that are sparse or nearly sparse with respect to a tight frame $D$ from underdetermined linear systems. In the paper, we propose a non-convex relaxed $\ell_q$ $(0 < q < 1)$ minimization …
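
One common way to state the $\ell_q$-analysis model sketched above is (the paper's exact formulation, noise model, and constants may differ):

$$\min_{x \in \mathbb{R}^n} \ \|D^\top x\|_q^q \quad \text{subject to} \quad \|Ax - b\|_2 \le \varepsilon, \qquad 0 < q < 1,$$

where $A$ is the underdetermined measurement matrix, $b$ the observed data, and the columns of $D$ form the tight frame.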

Hierarchically constrained blackbox optimization

In blackbox optimization, evaluation of the objective and constraint functions is time-consuming. In some situations, constraint values may be evaluated independently or sequentially. The present work proposes and compares two strategies to define a hierarchical ordering of the constraints and to interrupt the evaluation process at a trial point when it is detected that …
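
A minimal sketch of the interruption idea, under the assumption that constraints of the form $c_j(x) \le 0$ are evaluated one at a time in a chosen hierarchical order; the ordering strategies and the actual stopping test compared in the paper are not reproduced here.

```python
def evaluate_hierarchically(point, constraints, tol=0.0):
    """Evaluate blackbox constraints c_j(point) <= 0 in hierarchical order,
    interrupting as soon as one is violated to save the remaining evaluations.

    constraints : callables already sorted according to the chosen hierarchy
    Returns (feasible, values) with the constraint values computed so far.
    """
    values = []
    for c in constraints:
        v = c(point)          # time-consuming blackbox evaluation
        values.append(v)
        if v > tol:           # violation detected: interrupt the evaluation process
            return False, values
    return True, values
```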

Nonlinear conjugate gradient for smooth convex functions

The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies weak complexity bounds at best when applied to smooth convex functions. In contrast, Nesterov’s accelerated gradient (AG) method is optimal up to constant factors for this class. However, when specialized to quadratic functions, conjugate gradient is optimal …
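
For reference, a generic NCG iteration for a smooth function $f$ has the form

$$d_k = -\nabla f(x_k) + \beta_k d_{k-1}, \qquad x_{k+1} = x_k + \alpha_k d_k,$$

with $\alpha_k$ obtained by a line search and $\beta_k$ given by a rule such as Fletcher–Reeves, $\beta_k = \|\nabla f(x_k)\|^2 / \|\nabla f(x_{k-1})\|^2$. On a strictly convex quadratic with exact line searches this reduces to linear CG, which is the optimality referred to above.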

Algebraic-based primal interior-point algorithms for stochastic infinity norm optimization

We study the two-stage stochastic infinity norm optimization problem with recourse. First, we analyze the algebraic structure of the infinity norm cone and use its algebra to compute the derivatives of the barrier recourse functions. Then, we show that the barrier recourse functions and the composite barrier functions for this optimization problem are …
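
To fix notation, the infinity norm cone can be taken to be the epigraph of the $\ell_\infty$ norm,

$$\mathcal{K}_\infty = \left\{ (t, x) \in \mathbb{R} \times \mathbb{R}^n : t \ge \|x\|_\infty \right\},$$

so that a bound $\|x\|_\infty \le t$ becomes the conic membership $(t, x) \in \mathcal{K}_\infty$, the cone whose algebraic structure and barrier functions are studied in the paper.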

Bolstering Stochastic Gradient Descent with Model Building

The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when the algorithms are fine-tuned for the application at hand. Although this tuning process can require large computational costs, recent work has shown that these costs can be …

Model-Based Derivative-Free Methods for Convex-Constrained Optimization

We present a model-based derivative-free method for optimization subject to general convex constraints, which we assume are unrelaxable and accessed only through a projection operator that is cheap to evaluate. We prove global convergence and a worst-case complexity of $O(\epsilon^{-2})$ iterations and objective evaluations for nonconvex functions, matching results for the unconstrained case. We introduce …
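
As a concrete instance of a projection operator that is cheap to evaluate (a box constraint; the convex sets treated in the paper may be more general), a short sketch:

```python
import numpy as np

def project_box(y, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper},
    computable componentwise in O(n) time."""
    return np.minimum(np.maximum(y, lower), upper)
```

A method of this type would call such an operator to map trial points back into the feasible region rather than handling the constraints algebraically.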

Simple odd beta-cycle inequalities for binary polynomial optimization

We consider the multilinear polytope, which arises naturally in binary polynomial optimization. Del Pia and Di Gregorio introduced the class of odd beta-cycle inequalities valid for this polytope, showed that these generally have Chvátal rank 2 with respect to the standard relaxation, and that, together with flower inequalities, they yield a perfect formulation for cycle …
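
For orientation, under one standard presentation with a hypergraph $G = (V, E)$, the multilinear polytope is

$$\mathrm{ML}(G) = \operatorname{conv}\Big\{ (x, z) \in \{0,1\}^{V} \times \{0,1\}^{E} \ :\ z_e = \prod_{v \in e} x_v \ \ \forall e \in E \Big\},$$

and the standard relaxation mentioned above replaces each product equation by the linear constraints $z_e \le x_v$ for $v \in e$, $z_e \ge \sum_{v \in e} x_v - |e| + 1$, and $z_e \ge 0$.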