Analysis non-sparse recovery for non-convex relaxed $\ell_q$ minimization

This paper studies the reconstruction of signals that are sparse or nearly sparse with respect to a tight frame $D$ from underdetermined linear systems. We propose a non-convex relaxed $\ell_q$ $(0 < q < 1)$ minimization … Read more
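Since the abstract is truncated before the method is described, the following is only a generic sketch of what non-convex $\ell_q$ minimization can look like in practice: an iteratively reweighted least-squares (IRLS) heuristic for $\min \|x\|_q^q$ subject to $Ax = b$. The matrix $A$, the smoothing constant `eps`, and the iteration count are illustrative assumptions, not the paper's algorithm (which works in the analysis setting with the frame $D$).

```python
import numpy as np

def irls_lq(A, b, q=0.5, eps=1e-6, iters=50):
    """Illustrative IRLS sketch for min ||x||_q^q s.t. Ax = b.

    Not the paper's method; a generic non-convex l_q heuristic.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares start
    for _ in range(iters):
        # Weights w_i = (|x_i|^2 + eps)^(q/2 - 1) from the smoothed l_q penalty
        w = (x**2 + eps) ** (q / 2.0 - 1.0)
        W_inv = np.diag(1.0 / w)
        # Weighted minimum-norm solution: x = W^{-1} A^T (A W^{-1} A^T)^{-1} b
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)
    return x
```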

Hierarchically constrained blackbox optimization

In blackbox optimization, evaluation of the objective and constraint functions is time-consuming. In some situations, constraint values may be evaluated independently or sequentially. The present work proposes and compares two strategies to define a hierarchical ordering of the constraints and to interrupt the evaluation process at a trial point when it is detected that … Read more
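The abstract stops before the two strategies are described; the toy loop below only illustrates the general mechanism of ordering constraints and interrupting evaluation early. The fixed priority ordering and the `c(x) <= 0` feasibility convention are assumptions made for the sketch.

```python
def evaluate_hierarchically(x, objective, constraints):
    """Evaluate expensive constraints one at a time in a fixed order and
    interrupt as soon as one is violated (c(x) <= 0 means feasible).

    The ordering rule is a placeholder; the paper compares two strategies
    for choosing it, which the truncated abstract does not specify.
    """
    for i, c in enumerate(constraints):
        if c(x) > 0:
            return None, i          # interrupted: constraint i is violated
    return objective(x), None       # all feasible: evaluate the objective
```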

Nonlinear conjugate gradient for smooth convex functions

The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies weak complexity bounds at best when applied to smooth convex functions. In contrast, Nesterov’s accelerated gradient (AG) method is optimal up to constant factors for this class. However, when specialized to quadratic functions, conjugate gradient is optimal … Read more
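As a point of reference for the NCG scheme discussed above, here is a minimal Polak–Ribière+ implementation with a backtracking line search; the line-search rule and restart policy are generic choices, not necessarily those analyzed in the paper.

```python
import numpy as np

def ncg_pr(f, grad, x0, iters=200, tol=1e-8):
    """Generic nonlinear conjugate gradient (Polak-Ribiere+) sketch."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the descent direction d
        t, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # Polak-Ribiere beta, clipped at zero (acts as an automatic restart)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d, g = -g_new + beta * d, g_new
        if g.dot(d) >= 0:  # safeguard: fall back to steepest descent
            d = -g
    return x
```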

Algebraic-based primal interior-point algorithms for stochastic infinity norm optimization

We study the two-stage stochastic infinity norm optimization problem with recourse. First, we analyze the algebraic structure of the infinity norm cone and use its algebra to compute the derivatives of the barrier recourse functions. Then, we show that the barrier recourse functions and the composite barrier functions for this optimization problem are … Read more
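For context, the infinity norm cone referred to above is usually defined as the epigraph of the $\ell_\infty$ norm, $\mathcal{K}_\infty = \{(t, x) \in \mathbb{R} \times \mathbb{R}^{n} : t \ge \|x\|_\infty\}$ with $\|x\|_\infty = \max_i |x_i|$; this is the standard textbook definition, and the paper's exact normalization may differ.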

Bolstering Stochastic Gradient Descent with Model Building

The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when the algorithms are fine-tuned for the application at hand. Although this tuning process can require large computational costs, recent work has shown that these costs can be … Read more
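For orientation, a bare-bones SGD loop is shown below; the model-building mechanism the title refers to lies in the truncated part of the abstract and is not reproduced here. The constant learning rate is exactly the kind of hyperparameter whose tuning the paper targets.

```python
import numpy as np

def sgd(grad_sample, x0, data, lr=0.1, epochs=5, seed=0):
    """Plain SGD baseline; the paper's model-building step is not shown.

    `grad_sample(x, d)` returns a stochastic gradient at x for one sample d.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x -= lr * grad_sample(x, data[i])  # constant step; tuning it is the costly part
    return x
```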

Model-Based Derivative-Free Methods for Convex-Constrained Optimization

We present a model-based derivative-free method for optimization subject to general convex constraints, which we assume are unrelaxable and accessed only through a projection operator that is cheap to evaluate. We prove global convergence and a worst-case complexity of $O(\epsilon^{-2})$ iterations and objective evaluations for nonconvex functions, matching results for the unconstrained case. We introduce … Read more
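To make the projection-oracle assumption concrete, the sketch below takes one model-based trial step and projects it back onto the feasible set; the box projection and the steepest-descent trial step are stand-ins, not the paper's trust-region subproblem.

```python
import numpy as np

def project_box(x, lo, hi):
    """Cheap projection onto a box, standing in for a general convex set."""
    return np.clip(x, lo, hi)

def projected_trial_step(x, model_grad, radius, proj):
    """Move along the model's steepest-descent direction, then project.

    Illustrates how unrelaxable constraints can be enforced purely through
    a projection oracle; a sketch, not the paper's subproblem solver.
    """
    step = -radius * model_grad / max(np.linalg.norm(model_grad), 1e-12)
    return proj(x + step)

# Hypothetical usage with the box [0, 1]^n as the convex set:
# x_new = projected_trial_step(x, g_model, 0.1, lambda y: project_box(y, 0.0, 1.0))
```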

Simple odd beta-cycle inequalities for binary polynomial optimization

We consider the multilinear polytope which arises naturally in binary polynomial optimization. Del Pia and Di Gregorio introduced the class of odd beta-cycle inequalities valid for this polytope, showed that these generally have Chvátal rank 2 with respect to the standard relaxation and that, together with flower inequalities, they yield a perfect formulation for cycle … Read more
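For readers unfamiliar with the object, the standard relaxation mentioned above linearizes each product term of the binary polynomial: for a hyperedge $e$ with product variable $z_e = \prod_{v \in e} x_v$, it uses the textbook inequalities $z_e \le x_v$ for all $v \in e$, $z_e \ge \sum_{v \in e} x_v - (|e| - 1)$, and $0 \le z_e$, $x_v \le 1$. The odd beta-cycle inequalities strengthen this relaxation; their exact form is given in the paper.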

On Componental Operators in Hilbert Space

We consider a Hilbert space that is a product of a finite number of Hilbert spaces and operators that are represented by “componental operators” acting on the Hilbert spaces that form the product space. We attribute operatorial properties to the componental operators rather than to the full operators. The operatorial properties that we discuss include … Read more
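As a simple instance of the setting (our reading of the truncated abstract, not a definition taken from the paper): on a product space $\mathcal{H} = \mathcal{H}_1 \times \cdots \times \mathcal{H}_m$, an operator built from componental operators $T_i : \mathcal{H}_i \to \mathcal{H}_i$ acts as $T(x_1, \dots, x_m) = (T_1 x_1, \dots, T_m x_m)$, and properties such as nonexpansiveness can then be imposed on each $T_i$ separately rather than on $T$ as a whole.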

Mirror-prox sliding methods for solving a class of monotone variational inequalities

In this paper we propose new algorithms for solving a class of structured monotone variational inequality (VI) problems over compact feasible sets. By identifying the gradient components in the operator of the VI, we show that it is possible to skip computations of the gradients from time to time, while still maintaining the optimal iteration … Read more
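For reference, the classical mirror-prox iteration that sliding schemes build on performs two prox steps per iteration: with prox-mapping $P_x(\xi) = \arg\min_{u \in X} \{\langle \xi, u \rangle + V(x, u)\}$ for a Bregman divergence $V$ and stepsize $\gamma > 0$, it computes $y_k = P_{x_k}(\gamma F(x_k))$ and then $x_{k+1} = P_{x_k}(\gamma F(y_k))$. This is the standard form; the sliding variant, which skips some of the gradient evaluations inside $F$, is the paper's contribution and is not shown here.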

Quadratic Regularization Methods with Finite-Difference Gradient Approximations

This paper presents two quadratic regularization methods with finite-difference gradient approximations for smooth unconstrained optimization problems. One method is based on forward finite-difference gradients, while the other is based on central finite-difference gradients. In both methods, the accuracy of the gradient approximations and the regularization parameter in the quadratic models are jointly adjusted using a … Read more
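The two gradient approximations the paper builds on are standard; a minimal sketch is below, with illustrative fixed step sizes (the paper instead adjusts the approximation accuracy jointly with the regularization parameter, which this sketch does not do).

```python
import numpy as np

def forward_diff_grad(f, x, h=1e-8):
    """Forward finite-difference gradient: O(h) error, n extra evaluations."""
    fx, g = f(x), np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def central_diff_grad(f, x, h=1e-5):
    """Central finite-difference gradient: O(h^2) error, 2n evaluations."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g
```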