Group sparsity via linear-time projection

We present an efficient spectral projected-gradient algorithm for optimization subject to a group one-norm constraint. Our approach is based on a novel linear-time algorithm for Euclidean projection onto the one- and group one-norm constraints. Numerical experiments on large data sets suggest that the proposed method is substantially more efficient and scalable than existing methods. …
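
The projection mentioned here has a well-known baseline: the classical sort-and-threshold projection onto the $\ell_1$-norm ball runs in $O(n \log n)$, and the paper's contribution is a linear-time (selection-based) variant together with its group extension. A minimal sketch of the sorting version follows; the names are illustrative and this is not the authors' implementation.

import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}, for tau > 0.

    Sort-and-threshold version, O(n log n); replacing the sort by a
    selection-based pivot search gives an expected linear-time variant.
    """
    if np.abs(v).sum() <= tau:
        return v.copy()                            # already inside the ball
    u = np.sort(np.abs(v))[::-1]                   # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Example: project a random vector onto the unit l1 ball.
x = project_l1_ball(np.random.default_rng(0).standard_normal(5), 1.0)
assert np.abs(x).sum() <= 1.0 + 1e-12

In the group case the same thresholding can be applied to the vector of group norms, with each group rescaled accordingly.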

Iteration-complexity of first-order penalty methods

This paper considers a special but broad class of convex programming (CP) problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. We study two first-order penalty methods for solving the above class of problems, namely the quadratic penalty method and …
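
As a rough illustration of the quadratic-penalty idea for this feasible region (a simple set $X$ intersected with $\{x : Ax - b \in K\}$), one can penalize the squared distance of $Ax - b$ to the cone and approximately minimize the penalized subproblems over $X$ by projected gradient. The sketch below is not the paper's scheme or its complexity analysis; the choices $K = \mathbb{R}^m_+$, a box $X$, and all names are illustrative assumptions.

import numpy as np

def quadratic_penalty(grad_f, A, b, proj_X, x0, rho0=1.0,
                      outer_iters=8, inner_iters=300, step0=0.05):
    """Quadratic penalty for  min f(x)  s.t.  x in X,  Ax - b in K,
    with K taken as the non-negative orthant (an illustrative choice).

    Each outer iteration approximately minimizes
        f(x) + (rho/2) * dist(Ax - b, K)^2   over X
    by projected gradient, then increases rho.
    """
    x, rho = x0.copy(), rho0
    for _ in range(outer_iters):
        step = step0 / rho                         # stiffer penalty, smaller steps
        for _ in range(inner_iters):
            viol = np.minimum(A @ x - b, 0.0)      # part of Ax - b outside R^m_+
            g = grad_f(x) + rho * (A.T @ viol)     # gradient of penalized objective
            x = proj_X(x - step * g)               # projected-gradient step on X
        rho *= 10.0
    return x

# Example (illustrative): min ||x - c||^2 over the box X = [0,1]^n with Ax >= b.
rng = np.random.default_rng(0)
m, n = 3, 5
A, c = rng.standard_normal((m, n)), rng.standard_normal(n)
b = A @ (0.5 * np.ones(n)) - 0.1                   # keeps the problem feasible
x = quadratic_penalty(lambda z: 2.0 * (z - c), A, b,
                      proj_X=lambda z: np.clip(z, 0.0, 1.0), x0=np.zeros(n))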

Nonlinear optimization for matroid intersection and extensions

We address optimization of nonlinear functions of the form $f(Wx)$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a nonlinear function, $W$ is a $d \times n$ matrix, and feasible $x$ are in some large finite set $\mathcal{F}$ of integer points in $\mathbb{R}^n$. Generally, such problems are intractable, so we obtain positive algorithmic results by looking at broad natural classes …

On fast integration to steady state and earlier times

The integration to steady state of many initial value ODEs and PDEs using the forward Euler method can alternatively be considered as gradient descent for an associated minimization problem. Greedy algorithms such as steepest descent for determining the step size are as slow to reach steady state as is forward Euler integration with the best …
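
The correspondence invoked here can be written out explicitly for a gradient flow (the general case replaces $\nabla\phi$ by the ODE's right-hand side): a forward Euler step on $\dot{u} = -\nabla\phi(u)$ with time step $h$,

$$u_{k+1} = u_k + h\,\dot{u}_k = u_k - h\,\nabla\phi(u_k),$$

is exactly a gradient-descent step of length $h$ on the associated minimization problem $\min_u \phi(u)$. A steady state, $\nabla\phi(u^*) = 0$, is therefore a stationary point of $\phi$, and step-size rules for descent methods carry over to time-step selection.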

A stochastic algorithm for function minimization

Focusing on the conditions that an optimization problem may comply with, so-called convergence conditions are proposed, and subsequently a stochastic optimization algorithm, named the DSZ algorithm, is presented to deal with both unconstrained and constrained optimization. Its principle is discussed within the theoretical model of the DSZ algorithm, from which we present a practical …

Numerical Experience with a Recursive Trust-Region Method for Multilevel Nonlinear Optimization

We consider an implementation of the recursive multilevel trust-region algorithm proposed by Gratton, Mouffe, Toint, and Weber (2008) for bound-constrained nonlinear problems, and provide numerical experience on multilevel test problems. A suitable choice of the algorithm’s parameters is identified on these problems, yielding a satisfactory compromise between reliability and efficiency. The resulting default algorithm is then …

A Second Derivative SQP Method with Imposed Descent

Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be …
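
For context on the subproblem structure (a generic local SQP step, not the imposed-descent strategy studied in the paper): for an equality-constrained problem $\min f(x)$ s.t. $c(x) = 0$, the QP subproblem $\min_p g^T p + \tfrac12 p^T H p$ s.t. $Jp + c = 0$ can be solved through its KKT system, and it is the possibly indefinite exact Hessian $H$ that makes the subproblem nonconvex in general. A minimal sketch, with illustrative names:

import numpy as np

def sqp_step(g, H, J, c):
    """One local SQP step for  min f(x)  s.t.  c(x) = 0.

    Solves the QP subproblem  min_p  g^T p + 0.5 p^T H p  s.t.  J p + c = 0
    through its KKT system; H is the Lagrangian Hessian (possibly indefinite
    when exact second derivatives are used) and J the constraint Jacobian.
    """
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T],
                  [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]                        # step p, multiplier estimate

# Example: one step for  min x1^2 + x2^2  s.t.  x1 + x2 = 1,  from x = (0, 0).
p, lam = sqp_step(g=np.zeros(2), H=2.0 * np.eye(2),
                  J=np.array([[1.0, 1.0]]), c=np.array([-1.0]))
# p == (0.5, 0.5): this single step lands on the constrained minimizer.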

An FPTAS for Minimizing the Product of Two Non-negative Linear Cost Functions

We consider a quadratic programming (QP) problem ($\Pi$) of the form $\min x^T C x$ subject to $Ax \ge b$, where $C \in \mathbb{R}^{n \times n}_+$, $\mathrm{rank}(C) = 1$, and $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$. We present an FPTAS for this problem by reformulating the QP ($\Pi$) as a parametrized LP and “rounding” the optimal solution. …
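
Because $C$ is entrywise non-negative with rank one, it can be factored as $C = u v^T$ with $u, v \ge 0$, so the objective is the product $(u^T x)(v^T x)$ of two non-negative linear cost functions. The sketch below only illustrates the parametrized-LP viewpoint through a crude grid search over a budget on one factor; the paper's FPTAS replaces this with a carefully chosen parametrization and a rounding step that gives the approximation guarantee. All names, and the use of SciPy's linprog, are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def product_min_grid(u, v, A, b, t_grid):
    """Illustrative search for  min (u^T x)(v^T x)  s.t.  Ax >= b.

    For each budget t, solve the LP  min v^T x  s.t.  Ax >= b, u^T x <= t,
    and keep the best product found over the grid.
    """
    best_val, best_x = np.inf, None
    for t in t_grid:
        res = linprog(c=v,
                      A_ub=np.vstack([-A, u]),            # Ax >= b  ->  -Ax <= -b
                      b_ub=np.concatenate([-b, [t]]),
                      bounds=[(None, None)] * A.shape[1])
        if res.success:
            val = (u @ res.x) * (v @ res.x)
            if val < best_val:
                best_val, best_x = val, res.x
    return best_val, best_x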

Proximal Point Methods for Functions Involving Lojasiewicz, Quasiconvex and Convex Properties on Hadamard Manifolds

This paper extends the full convergence of the proximal point method with Riemannian, Semi-Bregman and Bregman distances to solve minimization problems on Hadamard manifolds. For the unconstrained problem, under the assumptions that the optimal set is nonempty and the objective function is continuous and either quasiconvex or satisfies a generalized Lojasiewicz property, we prove the …
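
For orientation, in the Euclidean special case the method iterates $x_{k+1} \in \arg\min_y \{ f(y) + \tfrac{1}{2\lambda_k}\, d(y, x_k)^2 \}$; the paper replaces the squared distance by Riemannian, Semi-Bregman, or Bregman distances on a Hadamard manifold. A minimal one-dimensional Euclidean sketch, with an off-the-shelf bounded scalar solver standing in for the inner subproblem (all names illustrative):

import numpy as np
from scipy.optimize import minimize_scalar

def proximal_point(f, x0, lam=1.0, iters=50, width=10.0):
    """Euclidean proximal point iteration for a scalar function f.

    Each step minimizes the regularized model f(y) + (y - x)^2 / (2*lam)
    over a window around the current iterate.
    """
    x = x0
    for _ in range(iters):
        res = minimize_scalar(lambda y: f(y) + (y - x) ** 2 / (2.0 * lam),
                              bounds=(x - width, x + width), method="bounded")
        x = res.x
    return x

# Example: a continuous, quasiconvex (but nonconvex) objective minimized at y = 1.
print(proximal_point(lambda y: abs(y - 1.0) ** 0.5, x0=5.0))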

An LPCC Approach to Nonconvex Quadratic Programs

Filling a gap in nonconvex quadratic programming, this paper shows that the global resolution of a feasible quadratic program (QP), which is not known a priori to be bounded or unbounded below, can be accomplished in finite time by solving a linear program with linear complementarity constraints, i.e., an LPCC. Alternatively, this task can be …
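
For readers unfamiliar with the acronym, a standard way to write a linear program with linear complementarity constraints is

$$\min_{x,y}\; c^T x + d^T y \quad \text{s.t.}\quad Ax + By \ge f, \qquad 0 \le y \;\perp\; q + Nx + My \ge 0,$$

where $\perp$ denotes componentwise complementarity, i.e. $y_i\,(q + Nx + My)_i = 0$ for all $i$; the symbols here are generic and unrelated to the QP data above. In the QP setting the complementarity constraints typically encode KKT-type conditions of the quadratic program, so that solving the LPCC can detect unboundedness or recover a global solution; the precise construction is the subject of the paper.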