Incremental Accelerated Gradient Methods for SVM Classification: Study of the Constrained Approach

We investigate constrained first-order techniques for training Support Vector Machines (SVMs) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas from incremental gradient techniques, gradient acceleration, and successive simple calculations of Lagrange multipliers. Both the primal and dual formulations are studied and compared. Experiments show that the …
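
To illustrate the ingredients named here, the following is a minimal sketch of Nesterov-accelerated projected gradient on a simplified (bias-free, batch) SVM dual. The paper's method is incremental and handles the full constraint set; the function name, the simplifications, and the parameter choices below are illustrative assumptions, not the authors' algorithm.

import numpy as np

def svm_dual_apg(K, y, C=1.0, n_iter=200):
    # Bias-free SVM dual:  min_a 0.5 a'Qa - sum(a)  s.t. 0 <= a <= C,
    # with Q_ij = y_i y_j K_ij.  Batch accelerated projected gradient;
    # only the acceleration-plus-projection ingredient is shown.
    Q = (y[:, None] * y[None, :]) * K
    step = 1.0 / np.linalg.norm(Q, 2)           # 1/L, L = largest eigenvalue of Q
    a = np.zeros(len(y))
    b = a.copy()                                # extrapolated point
    for k in range(1, n_iter + 1):
        grad = Q @ b - 1.0                      # gradient of the dual objective
        a_new = np.clip(b - step * grad, 0.0, C)            # project onto the box
        b = a_new + (k - 1.0) / (k + 2.0) * (a_new - a)     # Nesterov extrapolation
        a = a_new
    return a

With a linear kernel, K = X @ X.T and y a NumPy vector of +/-1 labels; the indices with a[i] > 0 are the support vectors.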

An Inexact Sequential Quadratic Optimization Algorithm for Nonlinear Optimization

We propose a sequential quadratic optimization method for solving nonlinear optimization problems with equality and inequality constraints. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the …
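
As a rough illustration of the inexactness idea (not the paper's actual conditions), one SQP step for an equality-constrained problem might truncate the iterative solve of the primal-dual KKT system and report the residual that the acceptance conditions would have to control. The sketch below is hypothetical:

import numpy as np
from scipy.sparse.linalg import minres

def inexact_sqp_step(grad_f, jac_c, c_val, H, kkt_iters=5):
    # Primal-dual KKT system of the QP subproblem (equality constraints only):
    #   [ H   A' ] [ d   ]   [ -grad_f ]
    #   [ A   0  ] [ lam ] = [ -c      ]
    # Truncated MINRES models the inexact subproblem solve; H must be symmetric.
    n, m = H.shape[0], c_val.size
    A = jac_c
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f, c_val])
    z, _ = minres(K, rhs, maxiter=kkt_iters)    # stop early: inexact direction
    d, lam = z[:n], z[n:]
    residual = np.linalg.norm(K @ z - rhs)      # what acceptance conditions must control
    return d, lam, residual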

On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods

When solving the general smooth nonlinear optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy $\epsilon$ can be obtained by a second-order method using cubic regularization in at most $O(\epsilon^{-3/2})$ problem-function evaluations, the same order bound as in the unconstrained case. This result is obtained by first showing that the …
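
For context, the cubic-regularization step underlying such $O(\epsilon^{-3/2})$ bounds solves, at each iterate, a model of the form $\min_s\, g^\top s + \tfrac12 s^\top H s + \tfrac{\sigma}{3}\|s\|^3$. Below is a minimal unconstrained sketch via the secular equation (ignoring the hard case, and of course the constrained machinery the paper actually analyzes):

import numpy as np

def cubic_reg_step(g, H, sigma):
    # Exact minimizer of  g's + 0.5 s'Hs + (sigma/3)||s||^3  satisfies
    # (H + lam*I) s = -g  with  lam = sigma * ||s||  and  H + lam*I >= 0.
    lam_vals, V = np.linalg.eigh(H)
    g_t = V.T @ g
    def s_norm(lam):                        # ||s(lam)|| for lam above -lam_min
        return np.linalg.norm(g_t / (lam_vals + lam))
    lo = max(0.0, -lam_vals[0]) + 1e-12     # keep H + lam*I positive definite
    hi = lo + 1.0
    while sigma * s_norm(hi) > hi:          # bracket the root of sigma*||s|| = lam
        hi *= 2.0
    for _ in range(100):                    # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if sigma * s_norm(mid) > mid:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return V @ (-g_t / (lam_vals + lam))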

Trace-Penalty Minimization for Large-scale Eigenspace Computation

The Rayleigh-Ritz (RR) procedure, including orthogonalization, constitutes a major bottleneck in computing relatively high-dimensional eigenspaces of large sparse matrices. Although the operations involved in RR steps can be parallelized to a certain level, their parallel scalability, limited by some inherent sequential steps, is lower than that of dense matrix-matrix multiplications. The primary motivation of this …
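
A toy sketch of the trace-penalty idea, assuming a model of the form $f(X) = \tfrac12 \mathrm{tr}(X^\top A X) + \tfrac{\mu}{4}\|X^\top X - I\|_F^2$: the inner iteration needs only matrix-matrix products, with a single RR step deferred to the end. The step size, iteration count, and choice of $\mu$ below are placeholders.

import numpy as np

def trace_penalty_eigenspace(A, k, mu, step=1e-2, n_iter=500):
    # Gradient descent on f(X); the gradient A X + mu X (X'X - I) involves
    # only mat-mat products, so no per-iteration orthogonalization is needed.
    n = A.shape[0]
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, k)) / np.sqrt(n)
    for _ in range(n_iter):
        G = A @ X + mu * X @ (X.T @ X - np.eye(k))
        X -= step * G
    # One final orthogonalization + RR step recovers eigenpairs from the space.
    Q, _ = np.linalg.qr(X)
    w, V = np.linalg.eigh(Q.T @ (A @ Q))
    return w, Q @ V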

An interior point method with a primal-dual quadratic barrier penalty function for nonlinear semidefinite programming

In this paper, we consider an interior point method for nonlinear semidefinite programming. Yamashita, Yabe and Harada presented a primal-dual interior point method in which a nondifferentiable merit function was used. Using shifted barrier KKT conditions, we propose a differentiable primal-dual merit function within the framework of the line search strategy, and prove the …
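
To fix ideas, the barrier ingredient for a nonlinear SDP constraint $X(x) \succeq 0$ is the log-det term sketched below; the paper's merit function is primal-dual and made differentiable via shifted barrier KKT conditions, which this primal-only toy does not reproduce.

import numpy as np

def barrier_merit(f, X_of_x, x, mu):
    # Primal log-det barrier:  phi(x) = f(x) - mu * log det X(x),
    # finite only while X(x) stays inside the positive semidefinite cone.
    sign, logdet = np.linalg.slogdet(X_of_x(x))
    if sign <= 0:
        return np.inf            # infeasible: outside the cone
    return f(x) - mu * logdet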

Abstract Newtonian Frameworks and Their Applications

We unify and extend some Newtonian iterative frameworks developed earlier in the literature, which results in a collection of convenient tools for the local convergence analysis of various algorithms under various sets of assumptions, including strong metric regularity, semistability, or upper-Lipschitz stability, the latter allowing for nonisolated solutions. These abstract schemes are further applied for deriving …
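
The concrete iterations covered by such frameworks have the following basic shape (a plain Newton method for $G(u) = 0$; the abstract schemes also admit perturbed and inexact variants):

import numpy as np

def newton(G, J, u0, tol=1e-10, max_iter=50):
    # Newton iteration: solve J(u) du = -G(u), set u <- u + du.
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = G(u)
        if np.linalg.norm(g) < tol:
            break
        u = u + np.linalg.solve(J(u), -g)
    return u

# e.g. newton(lambda u: u**2 - 2.0, lambda u: np.diag(2.0 * u), np.array([1.0]))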

Attraction of Newton method to critical Lagrange multipliers: fully quadratic case

All previously known results concerned with the attraction of Newton-type iterations for optimality systems to critical Lagrange multipliers were a posteriori in nature: they showed that, in the case of convergence, the dual limit is in a sense unlikely to be noncritical. This paper presents the first a priori result in this direction, showing that critical …

Hardness and Approximation Results for $L_p$-Ball Constrained Homogeneous Polynomial Optimization Problems

In this paper, we establish hardness and approximation results for various $L_p$-ball constrained homogeneous polynomial optimization problems, where $p \in [2,\infty]$. Specifically, we prove that for any given $d \ge 3$ and $p \in [2,\infty]$, both the problem of optimizing a degree-$d$ homogeneous polynomial over the $L_p$-ball and the problem of optimizing a degree-$d$ multilinear …
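
As a heuristic illustration only (the paper proves hardness and approximation bounds rather than proposing this method), here is a power-iteration-style ascent for a degree-3 homogeneous polynomial given by a symmetric tensor T, rescaled back to the $L_p$ sphere after each step:

import numpy as np

def lp_ball_ascent(T, p=2.0, n_iter=100, seed=0):
    # Maximize f(x) = sum_ijk T[i,j,k] x_i x_j x_k over the L_p unit ball.
    # For symmetric T the gradient is 3 * T(., x, x); the factor 3 is
    # absorbed by the rescaling.  No approximation guarantee is implied.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x, ord=p)
    for _ in range(n_iter):
        g = np.einsum('ijk,j,k->i', T, x, x)
        x = g / np.linalg.norm(g, ord=p)        # back onto the L_p sphere
    return x, np.einsum('ijk,i,j,k->', T, x, x, x)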

A Reliable Affine Relaxation Method for Global Optimization

An automatic method for constructing linear relaxations of constrained global optimization problems is proposed. The construction is based on affine and interval arithmetic and uses operator overloading. The resulting linear programs have exactly the same number of variables and inequality constraints as the given problems. Each equality constraint is replaced by two inequalities. This …
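
A toy version of the affine-arithmetic mechanism via operator overloading (without the outward rounding a reliable implementation needs): each quantity is carried as $x_0 + \sum_i x_i \varepsilon_i$ with $\varepsilon_i \in [-1,1]$, and each nonlinear operation appends a fresh noise term bounding its linearization error.

class AffineForm:
    # x = x0 + sum_i xi * eps_i, eps_i in [-1, 1]; dev maps noise index -> coefficient.
    _next_noise = 0

    def __init__(self, center, deviations=None):
        self.center = float(center)
        self.dev = dict(deviations or {})

    @classmethod
    def from_interval(cls, lo, hi):
        i = cls._next_noise; cls._next_noise += 1
        return cls((lo + hi) / 2.0, {i: (hi - lo) / 2.0})

    def radius(self):
        return sum(abs(c) for c in self.dev.values())

    def to_interval(self):
        r = self.radius()
        return self.center - r, self.center + r

    def __add__(self, other):
        if not isinstance(other, AffineForm):
            return AffineForm(self.center + other, self.dev)
        dev = dict(self.dev)
        for i, c in other.dev.items():
            dev[i] = dev.get(i, 0.0) + c
        return AffineForm(self.center + other.center, dev)

    def __mul__(self, other):
        if not isinstance(other, AffineForm):
            return AffineForm(self.center * other,
                              {i: c * other for i, c in self.dev.items()})
        dev = {i: self.center * c for i, c in other.dev.items()}
        for i, c in self.dev.items():
            dev[i] = dev.get(i, 0.0) + other.center * c
        i = AffineForm._next_noise; AffineForm._next_noise += 1
        dev[i] = self.radius() * other.radius()   # bound on the nonlinear part
        return AffineForm(self.center * other.center, dev)

x = AffineForm.from_interval(1.0, 2.0)
print((x * x + x).to_interval())   # encloses the range of x^2 + x on [1, 2]

The linear part of each resulting form (center plus the coefficients on the original noise symbols) is precisely what an affine relaxation feeds into the linear program.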

Reducing the Number of Function Evaluations in Mesh Adaptive Direct Search Algorithms

The mesh adaptive direct search (MADS) class of algorithms is designed for nonsmooth optimization, where the objective function and constraints are typically computed by launching a time-consuming computer simulation. Each iteration of a MADS algorithm attempts to improve the current best-known solution by launching the simulation at a finite number of trial points. Common implementations …
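
One common evaluation-saving device in this setting is the opportunistic poll: trial points are evaluated sequentially and polling stops at the first improvement. A hypothetical minimal sketch (the paper's specific strategies go further):

import numpy as np

def opportunistic_poll(f, x, f_x, directions, mesh_size):
    # Evaluate poll points one at a time; return early on the first success,
    # saving the remaining expensive simulation calls.  Ordering directions
    # by past success is one way to increase the savings.
    n_evals = 0
    for d in directions:
        t = x + mesh_size * np.asarray(d, dtype=float)
        f_t = f(t)
        n_evals += 1
        if f_t < f_x:                 # success: stop polling early
            return t, f_t, n_evals
    return x, f_x, n_evals            # poll failure: refine the mesh next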