A Note on Sparse SOS and SDP Relaxations for Polynomial Optimization Problems over Symmetric Cones

This short note extends the sparse SOS (sum of squares) and SDP (semidefinite programming) relaxation proposed by Waki, Kim, Kojima and Muramatsu for normal POPs (polynomial optimization problems) to POPs over symmetric cones, and establishes its theoretical convergence based on the recent convergence result by Lasserre on the sparse SOS and SDP relaxation for normal …
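As a hedged reminder of the dense construction being extended (notation mine, not the note's), the SOS relaxation bounds an unconstrained POP from below by

\[
  \min_x f(x) \;\ge\; \max\bigl\{\, \lambda \in \mathbb{R} \;:\; f(x) - \lambda \ \text{is a sum of squares of polynomials} \,\bigr\},
\]

and writing \(f(x) - \lambda = m(x)^{T} Q\, m(x)\) for a vector of monomials \(m(x)\) and \(Q \succeq 0\) turns the right-hand side into an SDP. The sparse variant of Waki et al. restricts each monomial block to a small subset of variables determined by a chordal extension of the correlative sparsity pattern, which is what keeps the SDP tractable.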

On the behavior of the conjugate-gradient method on ill-conditioned problems

We study the behavior of the conjugate-gradient method for solving a system of linear equations whose matrix is symmetric positive definite, with a few large eigenvalues and the remaining eigenvalues small. We characterize the behavior of the residuals associated with the large eigenvalues throughout the iterations, and also characterize the …
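A minimal textbook CG implementation (my sketch, not the paper's analysis) makes the setting easy to reproduce: with a handful of large eigenvalues and the rest small, one typically observes the residual components associated with the large eigenvalues being damped within the first few iterations, after which the rate is governed by the small cluster.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook CG for a symmetric positive definite A; records residual norms."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    history = [np.linalg.norm(r)]
    for _ in range(max_iter or b.size):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact step along p
        x += alpha * p
        r_new = r - alpha * Ap
        history.append(np.linalg.norm(r_new))
        if history[-1] < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, history

# Spectrum with two clusters: 5 large eigenvalues, 195 small ones.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
eigs = np.concatenate([np.full(5, 1e4), rng.uniform(1e-2, 1e-1, 195)])
A = Q @ (eigs[:, None] * Q.T)          # Q diag(eigs) Q^T
x, hist = conjugate_gradient(A, rng.standard_normal(200))
```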

An interior Newton-like method for nonnegative least-squares problems with degenerate solution

An interior-point approach for medium- and large-scale nonnegative linear least-squares problems is proposed. Global convergence and local quadratic convergence are shown even when a degenerate solution is approached. Viable approaches to implementation are discussed and numerical results are provided.

Citation: Technical Report 1/2005, Dipartimento di Energetica ‘S. Stecco’, Università di Firenze, Italia
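The problem class is \(\min_x \|Ax - b\|_2\) subject to \(x \ge 0\). Below is a hedged sketch of a degenerate instance, solved with SciPy's classical active-set nnls routine rather than the interior Newton-like method proposed here:

```python
import numpy as np
from scipy.optimize import nnls

# Nonnegative least squares: minimize ||A x - b||_2 subject to x >= 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
# Degenerate instance: half of the optimal x is exactly zero, and since the
# residual vanishes there, strict complementarity fails at the solution.
x_true = np.concatenate([rng.uniform(0.5, 2.0, 5), np.zeros(5)])
b = A @ x_true
x_hat, rnorm = nnls(A, b)
print(np.round(x_hat, 6), rnorm)
```

Because the residual vanishes at the solution, the Lagrange multipliers on the active bounds are zero as well, which is exactly the kind of degeneracy that makes fast local convergence nontrivial.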

On Self-Regulated Swarms, Societal Memory, Speed and Dynamics

Wasps, bees, ants and termites all make effective use of their environment and resources by displaying collective “swarm” intelligence. Termite colonies, for instance, build nests with a complexity far beyond the comprehension of the individual termite, while ant colonies dynamically allocate labor to various vital tasks such as foraging or defense without any …

An efficient conjugate direction method with orthogonalization for large-scale quadratic optimization problems

A new conjugate direction method is proposed, which is based on an orthogonalization procedure and does not use line searches to construct the set of conjugate vectors. This procedure keeps the set of conjugate vectors from degenerating and removes the high sensitivity to computational errors typical of conjugate-direction methods, resulting in an efficient …
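The abstract does not spell out the procedure, but the underlying idea can be sketched with classical Gram–Schmidt orthogonalization in the A-inner product (a naive version that, unlike the proposed method, does lose conjugacy in floating point as the dimension grows):

```python
import numpy as np

def a_orthogonalize(V, A):
    """Classical Gram-Schmidt in the A-inner product: output columns satisfy
    p_i^T A p_j = 0 for i != j (A symmetric positive definite)."""
    P = []
    for v in V.T:
        p = v.copy()
        for q in P:
            p -= ((q @ A @ v) / (q @ A @ q)) * q
        P.append(p)
    return np.column_stack(P)

# With a full conjugate set, 0.5 x^T A x - b^T x is minimized by summing
# the exact steps along each direction; no line search is required.
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)          # well-conditioned SPD matrix
b = rng.standard_normal(30)
P = a_orthogonalize(rng.standard_normal((30, 30)), A)
x = sum(((p @ b) / (p @ A @ p)) * p for p in P.T)
assert np.allclose(A @ x, b)
```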

Societal Implicit Memory and its Speed in Tracking Extrema over Dynamic Environments using Self-Regulatory Swarms

In order to overcome difficult dynamic optimization and environmental extrema-tracking problems, we propose a Self-Regulated Swarm (SRS) algorithm that hybridizes the advantageous characteristics of Swarm Intelligence, such as the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploratory and exploitative nature of the search strategy), with …
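The pheromone mechanism is easiest to see in a toy stigmergy loop; the following is a generic sketch of evaporation-versus-deposition dynamics, not the SRS algorithm itself. Evaporation keeps the collective memory forgetful enough to track a changing landscape, while deposition concentrates the search around good regions:

```python
import numpy as np

rng = np.random.default_rng(3)
size, n_agents = 50, 30
field = np.zeros((size, size))                    # shared pheromone "memory"
pos = rng.integers(0, size, size=(n_agents, 2))   # agent positions on a grid

def quality(p):
    """Toy landscape with its extremum at (25, 25)."""
    return -((p[:, 0] - 25) ** 2 + (p[:, 1] - 25) ** 2)

for step in range(200):
    field *= 0.95                                 # evaporation keeps memory adaptive
    q = quality(pos)
    field[pos[:, 0], pos[:, 1]] += q - q.min() + 1.0   # deposit more where it is better
    cand = np.clip(pos + rng.integers(-1, 2, size=(n_agents, 2)), 0, size - 1)
    uphill = field[cand[:, 0], cand[:, 1]] >= field[pos[:, 0], pos[:, 1]]
    move = uphill | (rng.random(n_agents) < 0.3)  # occasional exploratory move
    pos[move] = cand[move]
```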

Trust-region interior-point method for large sparse l_1 optimization

In this paper, we propose an interior-point method for large sparse l_1 optimization. After a short introduction, the complete algorithm is introduced and some implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions. Thus nonconvex problems can be solved successfully. The results of computational experiments given in this …
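For the linear-residual special case \(\min_x \|Ax - b\|_1\), the standard bound-variable reformulation shows the structure such solvers exploit; the sketch below uses SciPy's HiGHS-backed linprog, not the trust-region interior-point algorithm of the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n = 40, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Variables z = [x; t]; minimize 1^T t subject to -t <= A x - b <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],      #  A x - t <=  b
                 [-A, -np.eye(m)]])    # -A x - t <= -b
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_l1 = res.x[:n]                       # minimizer of ||A x - b||_1
```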

Interior-Point Methods for Nonconvex Nonlinear Programming: Regularization and Warmstarts

In this paper, we investigate the use of an exact primal-dual penalty approach within the framework of an interior-point method for nonconvex nonlinear programming. This approach provides regularization and relaxation, which can aid in solving ill-behaved problems and in warmstarting the algorithm. We present details of our implementation within the LOQO algorithm and provide extensive …
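As a hedged sketch of the primal half of such a penalty approach (the paper's primal-dual version additionally bounds the dual variables), an exact \(\ell_1\) elastic relaxation of an inequality-constrained problem reads

\[
  \min_x\; f(x) \ \ \text{s.t.}\ \ g(x) \ge 0
  \qquad\longrightarrow\qquad
  \min_{x,\,\xi}\; f(x) + c\,\mathbf{1}^{T}\xi \ \ \text{s.t.}\ \ g(x) + \xi \ge 0,\ \ \xi \ge 0.
\]

For a penalty parameter \(c\) larger than the largest optimal Lagrange multiplier, the relaxation recovers a solution of the original problem, while the elastic variables \(\xi\) keep the relaxed problem strictly feasible; this is what provides the regularization and makes warmstarting from a nearby solution safer.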

Primal interior-point method for large sparse minimax optimization

In this paper, we propose an interior-point method for large sparse minimax optimization. After a short introduction, where various barrier terms are discussed, the complete algorithm is introduced and some implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions. Thus nonconvex problems can be solved successfully. The results …
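For reference, the textbook reformulation behind such methods (not specific to this paper) turns the nonsmooth minimax objective into smooth constraints:

\[
  \min_x\ \max_{1 \le i \le m} f_i(x)
  \quad\Longleftrightarrow\quad
  \min_{x,\,z}\ z \quad \text{s.t.}\quad f_i(x) \le z,\ \ i = 1,\dots,m,
\]

to which a log-barrier term such as \(-\mu \sum_{i=1}^{m} \log\bigl(z - f_i(x)\bigr)\) can be attached; choices of this barrier term are what the abstract refers to.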

Sequential Subspace Optimization Method for Large-Scale Unconstrained Problems

We present the Sequential Subspace Optimization (SESOP) method for large-scale smooth unconstrained problems. At each iteration we search for a minimum of the objective function over a subspace spanned by the current gradient and by the directions of a few previous steps. We also include in this subspace the direction from the starting point to the …
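A hedged sketch of the subspace iteration (the actual SESOP subspace also contains the direction from the starting point mentioned above and a weighted gradient sum, and the inner problem can exploit cheap evaluations along the subspace):

```python
import numpy as np
from scipy.optimize import minimize

def subspace_method(f, grad, x0, n_iter=50, memory=2):
    """At each iteration, minimize f over the affine subspace spanned by the
    current gradient and the last `memory` steps (coefficients solved by BFGS)."""
    x = x0.astype(float).copy()
    steps = []
    for _ in range(n_iter):
        D = np.column_stack([grad(x)] + steps[-memory:])   # subspace basis
        inner = minimize(lambda a: f(x + D @ a), np.zeros(D.shape[1]))
        step = D @ inner.x
        x = x + step
        steps.append(step)
    return x

# Ill-conditioned quadratic test problem.
n = 100
d = np.logspace(0, 4, n)
x_min = subspace_method(lambda x: 0.5 * x @ (d * x), lambda x: d * x, np.ones(n))
```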