A unified approach for inversion problems in intensity-modulated radiation therapy

We propose and study a unified model for handling dose constraints (physical dose, equivalent uniform dose (EUD), etc.) and radiation source constraints in a single mathematical framework based on the split feasibility problem. The model does not impose on the constraints an exogenous objective (merit) function. The optimization algorithm minimizes a weighted proximity function that …
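For orientation, here is a minimal sketch of a projection-gradient (CQ-type) iteration for the split feasibility problem, which takes gradient steps on a proximity function and projects back onto the source-constraint set; the matrix, the box-shaped sets, and the step size below are illustrative placeholders, not the weighted formulation of the paper.

```python
import numpy as np

def cq_iteration(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Projection-gradient (CQ-type) iteration for the split feasibility
    problem: find x in C with A x in Q.  Each step takes a gradient step on
    the proximity function 0.5*||A x - P_Q(A x)||^2 and projects onto C."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # gradient of the proximity function
        x = proj_C(x - gamma * grad)     # feasibility step for the source constraints
    return x

# Hypothetical example: C = box [0, 1]^n (e.g. intensity bounds),
# Q = box of prescribed dose intervals; A is a made-up influence matrix.
rng = np.random.default_rng(0)
A = rng.random((5, 8))
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.5, 2.0)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 2 / ||A||^2
x = cq_iteration(A, proj_C, proj_Q, np.zeros(8), gamma)
```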

A Particle Swarm Pattern Search Method for Bound Constrained Nonlinear Optimization

In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search …
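As a rough illustration of the underlying poll step, here is a bare-bones coordinate-search routine for bound-constrained, derivative-free minimization; the particle-swarm search step described in the paper is omitted.

```python
import numpy as np

def coordinate_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
    """Minimal coordinate-search (pattern search) sketch: poll +/- each
    coordinate direction, accept the first improving feasible point,
    otherwise halve the step size."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                y = x.copy()
                y[i] = np.clip(y[i] + s, lower[i], upper[i])  # keep poll point feasible
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # unsuccessful poll: refine the mesh
            if step < tol:
                break
    return x, fx

# Hypothetical usage on a simple bound-constrained quadratic.
x, fx = coordinate_search(lambda z: np.sum((z - 0.3) ** 2),
                          x0=np.array([1.0, -1.0]),
                          lower=np.array([-2.0, -2.0]),
                          upper=np.array([2.0, 2.0]))
```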

A local convergence property of primal-dual methods for nonlinear programming

We prove a new local convergence property of a primal-dual method for solving nonlinear optimization problems. Following a standard interior point approach, the complementarity conditions of the original primal-dual system are perturbed by a parameter which is driven to zero during the iterations. The sequence of iterates is generated by a linearization of the perturbed …
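For reference, a standard form of the perturbed primal-dual (KKT) system that such interior point methods linearize at each iteration; the paper's exact formulation may differ in detail.

```latex
% Perturbed system for  min f(x)  s.t.  c(x) = 0,  x >= 0,  with multipliers
% (lambda, z), X = diag(x), Z = diag(z), and barrier parameter mu driven to 0;
% one Newton linearization of this system gives the primal-dual step.
\begin{aligned}
\nabla f(x) - \nabla c(x)\,\lambda - z &= 0,\\
c(x) &= 0,\\
X Z e &= \mu e .
\end{aligned}
```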

Second-order convergence properties of trust-region methods using incomplete curvature information, with an application to multigrid optimization

Convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function’s local curvature is incomplete, in the sense that it may be restricted to a fixed set of “test directions” and may not be available at every iteration. It is shown that convergence to local “weak” …
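For readers who want the notation, these are the standard trust-region ingredients referred to above; in this setting the model Hessian B_k need not capture curvature outside the fixed test directions.

```latex
% Quadratic model, trust-region subproblem, and acceptance ratio.
\begin{aligned}
& m_k(s) = f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s,
  \qquad \min_{\|s\| \le \Delta_k} m_k(s),\\
& \rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(0) - m_k(s_k)},
  \qquad \text{accept $s_k$ and possibly enlarge $\Delta_k$ when $\rho_k$ is large enough.}
\end{aligned}
```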

Continuous Optimization Methods for Structure Alignments

Structural Alignment is an important tool for fold identification of proteins, structural screening on ligand databases, pharmacophore identification and other applications. In the general case, the optimization problem of superimposing two structures is nonsmooth and nonconvex, so that most popular methods are heuristic and do not employ derivative information. Usually, these methods do not admit …
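One smooth building block of superposition problems is rigid-body alignment with a fixed correspondence, solvable in closed form via the Kabsch/SVD construction; the sketch below is only that building block, not the authors' alignment method, which must also handle the correspondence.

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Optimal rigid superposition (Kabsch/SVD) of two corresponding point
    sets P, Q of shape (N, 3): returns the rotation R and translation t
    minimizing the RMSD between R p_i + t and q_i."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)    # center both structures
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations (reflections)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```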

The Rate of Convergence of the Augmented Lagrangian Method for Nonlinear Semidefinite Programming

We analyze the rate of local convergence of the augmented Lagrangian method for nonlinear semidefinite optimization. The presence of the positive semidefinite cone constraint requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and certain variational analysis on the projection operator in the symmetric-matrix space. Without …
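For context, a standard form of the augmented Lagrangian for a nonlinear semidefinite constraint, written with the projection onto the positive semidefinite cone that the analysis above works with; the exact form used in the paper may differ.

```latex
% Augmented Lagrangian for  min f(x)  s.t.  G(x) \succeq 0  (G maps into the
% space of symmetric m x m matrices), penalty c > 0, multiplier Lambda \succeq 0,
% and Pi the projection onto the positive semidefinite cone:
\begin{aligned}
L_c(x,\Lambda) &= f(x) + \frac{1}{2c}\Bigl(\bigl\|\Pi_{\mathcal{S}^m_+}\!\bigl(\Lambda - c\,G(x)\bigr)\bigr\|_F^2 - \|\Lambda\|_F^2\Bigr),\\
\Lambda_{k+1} &= \Pi_{\mathcal{S}^m_+}\!\bigl(\Lambda_k - c_k\,G(x_{k+1})\bigr).
\end{aligned}
```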

Kantorovich’s Majorants Principle for Newton’s Method

We prove Kantorovich’s theorem on Newton’s method using a convergence analysis that makes clear the relationship between the majorant function and the nonlinear operator under consideration. This approach enables us to drop the assumption that the majorant function has a second root, while still guaranteeing a Q-quadratic convergence rate …
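For readers who want the classical statement being revisited, here is the standard quadratic majorant and Kantorovich condition (a textbook formulation, not necessarily the exact one of the paper).

```latex
% Classical data:  \|F'(x_0)^{-1} F(x_0)\| \le b   and
% \|F'(x_0)^{-1}[F'(x) - F'(y)]\| \le L\,\|x - y\|.  The quadratic majorant,
% the Kantorovich condition, and the majorizing Newton sequence read
\begin{aligned}
f(t) &= \tfrac{L}{2}\,t^{2} - t + b, \qquad 2 L b \le 1,\\
t_{k+1} &= t_k - \frac{f(t_k)}{f'(t_k)}, \quad t_0 = 0,
\qquad \|x_{k+1} - x_k\| \le t_{k+1} - t_k .
\end{aligned}
```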

A Note on Sparse SOS and SDP Relaxations for Polynomial Optimization Problems over Symmetric Cones

This short note extends the sparse SOS (sum of squares) and SDP (semidefinite programming) relaxation proposed by Waki, Kim, Kojima and Muramatsu for normal POPs (polynomial optimization problems) to POPs over symmetric cones, and establishes its theoretical convergence based on the recent convergence result by Lasserre on the sparse SOS and SDP relaxation for normal …
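As background on the object being relaxed: a polynomial is a sum of squares exactly when it admits a positive semidefinite Gram representation, which turns polynomial lower-bound problems into SDPs; the sparse variant restricts the monomial vector according to the correlative sparsity pattern.

```latex
% z(x) is a vector of monomials; Q is the Gram matrix.
\begin{aligned}
p(x) &= z(x)^{\top} Q\, z(x), \quad Q \succeq 0,\\
\max_{\gamma}\ &\gamma \quad \text{s.t.} \quad p(x) - \gamma \ \text{is SOS}.
\end{aligned}
```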

On the behavior of the conjugate-gradient method on ill-conditioned problems

We study the behavior of the conjugate-gradient method for solving a set of linear equations where the matrix is symmetric and positive definite, with one set of eigenvalues that are large while the remaining ones are small. We characterize the behavior of the residuals associated with the large eigenvalues throughout the iterations, and also characterize the …
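A textbook conjugate-gradient routine, included only to make precise which residuals are being tracked; the eigenvalue layout in the hypothetical example mimics the "few large, many small" setting described above.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Standard CG for A x = b with A symmetric positive definite; also
    returns the residual norms, the quantities one would monitor to observe
    the behavior discussed above."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()                     # residual b - A x (x starts at zero)
    p = r.copy()
    res_norms = [np.linalg.norm(r)]
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        res_norms.append(np.linalg.norm(r_new))
        if res_norms[-1] < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, res_norms

# Hypothetical ill-conditioned example: a few large eigenvalues, the rest small.
eigs = np.concatenate([np.array([1e4, 5e3, 2e3]), np.full(47, 1e-2)])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((50, 50)))
A = Q @ np.diag(eigs) @ Q.T
x, res = conjugate_gradient(A, np.ones(50))
```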

An interior Newton-like method for nonnegative least-squares problems with degenerate solution

An interior point approach for medium and large nonnegative linear least-squares problems is proposed. Global convergence and a locally quadratic rate are shown even when a degenerate solution is approached. Viable approaches for implementation are discussed and numerical results are provided.
Citation: Technical Report 1/2005, Dipartimento di Energetica ‘S. Stecco’, Università di Firenze, Italia.
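To state the problem concretely, here is a simple projected-gradient baseline for nonnegative least squares on a hypothetical rank-deficient instance; this is not the interior Newton-like method of the report.

```python
import numpy as np

def projected_gradient_nnls(A, b, iters=2000):
    """Projected-gradient baseline for  min 0.5 * ||A x - b||^2  s.t.  x >= 0.
    Included only as a compact statement of the nonnegative least-squares
    problem addressed by the report."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)    # gradient step, then project onto x >= 0
    return x

# Hypothetical usage with a rank-deficient matrix (degenerate solution set).
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
A[:, -1] = A[:, 0]                           # duplicate column induces degeneracy
x = projected_gradient_nnls(A, A @ np.abs(rng.standard_normal(10)))
```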