On the convergence of the modified Levenberg-Marquardt method with a nonmonotone second order Armijo type line search

Recently, Fan [4, Math. Comput., 81 (2012), pp. 447-466] proposed a modified Levenberg-Marquardt (MLM) method for nonlinear equations. Using a trust region technique, [4] proves global and cubic convergence of the MLM method under the local error bound condition, which is weaker than nonsingularity. The purpose of this paper is to investigate the convergence … Read more
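
For readers unfamiliar with the underlying iteration, here is a minimal sketch of a classical Levenberg-Marquardt step for a square system F(x) = 0, written in Python with NumPy. It is not Fan's modified LM method (which adapts the LM parameter and adds an approximate step within a trust region); the fixed parameter mu, the tolerance, and the toy system are illustrative assumptions.

```python
# Minimal sketch of a classical Levenberg-Marquardt iteration for F(x) = 0.
# This is NOT Fan's modified LM method; it only illustrates the basic step
# x+ = x - (J^T J + mu I)^{-1} J^T F(x) that the MLM method builds on.
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1e-2, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Regularized Gauss-Newton step; mu > 0 keeps the linear system
        # nonsingular even when J(x) is rank deficient.
        step = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
        x = x + step
    return x

# Toy system with root (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(levenberg_marquardt(F, J, np.array([2.0, 0.5])))
```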

NUMERICAL OPTIMIZATION METHODS FOR BLIND DECONVOLUTION

This paper describes a nonlinear least squares framework to solve a separable nonlinear ill-posed inverse problem that arises in blind deconvolution. It is shown that with proper constraints and well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved and the nonlinear minimization problem can be effectively solved … Read more
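
The "separable" structure mentioned above is often handled by variable projection: for fixed nonlinear kernel parameters the linear subproblem is solved exactly, and an outer routine optimizes the reduced objective over the kernel parameters. The sketch below illustrates that structure on a hypothetical 1-D Gaussian blur; the kernel model, the Tikhonov regularization, and all names are assumptions for illustration, not the paper's formulation.

```python
# Sketch of the separable (variable-projection) structure, assuming a
# blurring model b ≈ K(p) x with nonlinear kernel parameters p.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel_matrix(p, n):
    # Hypothetical 1-D Gaussian blur matrix parameterized by width p[0].
    i = np.arange(n)
    K = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * p[0] ** 2))
    return K / K.sum(axis=1, keepdims=True)

def reduced_objective(p, b, lam):
    # Inner linear problem: min_x ||K(p)x - b||^2 + lam||x||^2, solved exactly.
    K = gaussian_kernel_matrix(p, b.size)
    x = np.linalg.solve(K.T @ K + lam * np.eye(b.size), K.T @ b)
    r = K @ x - b
    return r @ r + lam * (x @ x)

rng = np.random.default_rng(0)
x_true = np.zeros(40); x_true[15:25] = 1.0
b = gaussian_kernel_matrix([2.0], 40) @ x_true + 1e-3 * rng.standard_normal(40)
res = minimize(reduced_objective, x0=[1.0], args=(b, 1e-3),
               bounds=[(0.3, 10.0)], method="L-BFGS-B")
print("estimated blur width:", res.x)
```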

On spectral properties of steepest descent methods

In recent years it has become increasingly clear that the critical issue in gradient methods is the choice of the step length, whereas using the gradient as search direction may lead to very effective algorithms, whose surprising behaviour has been only partially explained, mostly in terms of the spectrum of the Hessian … Read more
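
As a concrete illustration of how the step length interacts with the Hessian spectrum, the sketch below runs a Barzilai-Borwein gradient method on a convex quadratic, where the step length is the reciprocal of a Rayleigh quotient of the Hessian. It is a generic example, not one of the steepest descent variants analysed in the paper; the matrix and stopping tolerance are assumptions.

```python
# Gradient method on f(x) = 0.5 x^T A x - b^T x with the BB1 step length,
# which equals the inverse of a Rayleigh quotient of A; this is one way the
# spectrum of the Hessian enters the analysis of gradient methods.
import numpy as np

A = np.diag([1.0, 10.0, 100.0])          # well-separated spectrum
b = np.ones(3)
grad = lambda x: A @ x - b

def bb_gradient(x, iters=50):
    g_old, x_old = grad(x), x.copy()
    x = x - 1e-3 * g_old                 # small initial gradient step
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:    # stop once (numerically) optimal
            break
        s, y = x - x_old, g - g_old
        alpha = (s @ s) / (s @ y)        # BB1 step: inverse Rayleigh quotient of A
        x_old, g_old = x.copy(), g
        x = x - alpha * g
    return x

print("BB solution:   ", bb_gradient(np.zeros(3)))
print("exact solution:", np.linalg.solve(A, b))
```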

CONJUGATE GRADIENT WITH SUBSPACE OPTIMIZATION

In this paper we present a variant of the conjugate gradient (CG) algorithm in which we invoke a subspace minimization subproblem on each iteration. We call this algorithm CGSO for “conjugate gradient with subspace optimization”. It is related to earlier work by Nemirovsky and Yudin. We apply the algorithm to solve unconstrained strictly convex problems. … Read more
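
The following sketch shows the subspace-minimization idea in its simplest setting: for a strictly convex quadratic, each iteration minimizes the objective exactly over the two-dimensional subspace spanned by the current gradient and the previous step, which reproduces linear CG. It only illustrates the mechanism; CGSO itself, as described in the paper, applies subspace optimization to general strictly convex problems.

```python
# Subspace minimization on a strictly convex quadratic
# f(x) = 0.5 x^T A x - b^T x: minimize f exactly over
# x + span{gradient, previous step} at every iteration.
import numpy as np

def subspace_cg(A, b, x0, iters=20):
    x = x0.copy()
    prev_step = None
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-10:
            break
        # Basis of the search subspace.
        S = np.column_stack([g] if prev_step is None else [g, prev_step])
        # Minimize f(x + S t) over t: solve (S^T A S) t = -S^T g.
        t = np.linalg.solve(S.T @ A @ S, -S.T @ g)
        step = S @ t
        x, prev_step = x + step, step
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(subspace_cg(A, b, np.zeros(3)))
print(np.linalg.solve(A, b))
```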

Smoothing SQP Algorithm for Non-Lipschitz Optimization with Complexity Analysis

In this paper, we propose a smoothing sequential quadratic programming (SSQP) algorithm for solving a class of nonsmooth, nonconvex, and possibly non-Lipschitz minimization problems, which has wide applications in statistics and sparse reconstruction. At each step, the SSQP algorithm solves a strongly convex quadratic minimization problem with a diagonal Hessian matrix, which has a simple … Read more
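
The key computational point in the abstract is that each subproblem is a strongly convex quadratic with a diagonal Hessian, so it decouples coordinate-wise and can be solved in closed form. The sketch below shows this for a box-constrained diagonal quadratic; the bounds and data are illustrative, not the SSQP subproblem itself.

```python
# A strongly convex quadratic with diagonal Hessian decouples across
# coordinates: each component is minimized independently and then clipped
# to its bounds.
import numpy as np

def solve_diagonal_qp(g, d, lo, hi):
    """Minimize g^T x + 0.5 * x^T diag(d) x subject to lo <= x <= hi (d > 0)."""
    x_unconstrained = -g / d          # coordinate-wise minimizer
    return np.clip(x_unconstrained, lo, hi)

g = np.array([1.0, -2.0, 0.5])
d = np.array([2.0, 4.0, 1.0])         # strictly positive diagonal Hessian
print(solve_diagonal_qp(g, d, lo=-0.3, hi=0.3))
```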

A von Neumann Alternating Method for Finding Common Solutions to Variational Inequalities

Modifying von Neumann’s alternating projections algorithm, we obtain an alternating method for solving the recently introduced Common Solutions to Variational Inequalities Problem (CSVIP). For simplicity, we mainly confine our attention to the two-set CSVIP, which entails finding common solutions to two unrelated variational inequalities in Hilbert space. Citation: Nonlinear Analysis Series A: Theory, Methods & … Read more
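
For context, the classical von Neumann alternating projection scheme that the paper modifies looks as follows when applied to two ordinary closed convex sets; the box and ball below and the iteration count are illustrative assumptions, and the CSVIP method replaces these plain projection steps with steps tailored to the two variational inequalities.

```python
# Classical von Neumann alternating projections onto two closed convex sets.
# The sets (a box and a ball) are illustrative; their intersection is nonempty,
# so the iterates converge to a common point.
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

x = np.array([5.0, 5.0])
for _ in range(100):
    x = project_box(x, lo=-1.0, hi=1.0)
    x = project_ball(x, center=np.array([2.0, 0.0]), radius=1.5)
print("point near the intersection:", x)
```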

Constraint Reduction with Exact Penalization for Model-Predictive Rotorcraft Control

Model Predictive Control (also known as Receding Horizon Control (RHC)) has been highly successful in process control applications. Its use for aerospace applications has been hindered by its high computational requirements. In the present paper, we propose using enhanced primal-dual interior-point optimization techniques in the convex-quadratic-program-based RHC of a rotorcraft. Our enhancements include a … Read more
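
To make the "convex-quadratic-program-based RHC" structure concrete, here is a minimal receding-horizon loop for a toy double-integrator model: at each sampling instant a small QP over the horizon is assembled and solved, and only the first control is applied. The model, weights, horizon, and the unconstrained QP (solved by a plain linear solve) are illustrative assumptions; the paper's rotorcraft model, constraints, and constraint-reduced primal-dual interior-point solver are not reproduced here.

```python
# Minimal receding-horizon control loop for a toy double integrator.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), 0.1 * np.eye(1), 20

# Condensed prediction matrices: stacked states X = Phi x0 + Gamma U.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = Gamma.T @ Qbar @ Gamma + Rbar        # QP Hessian (positive definite)

x = np.array([1.0, 0.0])
for t in range(50):
    f = Gamma.T @ Qbar @ (Phi @ x)       # QP linear term for the current state
    U = np.linalg.solve(H, -f)           # unconstrained QP solution over the horizon
    x = A @ x + B.flatten() * U[0]       # apply only the first control, then repeat
print("state after 50 steps:", x)
```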

Reformulation of a model for hierarchical divisive graph modularity maximization

Finding clusters, or communities, in a graph or network is an important problem that arises in many domains. Several models have been proposed for its solution. One of the most studied and exploited is the maximization of the so-called modularity, which represents the sum over all communities of the fraction of edges within these … Read more
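
For reference, the quantity the truncated sentence refers to is presumably the standard (Newman-Girvan) modularity, which can be written as below; the notation (m total edges, m_c intra-community edges, d_c total degree of community c) is an assumption of this sketch.

```latex
% Standard modularity of a partition \mathcal{C} of the vertex set:
% m   = total number of edges,
% m_c = number of edges with both endpoints in community c,
% d_c = sum of the degrees of the vertices of community c.
Q \;=\; \sum_{c \in \mathcal{C}} \left[ \frac{m_c}{m} - \left( \frac{d_c}{2m} \right)^{2} \right]
```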

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

This paper proposes a new probabilistic algorithm for solving multi-objective optimization problems, the Probability-Driven Search Algorithm. The algorithm uses probabilities to control the search for Pareto optimal solutions. In particular, we use an absorbing Markov chain to establish the convergence of the algorithm. We test this approach by implementing the algorithm on some benchmark … Read more

Globally Convergent Evolution Strategies and CMA-ES

In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parents by means of a weighted sum, around which the offspring are computed by … Read more
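
A minimal sketch of the ES class described above is given below: each generation samples offspring around a weighted recombination of the best parents and re-ranks them by objective value. The weights, the crude step-size schedule, and the test function are illustrative assumptions; covariance adaptation (as in CMA-ES) and the modifications the paper introduces to guarantee global convergence are deliberately omitted.

```python
# One-loop sketch of an ES with weighted recombination and truncation selection.
import numpy as np

def weighted_recombination_es(f, x0, sigma=0.5, lam=10, mu=5, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()                      # positive weights summing to 1
    mean = np.asarray(x0, dtype=float)
    for _ in range(generations):
        offspring = mean + sigma * rng.standard_normal((lam, mean.size))
        order = np.argsort([f(z) for z in offspring])
        parents = offspring[order[:mu]]           # truncation selection
        mean = weights @ parents                  # weighted recombination
        sigma *= 0.95                             # crude step-size schedule
    return mean

sphere = lambda x: float(np.sum(x ** 2))
print(weighted_recombination_es(sphere, x0=[2.0, -1.5, 3.0]))
```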