New quasi-Newton method for solving systems of nonlinear equations

In this report, we propose a new Broyden method for solving systems of nonlinear equations. It uses first derivatives, yet is more efficient than Newton's method (measured by computational time) for larger dense systems. The new method updates QR decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires $O(n^2)$ …
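To make the $O(n^2)$ per-iteration cost concrete, here is a minimal sketch of a Broyden iteration that keeps a QR factorization of the Jacobian approximation and refreshes it with a rank-one QR update. This is a generic Broyden scheme with illustrative names and tolerances, not the specific method proposed in the report.

```python
import numpy as np
from scipy.linalg import qr, qr_update, solve_triangular

def broyden_qr(F, x0, tol=1e-10, max_iter=100):
    """Broyden's method for F(x) = 0, maintaining B = QR so that each
    iteration costs O(n^2): one triangular solve plus one QR update."""
    x = np.asarray(x0, dtype=float)
    fx = F(x)
    Q, R = qr(np.eye(x.size))                # initial approximation B0 = I
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = solve_triangular(R, -Q.T @ fx)   # solve B s = -F(x) via QR
        x_new = x + s
        f_new = F(x_new)
        y = f_new - fx
        # Broyden update B <- B + (y - B s) s^T / (s^T s), applied
        # directly to the QR factors in O(n^2) flops.
        u = (y - Q @ (R @ s)) / (s @ s)
        Q, R = qr_update(Q, R, u, s)
        x, fx = x_new, f_new
    return x

# Toy 2x2 system with root (1, 2):
root = broyden_qr(lambda x: np.array([x[0]**2 + x[1] - 3.0,
                                      x[0] + x[1]**2 - 5.0]),
                  np.array([1.5, 1.5]))
```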

A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients

In this paper we consider large-scale nonlinear least-squares problems for which function and gradient are evaluated with dynamic accuracy, and we propose a Levenberg-Marquardt method for solving such problems. More precisely, we consider the case in which the exact function to optimize is not available, or its evaluation is computationally demanding, but approximations of …
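For reference, the classical Levenberg-Marquardt skeleton that such a method builds on is shown below; the paper's contribution, adaptively controlling how accurately the residual and gradient are evaluated, sits on top of this loop and is not reproduced. Names and the damping-update constants are illustrative.

```python
import numpy as np

def levenberg_marquardt(res, jac, x0, lam=1e-2, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt for min 0.5*||res(x)||^2: each step
    solves the damped normal equations (J^T J + lam*I) p = -J^T res."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = res(x), jac(x)
        if np.linalg.norm(J.T @ r) < tol:        # first-order stationarity
            break
        p = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(res(x + p)) < np.linalg.norm(r):
            x, lam = x + p, lam * 0.5            # success: reduce damping
        else:
            lam *= 4.0                           # failure: increase damping
    return x

# Rosenbrock-type residuals r(x) = (x0 - 1, 10*(x1 - x0^2)):
x = levenberg_marquardt(
    lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)]),
    lambda x: np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]]),
    np.zeros(2))
```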

Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization

Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy discussed. A two-phase minimization algorithm is proposed which can achieve approximate first-, second- and third-order criticality, and its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in …
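For orientation, in the unconstrained smooth case the approximate first- and second-order criticality notions that such complexity analyses typically target read as follows (the paper's constrained, higher-order measures generalize these, and its exact definitions differ):

\[
\|\nabla f(x)\| \le \epsilon_1
\qquad \text{and} \qquad
\lambda_{\min}\!\big(\nabla^2 f(x)\big) \ge -\epsilon_2 ,
\]

for prescribed tolerances $\epsilon_1, \epsilon_2 > 0$; third-order conditions constrain the third derivative tensor in an analogous, but substantially more intricate, way.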

Quasi-Newton methods for constrained nonlinear systems: complexity analysis and application

We address the solution of convex constrained nonlinear systems by new linesearch quasi-Newton methods. These methods are based on a proper use of the projection map onto the constraint set and on a derivative-free and nonmonotone linesearch strategy. The convergence properties of the proposed methods are presented, along with a worst-case iteration complexity bound. Several …
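As a rough illustration of the ingredients (not the paper's methods), the sketch below combines a quasi-Newton direction for $F(x)=0$ with a projection onto the constraint set and a derivative-free, nonmonotone acceptance test on $\|F\|$; `proj`, the slack sequence, and all constants are placeholders.

```python
import numpy as np

def projected_quasi_newton(F, proj, x0, tol=1e-8, max_iter=200):
    """Sketch: Broyden direction, projected trial points, and a
    nonmonotone linesearch that only evaluates F (no derivatives)."""
    x = proj(np.asarray(x0, dtype=float))
    fx = F(x)
    B = np.eye(x.size)                        # Broyden approximation of F'
    for k in range(max_iter):
        nf = np.linalg.norm(fx)
        if nf < tol:
            break
        d = np.linalg.solve(B, -fx)
        t = 1.0
        while t > 1e-12:                      # derivative-free backtracking
            x_new = proj(x + t * d)
            f_new = F(x_new)
            # nonmonotone test: a summable slack 1/(k+1)^2 relaxes descent
            if np.linalg.norm(f_new) <= nf + 1.0 / (k + 1)**2 - 1e-4 * t**2 * (d @ d):
                break
            t *= 0.5
        s, y = x_new - x, f_new - fx
        B += np.outer(y - B @ s, s) / max(s @ s, 1e-16)   # Broyden update
        x, fx = x_new, f_new
    return x

# e.g. nonnegativity constraints: proj = lambda z: np.clip(z, 0.0, None)
```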

Approximate norm descent methods for constrained nonlinear systems

We address the solution of convex-constrained nonlinear systems of equations where the Jacobian matrix is unavailable or its computation/storage is burdensome. In order to efficiently solve such problems, we propose a new class of algorithms which are “derivative-free” both in the computation of the search direction and in the selection of the steplength. Search directions …
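A generic representative of this class (in the spirit of spectral residual methods such as DF-SANE, not the algorithms proposed in the paper) takes the residual itself as the search direction and picks the scaling from iterate and residual differences, so neither the direction nor the steplength ever touches the Jacobian:

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    """Derivative-free sketch: direction -sigma * F(x) with a spectral
    coefficient sigma; approximate norm descent is enforced through a
    small summable slack.  (The projection step used for constrained
    problems in the paper is omitted here.)"""
    x = np.asarray(x0, dtype=float)
    fx = F(x)
    sigma = 1.0
    for k in range(max_iter):
        nf = np.linalg.norm(fx)
        if nf < tol:
            break
        t = 1.0
        while t > 1e-12:
            x_new = x - t * sigma * fx
            f_new = F(x_new)
            # allow ||F|| to grow slightly, by a summable amount
            if np.linalg.norm(f_new) <= nf * (1.0 + 1.0 / (k + 1)**2) - 1e-4 * t**2 * nf**2:
                break
            t *= 0.5
        s, y = x_new - x, f_new - fx
        sigma = (s @ s) / (s @ y) if abs(s @ y) > 1e-16 else 1.0
        x, fx = x_new, f_new
    return x
```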

Low-Rank Matrix Completion using Nuclear Norm with Facial Reduction

Minimization of the nuclear norm is often used as a surrogate (convex relaxation) for finding the minimum-rank completion (recovery) of a partial matrix. The minimum nuclear norm problem can be solved as a trace-minimization semidefinite programming problem (SDP). The SDP and its dual are regular in the sense that they both satisfy strict …
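As a point of reference for the problem being discussed, a bare-bones nuclear-norm completion is a few lines of CVXPY (assuming `cvxpy` is available; the facial-reduction preprocessing the paper develops to handle the loss of strict feasibility is not shown):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(6))   # rank-1 target
mask = rng.random(M.shape) < 0.5                               # observed entries

X = cp.Variable(M.shape)
fixed = [X[i, j] == M[i, j] for i, j in zip(*np.nonzero(mask))]
cp.Problem(cp.Minimize(cp.normNuc(X)), fixed).solve()
```

Under the hood, `normNuc` is handled through the standard semidefinite lifting, which is precisely the trace-minimization SDP mentioned in the abstract: minimize $(\mathrm{tr}\,W_1 + \mathrm{tr}\,W_2)/2$ subject to $\begin{pmatrix} W_1 & X \\ X^T & W_2 \end{pmatrix} \succeq 0$.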

Backward Step Control for Global Newton-type Methods

We present and analyze a new damping approach called backward step control for the globalization of the convergence of Newton-type methods for the numerical solution of nonlinear root-finding problems. We provide and discuss reasonable assumptions that imply convergence of backward step control on the basis of generalized Newton paths in conjunction with a backward analysis …
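For contrast with the backward-step-control rule analyzed in the paper (which is not reproduced here), a generic damped Newton iteration with a simple residual-monotonicity backtracking test looks as follows; all names and constants are illustrative:

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Globalized Newton sketch: full Newton direction, damped by a
    steplength t chosen so the residual norm decreases roughly like
    (1 - t/2) per step (a restrictive-monotonicity-style test)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        d = np.linalg.solve(J(x), -fx)       # Newton direction
        t = 1.0
        while np.linalg.norm(F(x + t * d)) > (1.0 - 0.5 * t) * np.linalg.norm(fx):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
    return x
```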

Robust Numerical Calibration for Implied Volatility Expansion Models

Implied volatility expansions allow calibration of sophisticated volatility models. They provide an accurate fit and parametrization of implied volatility surfaces that is consistent with empirical observations. Fine-grained higher order expansions offer a better fit but pose the challenge of finding a robust, stable and computationally tractable calibration procedure due to a large number of market …
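At its core, such a calibration is a nonlinear least-squares fit of expansion coefficients to quoted implied volatilities. The toy sketch below fits a second-order expansion in log-moneyness to synthetic quotes with SciPy; the model, data, and parameter names are invented for illustration, and the paper's expansions and robustness machinery are far richer.

```python
import numpy as np
from scipy.optimize import least_squares

k = np.linspace(-0.3, 0.3, 25)                    # log-moneyness grid
rng = np.random.default_rng(0)
market_iv = 0.20 - 0.10 * k + 0.50 * k**2         # synthetic "market" smile
market_iv += 0.002 * rng.standard_normal(k.size)  # quote noise

def residuals(theta):
    a0, a1, a2 = theta                            # expansion coefficients
    return a0 + a1 * k + a2 * k**2 - market_iv

fit = least_squares(residuals, x0=np.array([0.2, 0.0, 0.0]))
```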

New Improved Penalty Methods for Sparse Reconstruction Based on Difference of Two Norms

In this paper, we further establish two types of DC (Difference of Convex functions) programming for $l_0$ sparse reconstruction. Our DC objective functions are specified to the difference of two norms. One is the difference of $l_1$ and $l_{\sigma_q}$ norms (DC $l_1$-$l_{\sigma_q}$ for short) where $l_{\sigma_q}$ is the $l_1$ norm of the $q$-term ($q\geq1$) best …
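Such DC programs are typically solved with the DCA: at each iteration the concave part $-l_{\sigma_q}$ is linearized at the current point via a subgradient (the signs of the $q$ largest-magnitude entries), and the remaining convex subproblem is solved exactly. A minimal sketch with CVXPY, using an equality constraint $Ax = b$ chosen purely for illustration:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, q = 20, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:q] = [3.0, -2.0, 1.5]                 # q-sparse ground truth
b = A @ x_true

x_k = np.zeros(n)
for _ in range(10):                           # DCA outer loop
    # subgradient of l_sigma_q at x_k: signs on the q largest entries
    g = np.zeros(n)
    idx = np.argsort(-np.abs(x_k))[:q]
    g[idx] = np.sign(x_k[idx])
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x) - g @ x), [A @ x == b]).solve()
    x_k = x.value
```

With $x_0 = 0$ the first subproblem is plain $l_1$ minimization (basis pursuit), after which the linearization progressively discounts the $q$ largest entries.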

Alternating direction methods for nonconvex optimization with applications to second-order least-squares and risk parity portfolio selection

In this paper we mainly focus on optimization of sums of squares of quadratic functions, which we refer to as second-order least-squares problems, subject to convex constraints. Our motivation arises from applications in risk parity portfolio selection. We generalize the setting further by considering a class of nonlinear, nonconvex functions which admit a (non …
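To see why risk parity fits this mold: equalizing the risk contributions $x_i(\Sigma x)_i$ across assets is naturally posed as minimizing a sum of squares of quadratic functions. The sketch below states that objective and minimizes it with a generic solver (SciPy's SLSQP) rather than the alternating direction methods developed in the paper; the covariance data are made up.

```python
import numpy as np
from scipy.optimize import minimize

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])       # toy covariance matrix
n = Sigma.shape[0]

def risk_parity_obj(x):
    rc = x * (Sigma @ x)                     # risk contributions x_i (Sigma x)_i
    return np.sum((rc[:, None] - rc[None, :])**2)

res = minimize(risk_parity_obj, np.ones(n) / n,
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
```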