High-order Evaluation Complexity of a Stochastic Adaptive Regularization Algorithm for Nonconvex Optimization Using Inexact Function Evaluations and Randomly Perturbed Derivatives

A stochastic adaptive regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing strong approximate minimizers of any order for inexpensively constrained smooth optimization problems. For an objective function with Lipschitz continuous p-th derivative in a convex neighbourhood of the feasible set and given an arbitrary optimality order q, it …
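
As a rough illustration (our notation, not quoted from the paper), adaptive regularization methods of order p minimize, at each iteration, a regularized Taylor model of the form

\[
m_k(s) \;=\; f(x_k) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j \;+\; \frac{\sigma_k}{(p+1)!}\,\|s\|^{p+1},
\]

where \sigma_k > 0 is an adaptively updated regularization weight (the normalization of the last term varies across papers); in the stochastic setting considered here, the derivatives \nabla^j f(x_k) are replaced by randomly perturbed estimates and f(x_k) by an inexact value.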

A Limiting Analysis on Regularization of Singular SDP and its Implication to Infeasible Interior-point Algorithms

We consider primal-dual pairs of semidefinite programs and assume that they are ill-posed, i.e., both primal and dual are either weakly feasible or weakly infeasible. Under such circumstances, strong duality may break down and the primal and dual might have a nonzero duality gap. Nevertheless, there are arbitrarily small perturbations to the problem data which …
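
For context, the standard primal-dual pair in question can be written (notation assumed) as

\[
\mathrm{(P)}\ \ \min_{X \succeq 0}\ \langle C, X\rangle\ \ \text{s.t.}\ \ \langle A_i, X\rangle = b_i,\ i=1,\dots,m,
\qquad
\mathrm{(D)}\ \ \max_{y,\,S \succeq 0}\ b^\top y\ \ \text{s.t.}\ \ \textstyle\sum_{i=1}^m y_i A_i + S = C.
\]

Here "weakly feasible" means feasible but with no interior (Slater) point, and "weakly infeasible" means infeasible while the affine constraint set lies at zero distance from the semidefinite cone.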

On the Cluster-aware Supervised Learning (CluSL): Frameworks, Convergent Algorithms, and Applications

This paper proposes a cluster-aware supervised learning (CluSL) framework, which integrates clustering analysis with supervised learning (SL). The objective of CluSL is to simultaneously find the best clusters of the data points and minimize the sum of loss functions within each cluster. This framework has many potential applications in healthcare, operations management, manufacturing, and …
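
One way to write the joint objective described above (an illustrative formulation in our notation, not necessarily the paper's):

\[
\min_{\{C_k\}_{k=1}^{K},\ \{\theta_k\}_{k=1}^{K}}\ \sum_{k=1}^{K} \sum_{i \in C_k} \ell\big(y_i,\, f_{\theta_k}(x_i)\big),
\]

where \{C_k\} is a partition of the data points into K clusters and \theta_k parameterizes the supervised model fitted within cluster k, so the clustering and the per-cluster fits are optimized simultaneously rather than sequentially.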

Objective Selection for Cancer Treatment: An Inverse Optimization Approach

In radiation therapy treatment-plan optimization, selecting a set of clinical objectives that are tractable and parsimonious yet effective is a challenging task. In clinical practice, this is typically done by trial and error based on the treatment planner’s subjective assessment, which often makes the planning process inefficient and inconsistent. We develop the objective selection problem …
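
Schematically, and in generic notation rather than the paper's, inverse optimization here asks: given a clinically accepted plan \hat{x}, find objective weights (or a sparse subset of objectives) under which \hat{x} is near-optimal for the forward planning problem,

\[
\min_{w \in W}\ d\big(\hat{x},\, x^\ast(w)\big)
\qquad \text{where} \qquad
x^\ast(w) \in \arg\min_{x \in X}\ \sum_{j} w_j\, g_j(x),
\]

with g_j the candidate clinical objectives and d a measure of discrepancy between the observed and model-optimal plans.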

Deep Unfolding of a Proximal Interior Point Method for Image Restoration

Variational methods are widely applied to ill-posed inverse problems because they can embed prior knowledge about the solution. However, the performance of these methods depends significantly on a set of parameters, which can be estimated through computationally expensive and time-consuming procedures. In contrast, deep learning offers very generic and efficient …
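
Deep unfolding turns a fixed number of iterations of such a method into network layers. As a hedged sketch, simpler than the proximal interior point scheme unfolded in the paper, K unrolled proximal-gradient steps read

\[
x^{(k+1)} \;=\; \operatorname{prox}_{\gamma_k g}\!\big(x^{(k)} - \gamma_k \nabla f(x^{(k)})\big), \qquad k = 0,\dots,K-1,
\]

where the step sizes \gamma_k (and, in an interior-point variant, the barrier parameters) become learnable layer parameters trained end-to-end, so expensive hand-tuning of parameters is replaced by learning from data.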

Adaptive regularization algorithms with inexact evaluations for nonconvex optimization

A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is β-Hölder continuous. It features a …
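
The smoothness assumption mentioned above is, in the usual notation,

\[
\|\nabla^p f(x) - \nabla^p f(y)\| \;\le\; L\,\|x - y\|^{\beta}, \qquad \beta \in (0,1],
\]

for all x, y in the region of interest; the case \beta = 1 recovers Lipschitz continuity of the p-th derivative.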

Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e., problems where the cost of evaluating/enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds …
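
The headline bound in this line of work (see the paper for the exact assumptions and constants) takes the form

\[
O\!\Big(\epsilon^{-\frac{p+1}{p-q+1}}\Big)
\]

evaluations of the objective and its derivatives to reach an \epsilon-approximate minimizer of order q using Taylor models of degree p; for q = 1 and p = 2 this recovers the familiar O(\epsilon^{-3/2}) bound of cubic regularization.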

On the complexity of an Inexact Restoration method for constrained optimization

Recent papers indicate that some algorithms for constrained optimization may exhibit worst-case complexity bounds that are very similar to those of unconstrained optimization algorithms. A natural question is whether well-established practical algorithms, perhaps with small variations, may enjoy analogous complexity results. In the present paper we show that the answer is positive with respect …
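
Schematically, one iteration of a classical Inexact Restoration method (our notation; details vary across variants) has two phases:

\[
\text{(restoration)}\ \ \|h(y_k)\| \,\le\, r\,\|h(x_k)\|,\ \ r \in (0,1);
\qquad
\text{(optimization)}\ \ x_{k+1} \,\approx\, \arg\min\ \big\{\, m_k(x) \,:\, \nabla h(y_k)^\top (x - y_k) = 0 \,\big\},
\]

i.e., first reduce the infeasibility measure \|h\| from x_k to y_k, then approximately minimize a model m_k of the objective on a linearization of the feasible set, accepting the combined step through a merit function or filter.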

A stochastic Levenberg-Marquardt method using random models with complexity results and application to data assimilation

Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes …
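
At each iteration, the Levenberg-Marquardt step s_k for a residual vector r_k with Jacobian J_k solves

\[
\big(J_k^\top J_k + \lambda_k I\big)\, s_k \;=\; -\,J_k^\top r_k,
\]

where \lambda_k \ge 0 is the damping parameter: \lambda_k = 0 gives the Gauss-Newton step, while large \lambda_k yields a short gradient-like step. In the stochastic variant studied here, J_k and r_k come from a random model that is only accurate with a given probability.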

On an Elliptical Trust-Region Procedure for Ill-Posed Nonlinear Least-Squares Problems

In this paper we address the stable numerical solution of ill-posed nonlinear least-squares problems with small residual. We propose an elliptical trust-region reformulation of a Levenberg-Marquardt procedure. Thanks to an appropriate choice of the trust-region radius, the proposed procedure guarantees an automatic choice of the free regularization parameters that, together with a suitable stopping criterion, …
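
The connection exploited here is standard (notation assumed): the elliptical trust-region subproblem

\[
\min_{s}\ \|J_k s + r_k\|^2 \quad \text{s.t.} \quad \|D_k s\| \le \Delta_k
\]

admits a solution satisfying \big(J_k^\top J_k + \lambda_k D_k^\top D_k\big)\,s_k = -J_k^\top r_k for some multiplier \lambda_k \ge 0, which plays the role of the Levenberg-Marquardt regularization parameter; choosing the radius \Delta_k thus implicitly selects the regularization, which is what makes an automatic choice possible.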