Hölder Gradient Descent and Adaptive Regularization Methods in Banach Spaces for First-Order Points

This paper considers optimization of smooth nonconvex functionals in smooth infinite-dimensional spaces. A Hölder gradient descent algorithm is first proposed for finding approximate first-order points of regularized polynomial functionals. This method is then applied to analyze the evaluation complexity of an adaptive regularization method which searches for approximate first-order points of functionals with $\beta$-Hölder … Read more
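
For intuition, here is a minimal finite-dimensional sketch of the step-size rule implied by a $\beta$-Hölder-continuous gradient (the names `holder_gradient_descent`, `L`, and `beta` are illustrative; the paper's Banach-space setting and regularized polynomial functionals are not modeled):

```python
import numpy as np

def holder_gradient_descent(grad, x0, L, beta, eps=1e-6, max_iter=10_000):
    """Gradient descent under the Holder condition
        ||grad(x) - grad(y)|| <= L * ||x - y||**beta.
    The descent lemma f(x - t*g) <= f(x) - t*||g||**2
        + L * t**(1+beta) * ||g||**(1+beta) / (1+beta)
    is minimized at t = ||g||**((1-beta)/beta) / L**(1/beta)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:                     # approximate first-order point
            return x
        t = gnorm ** ((1.0 - beta) / beta) / L ** (1.0 / beta)
        x = x - t * g
    return x

# beta = 1 recovers the classical Lipschitz step 1/L on a quadratic.
x_min = holder_gradient_descent(lambda x: 2.0 * x, np.array([3.0, -1.0]),
                                L=2.0, beta=1.0)
```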

The Impact of Noise on Evaluation Complexity: The Deterministic Trust-Region Case

Intrinsic noise in objective-function and derivative evaluations may cause premature termination of optimization algorithms. Evaluation complexity bounds taking this situation into account are presented in the framework of a deterministic trust-region method. The results show that the presence of intrinsic noise may dominate these bounds, in contrast with what is known for methods in … Read more
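
As a rough illustration of the premature-termination effect (a simplification of mine, not the paper's algorithm or bounds): a Cauchy-step trust-region loop that stops once the predicted model decrease falls below an assumed evaluation-noise level `eps_f`.

```python
import numpy as np

def noisy_tr(f, grad, x, delta=1.0, eps_f=1e-3, iters=100):
    """Trust-region loop taking only Cauchy (steepest-descent) steps, with a
    noise-aware stopping test: once the model decrease available inside the
    region drops below the evaluation-noise level eps_f, progress can no
    longer be certified from noisy values of f, so the loop stops."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        pred = delta * gnorm             # linear-model decrease of the step
        if pred <= eps_f:                # predicted gain is within the noise
            return x
        s = -(delta / gnorm) * g
        rho = (f(x) - f(x + s)) / pred   # actual-vs-predicted ratio
        if rho >= 0.1:
            x, delta = x + s, 2.0 * delta
        else:
            delta *= 0.5
    return x
```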

Robust Interior Point Method for Quantum Key Distribution Rate Computation

While the security proof method for quantum key distribution (QKD) based on the numerical key rate calculation problem is powerful in principle, the practicality of the method is limited by computational resources and the efficiency of the underlying algorithm for convex optimization. We derive a stable reformulation of the convex nonlinear semidefinite programming (SDP) model … Read more
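
To show only the flavor of the model class (the actual key-rate objective is a quantum relative entropy, and the paper's contribution is a stable reformulation of it, neither reproduced here), a toy SDP over density matrices written with cvxpy:

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the key-rate model class: optimize over density
# matrices rho (PSD, unit trace) subject to linear expectation
# constraints Tr(Gamma @ rho) = gamma. A linear objective replaces the
# quantum relative entropy so that a stock SDP solver applies.
n = 4
C = np.diag([1.0, 2.0, 3.0, 4.0])        # placeholder objective matrix
Gamma = np.diag([1.0, 1.0, 0.0, 0.0])    # placeholder observable

rho = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ rho)),
                  [cp.trace(rho) == 1, cp.trace(Gamma @ rho) == 0.5])
prob.solve()
print(prob.value)                        # optimal value of the toy model
```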

Algorithms for Difference-of-Convex (DC) Programs Based on Difference-of-Moreau-Envelopes Smoothing

In this paper we consider minimization of a difference-of-convex (DC) function with and without linear constraints. We first study a smooth approximation of a generic DC function, termed difference-of-Moreau-envelopes (DME) smoothing, where both components of the DC function are replaced by their respective Moreau envelopes. The resulting smooth approximation is shown to be Lipschitz differentiable, … Read more
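
A small sketch of the DME idea for the concrete choice $f = \|\cdot\|_1 - \|\cdot\|_2$ (my example, not one from the paper): replacing both convex components by their Moreau envelopes yields a smooth surrogate whose gradient is a difference of proximal points.

```python
import numpy as np

def prox_l1(x, mu):              # prox of mu*||.||_1: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def prox_l2(x, mu):              # prox of mu*||.||_2: block soft-thresholding
    n = np.linalg.norm(x)
    return x * (1.0 - mu / n) if n > mu else np.zeros_like(x)

def dme_grad(x, mu):
    """Gradient of the DME smoothing of f(x) = ||x||_1 - ||x||_2.
    Since grad env_mu(g)(x) = (x - prox_{mu*g}(x)) / mu for each convex
    component g, the difference telescopes to a difference of prox points."""
    return (prox_l2(x, mu) - prox_l1(x, mu)) / mu

# Plain gradient descent on the smooth surrogate; each envelope gradient
# is (1/mu)-Lipschitz, so a step of mu/2 is safe for the difference.
x, mu = np.array([2.0, -0.5, 0.1]), 0.5
for _ in range(200):
    x = x - 0.5 * mu * dme_grad(x, mu)
```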

Penetration depth between two convex polyhedra: An efficient global optimization approach

During the detailed design phase of an aerospace program, one of the most important consistency checks is to ensure that no two distinct objects occupy the same physical space. Since exact geometrical modeling is usually intractable, geometry models are discretized, which often introduces small interferences not present in the fully detailed model. In this paper, … Read more
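
Penetration depth itself is a nonconvex problem, which is why a global approach is needed; the sketch below covers only the convex preliminary, an LP feasibility test for whether two halfspace-described polyhedra interfere at all (function and data names are mine):

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_interfere(A1, b1, A2, b2):
    """Feasibility LP: any x with A1 @ x <= b1 and A2 @ x <= b2 is a
    witness that the two convex polyhedra occupy common space."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n)
    return res.status == 0       # 0 = solved, i.e. a common point exists

# Two axis-aligned boxes, the second shifted so they overlap on x1 in [0.5, 1].
A = np.vstack([np.eye(2), -np.eye(2)])
print(polyhedra_interfere(A, np.ones(4), A, np.array([1.5, 1.0, -0.5, 0.0])))
```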

A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization

We develop a trust-region method for minimizing the sum of a smooth term f and a nonsmooth term h, both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of f+h in a trust region. The model coincides with f+h in value and subdifferential at the center. We establish global … Read more
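
A compressed sketch of one such iteration under simplifying assumptions of mine: a Euclidean trust region, a proximal-gradient inner solver for the model, and a crude projection to stay inside the region. The paper's model and acceptance rules are more careful.

```python
import numpy as np

def prox_tr_step(f, grad, h, prox_h, x, B, delta, mu=1e-2, inner=50):
    """One simplified iteration for min f(x) + h(x): approximately minimize
    the model m(s) = g's + 0.5 s'Bs + h(x+s) over ||s|| <= delta by
    proximal-gradient steps, then run the usual trust-region ratio test."""
    g = grad(x)
    s = np.zeros_like(x)
    for _ in range(inner):
        # the prox of s -> h(x+s) is the prox of h, shifted by x
        s = prox_h(x + s - mu * (g + B @ s), mu) - x
        norm = np.linalg.norm(s)
        if norm > delta:                  # crude projection into the region
            s *= delta / norm
    pred = -(g @ s + 0.5 * s @ (B @ s)) + h(x) - h(x + s)  # model decrease
    ared = f(x) + h(x) - f(x + s) - h(x + s)               # actual decrease
    if pred > 0 and ared / pred >= 0.25:
        return x + s, 2.0 * delta         # accept and enlarge the region
    return x, 0.5 * delta                 # reject and shrink
```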

Scalable adaptive cubic regularization methods

Adaptive cubic regularization (ARC) methods for unconstrained optimization compute steps from linear systems involving a shifted Hessian in the spirit of the Levenberg-Marquardt and trust-region methods. The standard approach consists in performing an iterative search for the shift akin to solving the secular equation in trust-region methods. Such a search requires computing the Cholesky factorization of … Read more
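
The shift search mentioned above can be pictured as scalar root-finding on a secular-type equation; the bisection sketch below is my simplification, with dense solves standing in for the factorizations a scalable method avoids:

```python
import numpy as np

def arc_step(H, g, sigma, tol=1e-10, max_iter=200):
    """ARC subproblem via its secular equation: the step solves
    (H + lam*I) s = -g with lam = sigma * ||s||, so we root-find on
    phi(lam) = lam - sigma * ||s(lam)||, which is increasing in lam.
    Dense eigen/solve calls stand in for the Cholesky-based search the
    paper seeks to avoid; this only illustrates the equation."""
    n = H.shape[0]
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12  # keep H+lam*I PD
    solve = lambda lam: np.linalg.solve(H + lam * np.eye(n), -g)
    phi = lambda lam: lam - sigma * np.linalg.norm(solve(lam))
    lam_hi = lam_lo + 1.0
    while phi(lam_hi) < 0:            # grow the bracket until phi >= 0
        lam_hi *= 2.0
    for _ in range(max_iter):         # plain bisection on [lam_lo, lam_hi]
        mid = 0.5 * (lam_lo + lam_hi)
        if phi(mid) < 0:
            lam_lo = mid
        else:
            lam_hi = mid
        if lam_hi - lam_lo < tol:
            break
    return solve(lam_hi)
```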

Fast cluster detection in networks by first-order optimization

Cluster detection plays a fundamental role in the analysis of data. In this paper, we focus on the use of s-defective clique models for network-based cluster detection and propose a nonlinear optimization approach that efficiently handles those models in practice. In particular, we introduce an equivalent continuous formulation for the problem under analysis, and we … Read more
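
For flavor, a Frank-Wolfe heuristic on the classical Motzkin-Straus relaxation (maximize $x^\top A x$ over the simplex), a simpler relative of the continuous formulation the paper develops for s-defective cliques; the paper's objective and algorithm differ:

```python
import numpy as np

def frank_wolfe_motzkin_straus(A, iters=500):
    """First-order heuristic for max x'Ax over the simplex {x >= 0, sum x = 1}.
    The linear maximization oracle over the simplex is just a vertex, so
    each iteration moves toward the coordinate with the largest gradient."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)             # start at the barycenter
    for k in range(iters):
        g = 2.0 * A @ x                 # gradient of x'Ax
        i = int(np.argmax(g))           # best simplex vertex e_i
        gamma = 2.0 / (k + 2.0)
        x *= 1.0 - gamma
        x[i] += gamma
    return x                            # large entries suggest clique members

# A 4-cycle plus the chord (0, 2): the maximum cliques are the triangles
# {0, 1, 2} and {0, 2, 3}.
A = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]], float)
print(frank_wolfe_motzkin_straus(A).round(3))
```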

Complexity of a Projected Newton-CG Method for Optimization with Bounds

This paper describes a method for solving smooth nonconvex minimization problems subject to bound constraints with good worst-case complexity and practical performance. The method contains elements of two existing methods: the classical gradient projection approach for bound-constrained optimization and a recently proposed Newton-conjugate gradient algorithm for unconstrained nonconvex optimization. Using a new definition of approximate … Read more
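
One simplified iteration combining the two named ingredients (a sketch of mine assuming positive curvature on the free variables; the actual method also handles negative curvature and uses a specific notion of approximate optimality):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def projected_newton_cg_step(grad, hess_vec, x, lo, hi, alpha=1.0):
    """One step blending gradient projection onto the box [lo, hi] with a
    Newton-CG step restricted to the free (non-active) variables."""
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(x - alpha * grad(x))            # gradient-projection phase
    free = (x > lo) & (x < hi)               # variables off their bounds
    if free.any():
        g = grad(x)
        m = int(free.sum())
        def hv(v):                           # reduced Hessian-vector product
            w = np.zeros_like(x)
            w[free] = v
            return hess_vec(x, w)[free]
        H = LinearOperator((m, m), matvec=hv)
        s_free, _ = cg(H, -g[free], maxiter=50)
        s = np.zeros_like(x)
        s[free] = s_free
        x = proj(x + s)                      # safeguard the Newton step
    return x
```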

An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems

We study unconstrained optimization problems with a nonsmooth and convex objective function in the form of a mathematical expectation. The proposed method approximates the expected objective function with a sample-average function whose sample size is chosen adaptively via Inexact Restoration. The algorithm uses line search … Read more
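
A toy stand-in for the variable-accuracy idea (the rule below grows the sample when the averaged subgradient is small relative to the $O(1/\sqrt{n})$ sampling-error scale; the paper's Inexact Restoration rule is different and comes with guarantees):

```python
import numpy as np

def adaptive_saa(subgrad_sample, x0, n0=8, n_max=10_000, iters=200, step=0.1):
    """Subgradient steps on a sample-average approximation, enlarging the
    sample whenever the averaged subgradient falls below the sampling-error
    scale, i.e. whenever sampling noise could dominate the step."""
    rng = np.random.default_rng(0)
    x, n = np.asarray(x0, dtype=float), n0
    for _ in range(iters):
        g = np.mean([subgrad_sample(x, rng) for _ in range(n)], axis=0)
        if np.linalg.norm(g) < 1.0 / np.sqrt(n):   # noise-dominated: refine
            n = min(2 * n, n_max)
        x = x - step * g
    return x

# Example: minimize E_w |x - w| with w ~ N(0, 1); a subgradient is sign(x - w).
x_opt = adaptive_saa(lambda x, rng: np.sign(x - rng.standard_normal()), [2.0])
```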