An Adaptive Trust-Region Method Without Function Evaluations

In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate …
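The abstract is truncated here, so the following is only a minimal Python sketch of the general idea: a trust-region loop that never evaluates $f$, with a Cauchy-like step and a radius update driven by a gradient-based agreement measure. The `agreement` test, the acceptance threshold, and the expansion/shrink factors are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def tr_gradient_only(grad, x0, delta0=1.0, tol=1e-6, max_iter=500):
    """Illustrative trust-region loop that never evaluates f itself.

    The radius update compares the new gradient with the direction just
    taken (a hypothetical rule; the paper's update is not reproduced).
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        step = -delta * g / np.linalg.norm(g)   # Cauchy-like step on the ball
        g_new = grad(x + step)
        # Gradient-based agreement measure (assumption, for illustration):
        # is the new gradient still pointing "uphill" against the step?
        agreement = -(g_new @ step) / (np.linalg.norm(g) * np.linalg.norm(step))
        if agreement > 0.1:         # still descending: accept, maybe expand
            x, g = x + step, g_new
            delta = min(2.0 * delta, 1e3)
        else:                       # step overshot: shrink the radius
            delta *= 0.5
    return x

# toy usage: minimize the smooth quadratic f(x) = (x - 3)^2
x_star = tr_gradient_only(lambda x: 2 * (x - 3.0), np.array([0.0]))
```

The only point of the sketch is that both acceptance and radius control can be decided from gradient information alone.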

On a Computationally Ill-Behaved Bilevel Problem with a Continuous and Nonconvex Lower Level

It is well known that bilevel optimization problems are hard to solve both in theory and practice. In this paper, we highlight a further computational difficulty when it comes to solving bilevel problems with continuous but nonconvex lower levels. Even if the lower-level problem is solved to $\varepsilon$-feasibility regarding its nonlinear constraints for an arbitrarily …
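The excerpt cuts off before the paper's construction, but the underlying phenomenon can be shown with a generic toy (not the paper's example): when the lower level is nonconvex, an $\varepsilon$-accurate oracle may legitimately return points from different global-minimizer branches, and the induced upper-level discrepancy does not shrink with $\varepsilon$.

```python
import numpy as np

# Toy illustration (not the paper's construction): a nonconvex lower level
#   min_y (y**2 - 1)**2   with global minimizers y = -1 and y = +1,
# and an upper-level objective F(y) = -y, whose optimistic value needs y = +1.
lower = lambda y: (y**2 - 1.0) ** 2
upper = lambda y: -y

for eps in [1e-2, 1e-6, 1e-12]:
    y_plus, y_minus = 1.0, -1.0
    # Both branches are eps-feasible for the relaxed condition lower(y) <= eps,
    # so an eps-accurate lower-level oracle may return either one:
    assert lower(y_minus) <= eps and lower(y_plus) <= eps
    # The gap between the two admissible upper-level values is 2,
    # no matter how small eps is:
    print(eps, abs(upper(y_plus) - upper(y_minus)))   # -> 2.0 for every eps
```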

Adaptive Nonlinear Optimization of District Heating Networks Based on Model and Discretization Catalogs

We propose an adaptive optimization algorithm for operating district heating networks in a stationary regime. The behavior of hot water flow in the pipe network is modeled using the incompressible Euler equations and a suitably chosen energy equation. By applying different simplifications to these equations, we derive a catalog of models. Our algorithm is based …
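Details of the catalog and its error control are not in this excerpt; the sketch below only illustrates the adaptive pattern, with dummy pressure-loss formulas of increasing fidelity standing in for the simplified Euler/energy equations. The names and the comparison-based error estimate are hypothetical.

```python
# Minimal runnable sketch of the catalog idea with toy models: start each
# pipe at the coarsest model and refine only where the estimated error
# (here: disagreement with the next finer catalog entry) exceeds a tolerance.
catalog = [                                    # coarse -> fine toy models
    lambda q: 0.0,                             # ignore friction entirely
    lambda q: 0.02 * q,                        # linearized friction
    lambda q: 0.025 * q * abs(q),              # quadratic friction law
]

def adaptive_solve(flows, tol=1e-3):
    level = {p: 0 for p in flows}              # start every pipe at level 0
    while True:
        # "Solve" with the current model choice (here: evaluate the models).
        loss = {p: catalog[level[p]](q) for p, q in flows.items()}
        # Error estimate: compare against the next finer catalog entry.
        bad = [p for p in flows
               if level[p] + 1 < len(catalog)
               and abs(catalog[level[p] + 1](flows[p]) - loss[p]) > tol]
        if not bad:
            return loss, level                 # accurate enough everywhere
        for p in bad:                          # refine only where needed
            level[p] += 1

loss, level = adaptive_solve({"pipe_A": 0.01, "pipe_B": 5.0})
print(level)   # {'pipe_A': 0, 'pipe_B': 2}: only pipe_B needs a fine model
```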

A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression

Nonlinear conjugate gradient methods are among the most popular techniques for solving continuous optimization problems. Although these schemes have long been studied from a global convergence standpoint, their worst-case complexity properties have yet to be fully understood, especially in the nonconvex setting. In particular, it is unclear whether nonlinear conjugate gradient methods possess better guarantees than …
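For orientation, here is a textbook Polak-Ribière+ nonlinear conjugate gradient baseline with Armijo backtracking; the paper's modified scheme and its complexity guarantees are not reproduced here.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Standard PR+ nonlinear CG with Armijo backtracking (a textbook
    baseline, not the paper's method)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        t, fx = 1.0, f(x)                   # Armijo backtracking along d
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ rule
        d = -g_new + beta * d
        if g_new @ d >= 0:                  # safeguard: restart with -g
            d = -g_new
        x, g = x_new, g_new
    return x

# usage: the nonconvex Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))
```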

A Limited Memory Subspace Minimization Conjugate Gradient Algorithm for Unconstrained Optimization

Subspace minimization conjugate gradient (SMCG) methods are a class of iterative methods with high potential for unconstrained optimization. Orthogonality of the gradients is an important property of the linear conjugate gradient method; in practice, however, this orthogonality is often lost, which usually causes slow convergence of the conjugate gradient method. …
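To make the subspace-minimization idea concrete, here is a minimal two-dimensional version: the search direction minimizes a scalar-curvature quadratic model over span{-g_k, s_{k-1}}. The BB-type curvature estimate and the two-dimensional subspace are illustrative simplifications; the paper's limited-memory subspace and its orthogonality treatment are not reproduced.

```python
import numpy as np

def smcg_direction(g, s_prev, y_prev):
    """One illustrative subspace-minimization CG step: minimize a quadratic
    model of f over span{-g, s_prev}, with curvature estimated from the
    secant pair (s_prev, y_prev)."""
    # Barzilai-Borwein scalar curvature estimate: Hessian ~ sigma * I
    sigma = (y_prev @ s_prev) / (s_prev @ s_prev)
    V = np.column_stack([-g, s_prev])            # subspace basis
    # Reduced 2x2 model: min_u 0.5 * u^T (sigma V^T V) u + (V^T g)^T u
    A = sigma * (V.T @ V)
    b = V.T @ g
    u = np.linalg.solve(A, -b)
    return V @ u                                 # search direction d_k

# toy usage on f(x) = 0.5 * x^T diag(1, 10) x
g = np.array([1.0, 10.0])
s_prev = np.array([0.1, -0.2])
y_prev = np.array([0.1, -2.0])                   # y = Hessian @ s for this f
print(smcg_direction(g, s_prev, y_prev))
```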

Two efficient gradient methods with approximately optimal stepsizes based on regularization models for unconstrained optimization

It is widely accepted that the stepsize is of great significance to gradient methods. We propose two efficient gradient methods with approximately optimal stepsizes, based mainly on regularization models, for unconstrained optimization. More precisely, if the objective function is not close to a quadratic function on the line segment between the current and latest iterates, …
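The excerpt truncates before the two methods are stated; the sketch below only illustrates the general pattern, with an assumed quadratic-closeness test, a BB stepsize on the quadratic branch, and a cubic-regularized model otherwise. The constants and the test itself are hypothetical.

```python
import numpy as np

def approx_optimal_stepsize(g, s_prev, y_prev, f_prev, f_curr, sigma=1.0):
    """Illustrative 'approximately optimal' stepsize along d = -g.
    The closeness test and all constants are assumptions, not the paper's."""
    gn = np.linalg.norm(g)
    B = (s_prev @ y_prev) / (s_prev @ s_prev)        # BB-type curvature
    # Test whether f is close to a quadratic on the last segment:
    # compare the actual decrease with the secant-based prediction.
    quad_ratio = 2.0 * (f_prev - f_curr + g @ s_prev) / (s_prev @ y_prev)
    if abs(quad_ratio - 1.0) < 0.1 and B > 0:
        return 1.0 / B                               # BB stepsize
    # Otherwise minimize the cubic-regularized model along -g:
    #   m(a) = -a*gn^2 + 0.5*B*a^2*gn^2 + (sigma/6)*a^3*gn^3,
    # whose stationarity condition (sigma*gn/2)*a^2 + B*a - 1 = 0 gives
    return (-B + np.sqrt(B * B + 2.0 * sigma * gn)) / (sigma * gn)

# toy usage
g = np.array([0.5, -1.0])
s = np.array([0.1, 0.2])
y = np.array([0.2, 0.5])
print(approx_optimal_stepsize(g, s, y, f_prev=1.0, f_curr=0.9))
```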

Stable Recovery of Sparse Signals With Non-convex Weighted $r$-Norm Minus $1$-Norm

Given the measurement matrix $A$ and the observation signal $y$, the central purpose of compressed sensing is to find the sparsest solution of the underdetermined linear system $y=Ax+z$, where $x$ is the $s$-sparse signal to be recovered and $z$ is the noise vector. Zhou and Yu (2019) recently proposed a novel …
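The definition of the metric is cut off above; assuming it has the form $\|x\|_{w,r} - \|x\|_1$ with $0 < r < 1$ (so that the weighted $r$-quasi-norm dominates the $1$-norm), the difference is nonnegative and, for unit weights, vanishes exactly on vectors with at most one nonzero entry, which is what makes it sparsity-promoting:

```python
import numpy as np

# Assumed form of the nonconvex metric (the excerpt truncates before the
# definition): ||x||_{w,r} - ||x||_1 with positive weights w and 0 < r < 1.
def weighted_r_minus_1(x, w, r):
    return np.sum(w * np.abs(x) ** r) ** (1.0 / r) - np.sum(np.abs(x))

w = np.ones(4)
x_sparse = np.array([3.0, 0.0, 0.0, 0.0])
x_dense = np.array([1.5, 1.5, 1.5, 1.5])
print(weighted_r_minus_1(x_sparse, w, r=0.5))   # 0.0  -> sparse is favored
print(weighted_r_minus_1(x_dense, w, r=0.5))    # 18.0 -> dense is penalized
```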

A barrier Lagrangian dual method for multi-stage stochastic convex semidefinite optimization

In this paper, we present a polynomial-time barrier algorithm for solving multi-stage stochastic convex semidefinite optimization, based on a Lagrangian dual method that relaxes the nonanticipativity constraints. We show that the barrier Lagrangian dual functions for our setting form self-concordant families with respect to the barrier parameters. We also use the barrier function method to improve …
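Schematically, and with notation assumed here rather than taken from the paper, relaxing the nonanticipativity constraints $X_i = Z$ between scenario copies $X_i$ and the implementable decision $Z$ with multipliers $\lambda_i$, and adding a log-det barrier with parameter $\mu > 0$ for the semidefinite cone, yields a dual function of the form:

```latex
% Schematic barrier Lagrangian dual (notation assumed, not the paper's):
L_\mu(\lambda) = \min_{X_i \succ 0,\; Z}\;
    \sum_i p_i \bigl( f_i(X_i) - \mu \log\det X_i \bigr)
    + \sum_i \langle \lambda_i,\, X_i - Z \rangle ,
\qquad \text{finite only if } \textstyle\sum_i \lambda_i = 0 ,
```

where $p_i$ are scenario probabilities; the dual is then maximized over multipliers satisfying the zero-sum condition while $\mu$ is driven to zero.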

A minibatch stochastic Quasi-Newton method adapted for nonconvex deep learning problems

In this study, we develop a limited memory nonconvex Quasi-Newton (QN) method, tailored to deep learning (DL) applications. Since the stochastic nature of (sampled) function information in minibatch processing can affect the performance of QN methods, three strategies are utilized to overcome this issue. These involve a novel progressive trust-region radius update (suitable for stochastic …
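For reference, the quasi-Newton core such methods build on is the standard L-BFGS two-loop recursion applied to a minibatch gradient; the paper's progressive trust-region radius update and its other stochastic safeguards are not reproduced in this sketch.

```python
import numpy as np

def lbfgs_direction(g, pairs):
    """Standard L-BFGS two-loop recursion on a minibatch gradient g.
    `pairs` is a list of recent (s, y) curvature pairs, oldest first.
    In nonconvex/stochastic settings, pairs with y @ s <= 0 are typically
    skipped or damped before they enter this list."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):              # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):   # oldest pair first
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                 # quasi-Newton descent direction

# usage with a toy minibatch gradient and one curvature pair
g = np.array([1.0, 3.0])
pairs = [(np.array([0.1, 0.1]), np.array([0.1, 0.3]))]
print(lbfgs_direction(g, pairs))
```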

Inexact Restoration for Minimization with Inexact Evaluation both of the Objective Function and the Constraints

In a recent paper, an Inexact Restoration method for solving continuous constrained optimization problems was analyzed from the point of view of worst-case functional complexity and convergence. In a different line of research, the Inexact Restoration methodology was employed to handle minimization problems with inexact evaluations and simple constraints. These two methodologies are …
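As a rough illustration of the Inexact Restoration pattern (a restoration phase for feasibility, an optimization phase for the objective, and a merit test, here with a deliberately noisy objective oracle standing in for inexact evaluation), consider the toy below. The merit weight, the noise model, and the acceptance tolerance are all assumptions, not the paper's scheme.

```python
import numpy as np

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  (solution: (0.5, 0.5)).
rng = np.random.default_rng(0)
f_inexact = lambda x, eps: x @ x + eps * rng.uniform(-1, 1)  # noisy oracle
infeas = lambda x: abs(x[0] + x[1] - 1.0)                    # infeasibility

def ir_step(x, eps, step=0.2, theta=0.8):
    # Restoration phase: reduce infeasibility (here: exact projection
    # onto the affine constraint).
    r = x - (x[0] + x[1] - 1.0) / 2.0 * np.ones(2)
    # Optimization phase: descend along the tangent space of the constraint.
    g = 2.0 * r                                   # objective gradient at r
    t = np.array([1.0, -1.0]) / np.sqrt(2.0)      # tangent direction
    x_trial = r - step * (g @ t) * t
    # Merit test mixing the (inexact) objective and infeasibility; the
    # weight theta and the tolerance eps are illustrative choices.
    merit = lambda z: theta * f_inexact(z, eps) + (1 - theta) * infeas(z)
    return x_trial if merit(x_trial) <= merit(x) + eps else r

x = np.array([2.0, -1.0])
for _ in range(20):
    x = ir_step(x, eps=1e-3)
print(x)   # approaches the solution (0.5, 0.5)
```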