A low-rank augmented Lagrangian method for large-scale semidefinite programming based on a hybrid convex-nonconvex approach

This paper introduces HALLaR, a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. HALLaR is an inexact augmented Lagrangian (AL) method where the AL subproblems are solved by a novel hybrid low-rank (HLR) method. The recipe behind HLR is based on two key ingredients: 1) an adaptive inexact proximal point …
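As context (and not HALLaR's exact formulation, which the paper develops), a bounded-domain SDP and the augmented Lagrangian subproblem attached to it can be written as

\[
\min_{X}\;\langle C,X\rangle \quad\text{s.t.}\quad \mathcal{A}(X)=b,\;\; X\succeq 0,\;\; \operatorname{tr}(X)\le \tau,
\]

and, for a multiplier \(p\) and penalty \(\beta>0\),

\[
\min_{X\succeq 0,\;\operatorname{tr}(X)\le\tau}\;\langle C,X\rangle+\langle p,\mathcal{A}(X)-b\rangle+\frac{\beta}{2}\,\|\mathcal{A}(X)-b\|^{2}.
\]

A low-rank solver for such subproblems restricts the search to factorizations \(X=YY^{\top}\) with \(Y\in\mathbb{R}^{n\times r}\) and \(r\ll n\).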

Noise-Tolerant Optimization Methods for the Solution of a Robust Design Problem

The development of nonlinear optimization algorithms capable of performing reliably in the presence of noise has garnered considerable attention lately. This paper advocates for strategies to create noise-tolerant nonlinear optimization algorithms by adapting classical deterministic methods. These adaptations follow certain design guidelines described here, which make use of estimates of the noise level in the …
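One common design guideline of this kind relaxes the Armijo sufficient-decrease test by a multiple of an estimated noise level \(\epsilon_f\) in the function values. A minimal sketch assuming that relaxation style (not the paper's exact algorithm):

```python
import numpy as np

def noise_tolerant_armijo(f, x, d, g, eps_f, c1=1e-4, alpha0=1.0,
                          shrink=0.5, max_backtracks=30):
    """Backtracking line search whose Armijo test is relaxed by 2*eps_f,
    an estimate of the noise level in f. Illustrative sketch only."""
    fx = f(x)
    alpha = alpha0
    for _ in range(max_backtracks):
        # The 2*eps_f slack keeps the search from stalling when function
        # values are only accurate to within eps_f.
        if f(x + alpha * d) <= fx + c1 * alpha * g.dot(d) + 2 * eps_f:
            break
        alpha *= shrink
    return alpha
```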

Using generalized simplex methods to approximate derivatives

This paper presents two methods for approximating a proper subset of the entries of a Hessian using only function evaluations. These approximations are obtained using the techniques called generalized simplex Hessian and generalized centered simplex Hessian. We show how to choose the matrices of directions involved in the computation of these two techniques depending on …
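When the matrices of directions are taken to be coordinate directions, the generalized centered simplex Hessian reduces to classical centered second differences. A minimal sketch of that special case, approximating only a chosen subset of entries:

```python
import numpy as np

def centered_simplex_hessian(f, x, h=1e-4, entries=None):
    """Approximate selected Hessian entries of f at x with centered second
    differences, using only function values. With coordinate directions this
    is a classical special case of the generalized centered simplex Hessian
    (illustrative sketch)."""
    n = x.size
    if entries is None:
        entries = [(i, j) for i in range(n) for j in range(i, n)]
    H = {}
    for i, j in entries:
        ei, ej = np.zeros(n), np.zeros(n)
        ei[i], ej[j] = h, h
        H[(i, j)] = (f(x + ei + ej) - f(x + ei - ej)
                     - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H
```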

The cosine measure relative to a subspace

The cosine measure was introduced in 2003 to quantify the richness of a finite positive spanning set of directions in the context of derivative-free directional methods. A positive spanning set is a set of vectors whose nonnegative linear combinations span the whole space. The present work extends the definition of the cosine measure. In particular, the …
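For reference, the cosine measure of a set of directions \(D\) is \(\mathrm{cm}(D)=\min_{\|u\|=1}\max_{d\in D} u^{\top}d/\|d\|\), and it is positive exactly when \(D\) positively spans the space. A rough Monte Carlo sketch (the sampled minimum only upper-bounds the true value):

```python
import numpy as np

def cosine_measure_estimate(D, n_samples=100_000, seed=0):
    """Monte Carlo estimate of cm(D) = min_{||u||=1} max_{d in D} u.d/||d||.
    Sampling explores only part of the unit sphere, so this returns an
    upper bound on the true cosine measure (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)  # normalize directions
    U = rng.standard_normal((n_samples, D.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)      # random unit vectors
    return (U @ Dn.T).max(axis=1).min()

# {e1, e2, -e1-e2} positively spans R^2, so the estimate should be positive;
# a negative value would indicate the set is not positively spanning.
D = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(cosine_measure_estimate(D))
```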

Quadratic Optimization Through the Lens of Adjustable Robust Optimization

Quadratic optimization (QO) has been studied extensively in the literature due to its applicability to many practical problems. Despite this practical relevance, QO problems are known to be NP-hard in general, so researchers have developed many approximation methods to find good solutions. In this paper, we take a different route and analyze QO problems using robust optimization techniques. …
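For concreteness, a generic (possibly nonconvex) QO instance of the kind discussed is

\[
\min_{x\in\mathbb{R}^{n}}\; x^{\top}Qx+c^{\top}x \quad\text{s.t.}\quad Ax\le b,
\]

which is NP-hard in general when \(Q\) is not positive semidefinite.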

A proof for multilinear error bounds

We derive error bounds for multilinear terms in $[0,1]^n$ using a proof methodology based on the polyhedral representation of the convex hull. We extend the result to multilinear terms in $[\boldsymbol{L},\boldsymbol{0}] \times [\boldsymbol{0},\boldsymbol{U}]\subset\mathbb{R}^n$.
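A standard worked instance (not taken from the paper): for \(n=2\), the convex hull of \(\{(x,y,xy):(x,y)\in[0,1]^{2}\}\) is the McCormick polyhedron, the envelopes of \(xy\) are \(\max\{0,x+y-1\}\) (convex) and \(\min\{x,y\}\) (concave), and the worst-case envelope error is \(1/4\), attained at \(x=y=1/2\).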

Block Majorization Minimization with Extrapolation and Application to $\beta$-NMF

We propose a Block Majorization Minimization method with Extrapolation (BMMe) for solving a class of multi-convex optimization problems. The extrapolation parameters of BMMe are updated using a novel adaptive update rule. By showing that block majorization minimization can be reformulated as a block mirror descent method, with the Bregman divergence adaptively updated at each iteration, …
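As a rough illustration of the ingredients (Frobenius-norm NMF rather than general \(\beta\)-divergences, and a fixed extrapolation parameter where BMMe's is adaptive), a multiplicative-update sketch with extrapolation might look like:

```python
import numpy as np

def nmf_mu_extrapolated(V, r, n_iter=200, gamma=0.5, eps=1e-10, seed=0):
    """Frobenius-norm NMF via multiplicative (majorization-minimization)
    block updates, each taken at an extrapolated point. Illustrative
    sketch only; not the BMMe algorithm itself."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    W_old, H_old = W.copy(), H.copy()
    for _ in range(n_iter):
        # Extrapolate each block, projecting back to the nonnegative orthant.
        We = np.maximum(W + gamma * (W - W_old), eps)
        He = np.maximum(H + gamma * (H - H_old), eps)
        W_old, H_old = W, H
        # Multiplicative MM updates evaluated at the extrapolated points.
        H = He * (We.T @ V) / np.maximum(We.T @ We @ He, eps)
        W = We * (V @ H.T) / np.maximum(We @ (H @ H.T), eps)
    return W, H
```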

An Inexact Restoration Direct Multisearch Filter Approach to Multiobjective Constrained Derivative-free Optimization

Direct Multisearch (DMS) is a well-established class of methods for multiobjective derivative-free optimization, where constraints are addressed by an extreme barrier approach, only evaluating feasible points. In this work, we propose a filter approach, combined with an inexact feasibility restoration step, to address constraints in the DMS framework. The filter approach treats feasibility as an …
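The filter mechanism itself can be sketched independently of DMS: a trial point, summarized by its constraint violation \(h\) and objective values \(f_1,\dots,f_m\), is accepted if no stored entry dominates it. A minimal sketch (names and structure are illustrative, not the paper's algorithm):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every component, better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

class Filter:
    """Filter over (violation h, objectives f1..fm) tuples."""
    def __init__(self):
        self.entries = []

    def acceptable(self, h, f):
        # A trial point is acceptable if no stored entry dominates it.
        cand = (h, *f)
        return not any(dominates(e, cand) for e in self.entries)

    def add(self, h, f):
        # Drop entries dominated by the new point, then store it.
        cand = (h, *f)
        self.entries = [e for e in self.entries if not dominates(cand, e)]
        self.entries.append(cand)
```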

Alternate Training of Shared and Task-Specific Parameters for Multi-Task Neural Networks

This paper introduces novel alternate training procedures for hard-parameter sharing Multi-Task Neural Networks (MTNNs). Traditional MTNN training faces challenges in managing conflicting loss gradients, often yielding sub-optimal performance. The proposed alternate training method updates shared and task-specific weights alternately, exploiting the multi-head architecture of the model. This approach reduces computational costs, enhances training regularization, and …
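A rough sketch of one alternate-training round in PyTorch, assuming a model with a shared trunk `model.shared`, per-task heads `model.heads`, a forward signature `model(x, task=t)`, and optimizers that partition the two parameter groups (all of these names are hypothetical, not the paper's code):

```python
import torch

def alternate_round(model, task_batches, loss_fns, opt_heads, opt_shared):
    """One round of alternate training for a hard-parameter-sharing MTNN:
    update task-specific heads with the shared trunk frozen, then update
    the shared trunk with the heads frozen. Illustrative sketch."""

    def total_loss():
        return sum(loss_fn(model(x, task=t), y)
                   for t, ((x, y), loss_fn)
                   in enumerate(zip(task_batches, loss_fns)))

    # Phase 1: task-specific parameters only.
    for p in model.shared.parameters():
        p.requires_grad_(False)
    opt_heads.zero_grad()
    total_loss().backward()
    opt_heads.step()

    # Phase 2: shared parameters only.
    for p in model.shared.parameters():
        p.requires_grad_(True)
    for p in model.heads.parameters():
        p.requires_grad_(False)
    opt_shared.zero_grad()
    total_loss().backward()
    opt_shared.step()
    for p in model.heads.parameters():
        p.requires_grad_(True)
```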

A relaxed quasinormality condition and the boundedness of dual augmented Lagrangian sequences

Global convergence of augmented Lagrangian methods to a first-order stationary point is well known to hold under considerably weak constraint qualifications. In particular, several constant rank-type conditions have been introduced for this purpose, which have turned out to be relevant beyond this scope as well. In this paper we show that, in fact, under these conditions the sequence …
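For reference (a standard Powell-Hestenes-Rockafellar form, not necessarily the paper's specific variant): for \(\min f(x)\) subject to \(g(x)\le 0\), the augmented Lagrangian and the dual sequence whose boundedness is at stake are

\[
L_{\rho}(x,\lambda)=f(x)+\frac{1}{2\rho}\Big(\big\|\max\{0,\lambda+\rho\,g(x)\}\big\|^{2}-\|\lambda\|^{2}\Big),
\qquad
\lambda^{k+1}=\max\{0,\lambda^{k}+\rho_{k}\,g(x^{k})\}.
\]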