Polyconvex double well functions

We investigate polyconvexity of the double well function $f(X) := |X-X_1|^2|X-X_2|^2$ for given matrices $X_1, X_2 \in \mathbb{R}^{n \times n}$. Such functions are fundamental in the modeling of phase transitions in materials, but their non-convex nature presents challenges for the analysis of variational problems. We prove that $f$ is polyconvex if and only if the … Read more
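
As a quick numerical illustration of the double well landscape, the sketch below evaluates $f$ for arbitrary test matrices; the norm is taken to be the Frobenius norm, which is an assumption since the excerpt does not specify it.

```python
import numpy as np

def double_well(X, X1, X2):
    """f(X) = |X - X1|^2 * |X - X2|^2 with |.| the Frobenius norm
    (an assumption; the excerpt does not specify the norm)."""
    return (np.linalg.norm(X - X1, "fro") ** 2
            * np.linalg.norm(X - X2, "fro") ** 2)

# f vanishes exactly at the two wells X1 and X2 and is positive elsewhere.
X1, X2 = np.eye(2), -np.eye(2)
print(double_well(X1, X1, X2),            # 0.0
      double_well(X2, X1, X2),            # 0.0
      double_well(np.zeros((2, 2)), X1, X2))  # 4.0
```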

On the boundedness of multipliers in augmented Lagrangian methods for mathematical programs with complementarity constraints

In this paper, we present a theoretical analysis of augmented Lagrangian (AL) methods applied to mathematical programs with complementarity constraints (MPCCs). Our focus is on a variant that reformulates the complementarity constraints using slack variables, where these constraints are handled directly in the subproblems rather than being penalized. We introduce specialized constraint qualifications (CQs) of … Read more
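
The excerpt does not spell out the reformulation, but a common slack-variable rewrite of a complementarity block $0 \le G(x) \perp H(x) \ge 0$ has the following shape; the slacks $s, t \in \mathbb{R}^m$ and the pair count $m$ are introduced here only for illustration, and the paper's exact variant may differ.

```latex
% Slack-variable rewrite of 0 <= G(x) \perp H(x) >= 0 (illustrative form):
\begin{align*}
  & G(x) - s = 0, \qquad H(x) - t = 0, \\
  & s \ge 0, \qquad t \ge 0, \qquad s_i\, t_i = 0 \quad (i = 1, \dots, m).
\end{align*}
```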

A First Order Algorithm on an Optimization Problem with Improved Convergence when Problem is Convex

We propose a first-order algorithm, a modified version of FISTA, to solve an optimization problem whose objective is the sum of a possibly nonconvex function with Lipschitz continuous gradient and a convex, possibly nonsmooth, function. The algorithm is shown to have an iteration complexity of \(\mathcal{O}(\epsilon^{-2})\) to find an … Read more
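
For reference, the classical FISTA iteration that the paper modifies is sketched below for $\min_x f(x) + g(x)$ with a prox-friendly $g$; this is the unmodified baseline, since the excerpt does not describe the authors' changes, and the quadratic-plus-$\ell_1$ example is purely illustrative.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=200):
    """Classical FISTA for min f(x) + g(x), step size 1/L.
    Baseline only; the paper's modified version is not reproduced here."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: smooth quadratic plus a small l1 term (soft-thresholding prox).
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -2.0])
grad_f = lambda x: A @ x - b
prox_g = lambda v, step: np.sign(v) * np.maximum(np.abs(v) - 0.1 * step, 0.0)
print(fista(grad_f, prox_g, np.zeros(2), L=np.linalg.norm(A, 2)))
```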

rAdam: restart Adam method to escape from local minima for bound constrained non-linear optimization problems

This paper presents a restart version of the Adaptive Moment Estimation (Adam) method for bound constrained nonlinear optimization problems. It aims to avoid getting trapped in local minima and to enable exploration toward the global optimum. The proposed method combines an adapted restart strategy with a barrier methodology to handle the bound constraints. Computational comparison with … Read more
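
One plausible reading of the scheme, sketched below under heavy assumptions: Adam is run on the objective augmented with a log-barrier for the bounds, and the moment estimates are periodically reset ("restarted"). The restart trigger, the barrier handling, and all parameters here are illustrative, not the authors' exact method.

```python
import numpy as np

def radam_sketch(grad_f, x0, lower, upper, mu=1e-2, lr=0.05,
                 beta1=0.9, beta2=0.999, eps=1e-8,
                 n_iter=500, restart_every=100):
    """Adam on f plus a log-barrier for the bound constraints, with the
    moment estimates reset every `restart_every` iterations (illustrative
    choice; the paper's restart rule may be adaptive)."""
    x = x0.astype(float)
    m, v, t = np.zeros_like(x), np.zeros_like(x), 0
    for k in range(n_iter):
        if k % restart_every == 0:                  # restart: wipe the moments
            m[:], v[:], t = 0.0, 0.0, 0
        # gradient of f(x) - mu * sum(log(x - l) + log(u - x))
        g = grad_f(x) - mu * (1.0 / (x - lower) - 1.0 / (upper - x))
        t += 1
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        x = x - lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)
        x = np.clip(x, lower + 1e-8, upper - 1e-8)  # stay strictly interior
    return x
```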

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving non-convex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for sample size update into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one … Read more
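
A heavily simplified sketch of the quadratic-penalty skeleton is given below, with the sample size simply doubled at each outer iteration as a placeholder for the additional-sampling rule; the names grad_fi, c, jac_c and all parameters are illustrative, not the paper's notation.

```python
import numpy as np

def penalty_sketch(grad_fi, c, jac_c, x0, n_samples, rho=1.0, step=0.01,
                   batch=8, outer=15, inner=50, rng=np.random.default_rng(0)):
    """Quadratic penalty for min (1/N) sum_i f_i(x) s.t. c(x) = 0, with a
    mini-batch gradient whose size doubles each outer iteration (a crude
    stand-in for the additional-sampling strategy)."""
    x = x0.astype(float)
    for _ in range(outer):
        for _ in range(inner):
            idx = rng.choice(n_samples, size=min(batch, n_samples), replace=False)
            g = np.mean([grad_fi(i, x) for i in idx], axis=0)
            g = g + rho * jac_c(x).T @ c(x)   # gradient of (rho/2)*||c(x)||^2
            x = x - step * g
        rho *= 10.0        # tighten the penalty
        batch *= 2         # grow the sample size
    return x
```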

A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems in which the objective function is known only through a zeroth-order stochastic oracle that returns an estimate of the true objective value. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions we prove convergence properties for the proposed algorithm. … Read more
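
The extrapolation idea can be pictured as follows: a tentative step along a search direction is expanded as long as it keeps producing an (estimated) sufficient decrease. The coordinate directions, the decrease test, and all parameters in the sketch below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def df_linesearch_sketch(f_noisy, x0, alpha0=1.0, gamma=1e-6,
                         n_iter=100, shrink=0.5, expand=2.0):
    """Coordinate search with an extrapolation linesearch on a noisy oracle:
    a step is repeatedly enlarged while it keeps yielding sufficient decrease."""
    x = x0.astype(float)
    n = x.size
    alphas = np.full(n, alpha0)
    for _ in range(n_iter):
        for i in range(n):
            e = np.zeros(n); e[i] = 1.0
            moved = False
            for d in (e, -e):                 # try both orientations
                a, fx = alphas[i], f_noisy(x)
                if f_noisy(x + a * d) <= fx - gamma * a * a:
                    # extrapolation: keep expanding the step while it helps
                    while f_noisy(x + expand * a * d) <= fx - gamma * (expand * a) ** 2:
                        a *= expand
                    x, alphas[i], moved = x + a * d, a, True
                    break
            if not moved:
                alphas[i] *= shrink           # no acceptable step: shrink
    return x
```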

On Relatively Smooth Optimization over Riemannian Manifolds

We study optimization over Riemannian embedded submanifolds, where the objective function is relatively smooth in the ambient Euclidean space. Such problems have broad applications but are still largely unexplored. We introduce two Riemannian first-order methods, namely the retraction-based and projection-based Riemannian Bregman gradient methods, by incorporating the Bregman distance into the update steps. The retraction-based … Read more
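
For context, the Bregman distance generated by a reference function $h$ and the relative-smoothness inequality it induces are recalled below; how the retraction-based and projection-based updates use this distance is the subject of the paper and is not reproduced here.

```latex
% Bregman distance and relative smoothness of f with respect to h (constant L):
D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
\qquad
f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x).
```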

Primal-dual global convergence of an augmented Lagrangian method under the error bound condition

This work investigates global convergence properties of a safeguarded augmented Lagrangian method applied to nonlinear programming problems, with an emphasis on the role of constraint qualifications in ensuring boundedness of the Lagrange multiplier estimates, also known as dual sequences. When the constraint set is defined by functions with locally Lipschitz continuous derivatives, we prove that the Error … Read more
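
For reference, the usual safeguarded augmented Lagrangian scheme for equality constraints $c(x) = 0$ is sketched below, with the multiplier estimates projected onto a bounded box after each update; inequality constraints are handled analogously, and the paper's precise assumptions and updates may differ.

```latex
% Augmented Lagrangian and safeguarded multiplier update (illustrative form):
\mathcal{L}_{\rho}(x, \bar{\lambda}) = f(x) + \bar{\lambda}^{\top} c(x)
    + \tfrac{\rho}{2}\,\| c(x) \|^2,
\qquad
\bar{\lambda}^{k+1} = \operatorname{proj}_{[\lambda_{\min},\,\lambda_{\max}]}
    \big( \bar{\lambda}^{k} + \rho_k\, c(x^{k+1}) \big).
```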

On the Convergence and Complexity of Proximal Gradient and Accelerated Proximal Gradient Methods under Adaptive Gradient Estimation

In this paper, we propose a proximal gradient method and an accelerated proximal gradient method for solving composite optimization problems, where the objective function is the sum of a smooth and a convex, possibly nonsmooth, function. We consider settings where the smooth component is either a finite-sum function or an expectation of a stochastic function, … Read more
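
A minimal sketch of the stochastic proximal gradient skeleton in the finite-sum case is given below; the geometrically growing batch size is only a placeholder for the paper's adaptive gradient-estimation rule, and the accelerated variant is omitted.

```python
import numpy as np

def stoch_prox_grad(grad_fi, prox_g, x0, n_samples, L, outer=20, inner=50,
                    batch=4, rng=np.random.default_rng(0)):
    """Proximal gradient for min (1/N) sum_i f_i(x) + g(x) with a mini-batch
    gradient estimate; the batch growth stands in for the adaptive rule."""
    x = x0.astype(float)
    for _ in range(outer):
        for _ in range(inner):
            idx = rng.choice(n_samples, size=min(batch, n_samples), replace=False)
            g = np.mean([grad_fi(i, x) for i in idx], axis=0)
            x = prox_g(x - g / L, 1.0 / L)   # proximal step with step size 1/L
        batch *= 2
    return x
```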

Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees and lower iteration complexity. With the rise of the big-data era, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN … Read more
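
For orientation, the model minimized at each CRN iteration is recalled below (using the $\sigma_k/3$ convention); in the stochastic setting $g_k$ and $H_k$ are sampled gradient and Hessian estimates, and the way momentum enters the proposed updates is part of the paper and not reproduced here.

```latex
% Cubic-regularized Newton model and step (standard form):
m_k(s) = \langle g_k, s \rangle + \tfrac12 \langle H_k s, s \rangle
         + \tfrac{\sigma_k}{3} \| s \|^3,
\qquad
x_{k+1} = x_k + \arg\min_{s}\, m_k(s).
```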