Objective-Function Free Multi-Objective Optimization: Rate of Convergence and Performance of an Adagrad-like algorithm

We propose an Adagrad-like algorithm for multi-objective unconstrained optimization that relies only on the computation of a common descent direction. Unlike classical local algorithms for multi-objective optimization, our approach does not rely on the dominance property to accept new iterates, which allows for a flexible and objective-function-free optimization framework. New points are obtained using an … Read more
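As a rough illustration only (the paper's update rule is truncated above, so this is not the authors' algorithm): for two objectives, a common descent direction can be taken as the negated minimum-norm element of the convex hull of the two gradients, and an Adagrad-style per-coordinate scaling then gives a step that needs no objective-function evaluation to be accepted. Names such as `adagrad_mo_step` are hypothetical.

```python
import numpy as np

def common_descent_direction(g1, g2):
    # Minimum-norm element of the convex hull of two gradients, negated:
    # the classical bi-objective common (steepest) descent direction.
    diff = g1 - g2
    denom = float(diff @ diff)
    t = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

def adagrad_mo_step(x, d, accum, eta=0.5, eps=1e-8):
    # One Adagrad-style step along the common descent direction d:
    # per-coordinate stepsizes from accumulated squared entries of d;
    # no objective-function value is used to accept the new point.
    accum = accum + d * d
    return x + eta * d / (np.sqrt(accum) + eps), accum
```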

An adaptive line-search-free multiobjective gradient method and its iteration-complexity analysis

This work introduces an Adaptive Line-Search-Free Multiobjective Gradient (AMG) method for solving smooth multiobjective optimization problems. The proposed approach automatically adjusts stepsizes based on steepest descent directions, promoting robustness with respect to stepsize choice while maintaining low computational cost. The method is specifically tailored to the multiobjective setting and does not rely on function evaluations, … Read more
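The paper's adaptive, line-search-free stepsize rule is not reproduced here; the sketch below only shows the multiobjective steepest-descent-direction subproblem such methods build on, namely the minimum-norm element of the convex hull of the objective gradients, solved as a small QP over the unit simplex with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_common_descent(grads):
    # grads: the m objective gradients at the current point, shape (m, n).
    # The multiobjective steepest descent direction is the negative of the
    # minimum-norm element of the convex hull of the gradients, obtained by
    # minimizing ||G^T w||^2 over the unit simplex (a small convex QP).
    G = np.asarray(grads, dtype=float)
    m = G.shape[0]
    obj = lambda w: 0.5 * np.dot(G.T @ w, G.T @ w)
    jac = lambda w: G @ (G.T @ w)
    res = minimize(obj, np.full(m, 1.0 / m), jac=jac, method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    return -(G.T @ res.x)   # zero (up to tolerance) at Pareto-critical points
```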

On Approximate Computation of Critical Points

We show that computing even very coarse approximations of critical points is intractable for simple classes of nonconvex functions. More concretely, we prove that if there exists a polynomial-time algorithm that takes as input a polynomial in $n$ variables of constant degree (as low as three) and outputs a point whose gradient has Euclidean norm … Read more

Iteration complexity of the Difference-of-Convex Algorithm for unconstrained optimization: a simple proof

We propose a simple proof of the worst-case iteration complexity of the Difference of Convex functions Algorithm (DCA) for unconstrained minimization, showing that, globally, the norms of the objective function's gradients at the iterates converge to zero like $o(1/k)$. A small example is also provided indicating that the rate cannot … Read more
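For context (a generic illustration, not the construction from the paper): DCA minimizes f = g - h with g, h convex by linearizing h at the current iterate and minimizing the resulting convex surrogate. On the toy double well f(x) = x⁴/4 - x²/2 with the split g(x) = x⁴/4, h(x) = x²/2, each iteration has a closed form.

```python
import numpy as np

def dca_double_well(x0, iters=30):
    # DCA on f(x) = x**4/4 - x**2/2 with g(x) = x**4/4 and h(x) = x**2/2.
    # Linearizing h at x_k gives the convex surrogate x**4/4 - x_k * x,
    # whose minimizer solves x**3 = x_k, i.e. x_{k+1} = cbrt(x_k).
    x = x0
    for _ in range(iters):
        x = np.cbrt(x)
    return x

print(dca_double_well(0.2))   # ≈ 1.0, a critical point: f'(x) = x**3 - x = 0
```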

A Majorization-Minimization approach for multiclass classification in a big data scenario

This work presents a novel optimization approach for training linear classifiers in multiclass classification tasks, focusing on a regularized and smooth Weston-Watkins support vector machine (SVM) model. We propose a Majorization-Minimization (MM) algorithm to solve the resulting Lipschitz-differentiable optimization problem. To enhance the scalability of the algorithm when tackling large datasets, we introduce an incremental … Read more
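The paper's Weston-Watkins-specific majorizer is not reproduced here; the sketch below only shows the generic MM mechanism it builds on, using a Lipschitz quadratic majorizer of a smooth objective (whose closed-form minimizer is a gradient-descent-like step), applied to an assumed ridge-regression toy problem.

```python
import numpy as np

def mm_quadratic(grad, L, w0, iters=300):
    # Generic MM with a Lipschitz quadratic majorizer: at w_k, minimize
    #   f(w_k) + <grad f(w_k), w - w_k> + (L/2) * ||w - w_k||**2,
    # whose closed-form minimizer is w_k - grad f(w_k) / L.  The surrogate
    # touches f at w_k and upper-bounds it, so the objective never increases.
    w = np.array(w0, dtype=float)
    for _ in range(iters):
        w = w - grad(w) / L
    return w

# Toy usage on an assumed ridge objective 0.5*||Xw - y||**2 + 0.5*lam*||w||**2.
rng = np.random.default_rng(0)
X, y, lam = rng.standard_normal((50, 5)), rng.standard_normal(50), 0.1
grad = lambda w: X.T @ (X @ w - y) + lam * w
L = np.linalg.norm(X, 2) ** 2 + lam          # Lipschitz constant of grad
w_hat = mm_quadratic(grad, L, np.zeros(5))
```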

Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

We consider an effective new method for solving trust-region and norm-regularization problems that arise as subproblems in many optimization applications. We show that the solutions to such subproblems lie, to a good approximation, on a manifold of very low rank as a function of the controlling parameter (trust-region radius or regularization weight). Based on this, we build a … Read more
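This is not the paper's extended-Krylov method; for reference, a classical Krylov-subspace baseline for the trust-region subproblem min_p g·p + ½ p·Hp subject to ‖p‖ ≤ Δ is the Steihaug-Toint truncated conjugate gradient iteration, sketched here.

```python
import numpy as np

def _to_boundary(p, d, delta):
    # Positive tau with ||p + tau*d|| = delta.
    a, b, c = d @ d, p @ d, p @ p - delta ** 2
    return (-b + np.sqrt(b * b - a * c)) / a

def steihaug_cg(H, g, delta, tol=1e-8, maxiter=None):
    # Truncated CG for  min_p  g @ p + 0.5 * p @ H @ p  s.t.  ||p|| <= delta.
    n = g.size
    maxiter = maxiter or 2 * n
    p, r = np.zeros(n), g.copy()
    if np.linalg.norm(r) < tol:
        return p
    d = -r
    for _ in range(maxiter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:                          # negative curvature: go to the boundary
            return p + _to_boundary(p, d, delta) * d
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:   # step leaves the region: stop on the boundary
            return p + _to_boundary(p, d, delta) * d
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        d = -r_next + ((r_next @ r_next) / (r @ r)) * d
        p, r = p_next, r_next
    return p
```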

Lyapunov-based Analysis of a First-Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method generalizes the classical gradient descent algorithm by improving the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
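The NAPG$_\alpha$ variant and the strong-weak convex analysis are specific to the paper; to fix notation, a standard accelerated proximal gradient (FISTA-style) iteration for $F(x) = f(x) + \lambda\|x\|_1$ with $f$ $L$-smooth looks as follows.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_prox_grad(grad_f, L, lam, x0, iters=500):
    # Standard accelerated proximal gradient (FISTA) for f(x) + lam*||x||_1,
    # with f L-smooth; achieves the O(1/t^2) rate in the convex case.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = soft_threshold(y - grad_f(y) / L, lam / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
```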

Optimization in Theory and Practice

Algorithms for continuous optimization problems have a rich history of design and innovation over the past several decades, in which mathematical analysis of their convergence and complexity properties plays a central role. Beyond their theoretical properties, optimization algorithms are also interesting for their practical usefulness as computational tools for solving real-world problems. There are often … Read more

Continuous-time Analysis of a Stochastic ADMM Method for Nonconvex Composite Optimization

In this paper, we focus on nonconvex composite optimization, where the objective is the sum of a smooth but possibly nonconvex function and the composition of a weakly convex function with a linear operator. By leveraging a smoothing technique based on the Moreau envelope, we propose a stochastic proximal linearized ADMM algorithm (SPLA). To understand its … Read more
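A small illustration of the Moreau-envelope smoothing mentioned above (not the SPLA algorithm itself): the envelope of the absolute value is the Huber function, which is smooth everywhere and whose gradient is the proximal residual divided by the smoothing parameter.

```python
import numpy as np

def moreau_envelope_abs(x, mu):
    # Moreau envelope of |.| with parameter mu:
    #   e_mu(x) = min_y |y| + (1/(2*mu)) * (y - x)**2,
    # which equals the Huber function: quadratic near 0, linear in the tails.
    return np.where(np.abs(x) <= mu, x ** 2 / (2 * mu), np.abs(x) - mu / 2)

def prox_abs(x, mu):
    # prox_{mu*|.|}(x) = soft-thresholding; the envelope's gradient is
    # grad e_mu(x) = (x - prox_abs(x, mu)) / mu.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)
```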

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex, twice continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with Armijo line search, using approximate Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) … Read more
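This is not the paper's full active-set scheme; below is a sketch of one inner step of this flavor, assuming hypothetical callables `f`, `grad`, and `hess` for the restriction of the objective to the free variables of the current face: an approximate Newton direction from SciPy's MINRES (well defined for symmetric, possibly indefinite Hessians) followed by Armijo backtracking.

```python
import numpy as np
from scipy.sparse.linalg import minres

def newton_mr_armijo_step(f, grad, hess, x, sigma=1e-4, beta=0.5, mr_iter=50):
    # Approximate Newton direction from MINRES applied to H d = -g,
    # then an Armijo backtracking line search along d.
    g = grad(x)
    d, _ = minres(hess(x), -g, maxiter=mr_iter)
    if g @ d >= 0:                  # safeguard: fall back to steepest descent
        d = -g
    t, fx, slope = 1.0, f(x), g @ d
    for _ in range(60):             # Armijo backtracking
        if f(x + t * d) <= fx + sigma * t * slope:
            break
        t *= beta
    return x + t * d
```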