Addressing Estimation Errors through Robust Portfolio Optimization

It is well known that the performance of the classical Markowitz model for portfolio optimization is extremely sensitive to estimation errors in the expected asset returns. Robust optimization mitigates this issue. We focus on ellipsoidal uncertainty sets around a point estimate of the expected asset returns. An important issue is the choice of the parameters …
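For concreteness, the textbook robust counterpart under an ellipsoidal uncertainty set turns the inner worst case into a penalty on portfolio risk; this is a generic sketch in which the radius $\kappa$ and shape matrix $A$ are placeholder parameters (choosing them is precisely the issue the abstract raises), not the paper's calibrated values:

$$ \max_{w \in \mathcal{W}} \; \min_{\|A^{-1/2}(\mu - \hat{\mu})\|_2 \le \kappa} \mu^\top w \;=\; \max_{w \in \mathcal{W}} \; \hat{\mu}^\top w - \kappa \,\|A^{1/2} w\|_2, $$

where $\hat{\mu}$ is the point estimate and $\mathcal{W}$ the set of admissible portfolios.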

An inertial Riemannian gradient ADMM for nonsmooth manifold optimization

The Alternating Direction Method of Multipliers (ADMM) is widely recognized for its efficiency in solving separable optimization problems. However, its application to optimization on Riemannian manifolds remains a significant challenge. In this paper, we propose a novel inertial Riemannian gradient ADMM (iRG-ADMM) to solve Riemannian optimization problems with nonlinear constraints. Our key contributions are as …
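The excerpt is cut off before the update rules, so the following is only a minimal sketch of the two ingredients named in iRG-ADMM, inertial extrapolation and a Riemannian gradient step, illustrated on the unit sphere with the projection retraction; egrad_f, alpha, and beta are hypothetical placeholders, and the multiplier/splitting structure of the actual method is not shown:

    import numpy as np

    def sphere_rgrad(x, g):
        # Riemannian gradient on the unit sphere: project the Euclidean
        # gradient g onto the tangent space at x.
        return g - (x @ g) * x

    def sphere_retract(x, v):
        # Projection retraction: move in the ambient space, renormalize.
        y = x + v
        return y / np.linalg.norm(y)

    def inertial_rgrad_step(x, x_prev, egrad_f, alpha=0.1, beta=0.5):
        # Heavy-ball style extrapolation followed by a Riemannian gradient
        # step (a sketch of the ingredients, not the paper's full iRG-ADMM).
        y = sphere_retract(x, beta * (x - x_prev))
        return sphere_retract(y, -alpha * sphere_rgrad(y, egrad_f(y)))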

How Stringent is the Linear Independence Kink Qualification in Abs-Smooth Optimization?

Abs-smooth functions are given by compositions of smooth functions and the evaluation of the absolute value. The linear independence kink qualification (LIKQ) is a fundamental assumption in optimization problems governed by these abs-smooth functions, generalizing the well-known LICQ from smooth optimization. In particular, provided that LIKQ holds, it is possible to derive optimality conditions for …
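A minimal example of the condition (our own illustration, not taken from the paper): for the abs-smooth function $f(x) = |x_1| + |x_1 - x_2|$, the kinks are controlled by the switching functions $\sigma_1(x) = x_1$ and $\sigma_2(x) = x_1 - x_2$. At the origin both are active, and LIKQ requires their gradients

$$ \nabla\sigma_1(0) = (1, 0)^\top, \qquad \nabla\sigma_2(0) = (1, -1)^\top $$

to be linearly independent, which they are, so LIKQ holds at $x = 0$.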

A proximal alternating direction method of multipliers with a proximal-perturbed Lagrangian function for nonconvex and nonsmooth structured optimization

Building on [J. Glob. Optim. 89 (2024) 899–926], we continue to focus on solving a nonconvex and nonsmooth structured optimization problem with linear and closed convex set constraints, whose objective function is the sum of a convex (possibly nonsmooth) function and a smooth (possibly nonconvex) function. Based on the traditional augmented Lagrangian construction, we …
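For reference, the traditional construction the abstract starts from: for $\min_{x,y} f(x) + g(y)$ subject to $Ax + By = b$, the augmented Lagrangian is

$$ \mathcal{L}_\rho(x, y, \lambda) = f(x) + g(y) + \langle \lambda, Ax + By - b \rangle + \frac{\rho}{2}\,\|Ax + By - b\|^2; $$

the paper's proximal-perturbed variant modifies this construction (the details lie beyond the excerpt).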

Accelerating Proximal Gradient Descent via Silver Stepsizes

Surprisingly, recent work has shown that gradient descent can be accelerated without using momentum, just by judiciously choosing stepsizes. An open question raised by several papers is whether this phenomenon of stepsize-based acceleration holds more generally for constrained and/or composite convex optimization via projected and/or proximal versions of gradient descent. We answer this in the affirmative …
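For background, the Silver Stepsize Schedule from the unconstrained smooth setting has a closed form via the 2-adic valuation and the silver ratio $\rho = 1 + \sqrt{2}$; the sketch below (stepsizes in units of $1/L$) reflects our reading of that schedule, with the projected/proximal extension being what this abstract announces:

    import math

    RHO = 1 + math.sqrt(2)  # the silver ratio

    def v2(n):
        # 2-adic valuation: the largest power of 2 dividing n.
        v = 0
        while n % 2 == 0:
            n //= 2
            v += 1
        return v

    def silver_stepsizes(T):
        # Silver schedule h_t = 1 + RHO**(v2(t+1) - 1) for t = 0..T-1.
        return [1 + RHO ** (v2(t + 1) - 1) for t in range(T)]

    # First 7 entries: ~[1.41, 2.0, 1.41, 3.41, 1.41, 2.0, 1.41]
    print(silver_stepsizes(7))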

Acceleration by Random Stepsizes: Hedging, Equalization, and the Arcsine Stepsize Schedule

We show that for separable convex optimization, random stepsizes fully accelerate Gradient Descent. Specifically, using inverse stepsizes i.i.d. from the Arcsine distribution improves the iteration complexity from $O(k)$ to $O(k^{1/2})$, where $k$ is the condition number. No momentum or other algorithmic modifications are required. This result is incomparable to the (deterministic) Silver Stepsize Schedule, which …
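The Arcsine distribution on $[0,1]$ is Beta(1/2, 1/2), so the recipe is easy to sketch; here the rescaling of the inverse stepsizes to a spectrum range $[\mu, L]$ is our assumption about how the support is normalized, not a detail stated in the excerpt:

    import numpy as np

    def arcsine_inv_stepsizes(T, mu, L, rng=None):
        # Inverse stepsizes drawn i.i.d. from the Arcsine = Beta(1/2, 1/2)
        # distribution, rescaled to the assumed support [mu, L].
        rng = rng or np.random.default_rng(0)
        return mu + (L - mu) * rng.beta(0.5, 0.5, size=T)

    def gd_random_stepsizes(grad, x0, T, mu, L):
        # Plain gradient descent with stepsizes 1/s_t; no momentum or
        # other algorithmic modifications, per the abstract.
        x = np.asarray(x0, dtype=float)
        for s in arcsine_inv_stepsizes(T, mu, L):
            x = x - grad(x) / s
        return x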

A proximal-perturbed Bregman ADMM for solving nonsmooth and nonconvex composite optimization

In this paper, we focus on a linearly constrained composite minimization problem involving a possibly nonsmooth and nonconvex objective function. Unlike the traditional construction of the augmented Lagrangian function, we design a proximal-perturbed augmented Lagrangian to develop a new Bregman-type alternating direction method of multipliers. Under mild assumptions, we prove that the augmented Lagrangian sequence …
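For readers new to the Bregman machinery, the distance-generating construction behind the method's name is just the following (a generic definition, not the paper's specific choice of kernel $\phi$):

    import numpy as np

    def bregman_distance(phi, grad_phi, x, y):
        # D_phi(x, y) = phi(x) - phi(y) - <grad_phi(y), x - y>:
        # the term a Bregman ADMM uses in place of the squared
        # Euclidean proximal term.
        return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

    # Sanity check: phi(x) = 0.5 ||x||^2 recovers 0.5 ||x - y||^2.
    phi = lambda z: 0.5 * np.dot(z, z)
    grad_phi = lambda z: z
    x, y = np.array([1.0, 2.0]), np.array([0.0, 1.0])
    assert np.isclose(bregman_distance(phi, grad_phi, x, y), 1.0)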

Reduced Sample Complexity in Scenario-Based Control System Design via Constraint Scaling

The scenario approach is widely used in robust control system design and chance-constrained optimization, maintaining convexity without requiring assumptions about the probability distribution of uncertain parameters. However, the approach can demand large sample sizes, making it intractable for safety-critical applications that require very low levels of constraint violation. To address this challenge, we propose a …
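To see why small violation levels are expensive: in the classical scenario approach for a convex program with $d$ decision variables, enforcing $N$ i.i.d. sampled constraints guarantees, with probability at least $1 - \beta$ over the sample, a constraint-violation level of at most $\varepsilon$, provided (the standard Campi-Garatti condition)

$$ \sum_{i=0}^{d-1} \binom{N}{i}\, \varepsilon^{i} (1 - \varepsilon)^{N - i} \;\le\; \beta, $$

so $N$ grows roughly like $(d + \ln(1/\beta))/\varepsilon$ and blows up as $\varepsilon \to 0$; reducing this sample demand is the point of the proposed constraint scaling.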

The Proximal Bundle Algorithm Under a Frank-Wolfe Perspective: an Improved Complexity Analysis

The proximal bundle algorithm (PBA) is a fundamental and computationally effective method for solving optimization problems with nonsmooth components. We investigate its convergence rate, focusing on composite settings where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the …
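For orientation, the null steps in question solve the classical bundle subproblem, which minimizes a cutting-plane model plus a proximal term around the stability center $\hat{x}_k$ (a generic statement of the subproblem; the composite smooth-plus-piecewise-linear setting of the paper refines it):

$$ x_{k+1} \in \arg\min_{y} \; \max_{i \in B_k} \big\{ f(x_i) + \langle g_i, y - x_i \rangle \big\} + \frac{1}{2 t_k}\, \|y - \hat{x}_k\|^2, $$

with bundle $B_k$, subgradients $g_i \in \partial f(x_i)$, and proximal parameter $t_k > 0$; a null step enriches the bundle without moving $\hat{x}_k$.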

Jordan and isometric cone automorphisms in Euclidean Jordan algebras

Every symmetric cone K arises as the cone of squares in a Euclidean Jordan algebra V. As V is a real inner-product space, we may denote by Isom(V) its group of isometries. The groups JAut(V) of its Jordan-algebra automorphisms and Aut(K) of the linear cone automorphisms are then related. For certain inner products, JAut(V) = …
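A standard example to fix ideas (ours, not the paper's): take $V = \mathcal{S}^n$, the symmetric $n \times n$ matrices with the trace inner product, whose cone of squares $K$ is the positive semidefinite cone. The cone automorphisms are the congruences $X \mapsto AXA^\top$ with $A$ invertible, while the Jordan-algebra automorphisms are exactly the orthogonal congruences $X \mapsto QXQ^\top$ with $Q \in O(n)$; so $\mathrm{JAut}(V)$ is a proper subgroup of $\mathrm{Aut}(K)$ for $n \ge 2$, and the inner product governs which congruences are also isometries.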