Convergence Rate of Projected Subgradient Method with Time-varying Step-sizes

We establish the optimal ergodic convergence rate for the classical projected subgradient method with time-varying step-sizes. This rate is preserved even if the most recent points are weighted slightly more heavily, thereby relaxing the ergodic averaging.
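As a point of reference, here is a minimal sketch of the classical method the abstract builds on; the \(1/\sqrt{k+1}\) step-size schedule, the uniform ergodic average, and the toy instance below are illustrative assumptions, not the paper's optimal weighting scheme.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, steps=1000):
    """Projected subgradient method with time-varying step-sizes,
    returning the ergodic (uniform) average of the iterates."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for k in range(steps):
        lam = 1.0 / np.sqrt(k + 1)           # assumed diminishing step-size schedule
        x = project(x - lam * subgrad(x))    # subgradient step followed by projection
        avg += (x - avg) / (k + 1)           # running ergodic average
    return avg

# Hypothetical instance: minimize ||x||_1 over the unit Euclidean ball.
subgrad = lambda x: np.sign(x)                       # a subgradient of ||.||_1
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
x_bar = projected_subgradient(subgrad, project, x0=[2.0, -3.0])
```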

Convergence of the Chambolle–Pock Algorithm in the Absence of Monotonicity

The Chambolle-Pock algorithm (CPA), also known as the primal-dual hybrid gradient method (PDHG), has surged in popularity in the last decade due to its success in solving convex/monotone structured problems. This work provides convergence results for problems with varying degrees of (non)monotonicity, quantified through a so-called oblique weak Minty condition on the associated primal-dual operator. …
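For orientation, below is a minimal sketch of the standard (monotone) CPA/PDHG iteration for \(\min_x g(x) + f(Kx)\), using the usual step-size condition \(\sigma\tau\|K\|^2 < 1\); the nonmonotone setting and the oblique weak Minty condition analyzed in the paper are not reproduced here, and the instance is hypothetical.

```python
import numpy as np

def chambolle_pock(K, prox_g, prox_fstar, x0, y0, iters=500):
    """Standard Chambolle-Pock (PDHG) iteration for min_x g(x) + f(Kx)
    in the classical monotone setting."""
    tau = sigma = 0.9 / np.linalg.norm(K, 2)  # conservative steps: sigma*tau*||K||^2 < 1
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = prox_g(x - tau * K.T @ y, tau)                   # primal proximal step
        y = prox_fstar(y + sigma * K @ (2 * x_new - x), sigma)   # dual step with extrapolation
        x = x_new
    return x, y

# Hypothetical instance: min_x 0.5*||x - c||^2 + ||K x||_1.
rng = np.random.default_rng(0)
K, c = rng.standard_normal((5, 3)), rng.standard_normal(3)
prox_g = lambda v, t: (v + t * c) / (1 + t)      # prox of 0.5*||.-c||^2
prox_fstar = lambda y, s: np.clip(y, -1.0, 1.0)  # prox of (||.||_1)* = projection onto l_inf ball
x_opt, _ = chambolle_pock(K, prox_g, prox_fstar, np.zeros(3), np.zeros(5))
```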

Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The structured saddle-point problem involving the infimal convolution in real Hilbert spaces finds applicability in many applied mathematics disciplines. To solve this generic problem, we develop a stochastic primal-dual splitting algorithm with loopless variance reduction. We first prove the weak almost sure convergence of the iterates. We then demonstrate that our algorithm achieves …
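The excerpt does not spell out the "loopless" variance reduction; the sketch below shows the generic loopless (L-SVRG-style) estimator for a smooth finite sum, in which the full-gradient snapshot is refreshed with a small probability at every step instead of in an outer loop. It illustrates the idea only and is not the paper's doubly stochastic primal-dual algorithm.

```python
import numpy as np

def lsvrg_step(grads, x, w, full_grad, step, p, rng):
    """One loopless variance-reduced gradient step (L-SVRG style):
    unbiased estimate with a probabilistic snapshot refresh."""
    i = rng.integers(len(grads))
    v = grads[i](x) - grads[i](w) + full_grad   # unbiased, variance-reduced estimate
    x_new = x - step * v
    if rng.random() < p:                        # refresh snapshot with probability p
        w = x_new.copy()
        full_grad = np.mean([g(w) for g in grads], axis=0)
    return x_new, w, full_grad

# Hypothetical smooth finite-sum instance: f(x) = mean_i 0.5*||x - a_i||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
grads = [lambda x, a=a: x - a for a in A]
x = w = np.zeros(3)
full_grad = np.mean([g(w) for g in grads], axis=0)
for _ in range(200):
    x, w, full_grad = lsvrg_step(grads, x, w, full_grad, step=0.5, p=0.1, rng=rng)
```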

The convergence rate of the Sandwiching algorithm for convex bounded multiobjective optimization

Sandwiching algorithms, also known as Benson-type algorithms, approximate the nondominated set of convex bounded multiobjective optimization problems by constructing and iteratively improving polyhedral inner and outer approximations. Using a set-valued metric, an estimate of the approximation quality is obtained as the distance between the inner and outer approximations. The convergence of the algorithm is evaluated …
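The set-valued metric is not specified in the excerpt; one common choice for such a quality estimate is the Hausdorff distance between the two polyhedral approximations,
\[
d_H(\mathcal{I}_k,\mathcal{O}_k) \;=\; \max\Bigl\{\, \sup_{y\in\mathcal{I}_k}\,\inf_{z\in\mathcal{O}_k}\|y-z\|,\;\; \sup_{z\in\mathcal{O}_k}\,\inf_{y\in\mathcal{I}_k}\|y-z\| \Bigr\},
\]
which, since the approximated set is sandwiched between \(\mathcal{I}_k\) and \(\mathcal{O}_k\), bounds the approximation error of either one.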

Singular value half thresholding algorithm for $\ell_p$ regularized matrix optimization problems

In this paper, we study the low-rank matrix optimization problem, where the penalty term is the $\ell_p~(0<p<1)$ regularization. Inspired by the good performance of the half thresholding function in sparse/low-rank recovery problems, we propose a singular value half thresholding (SVHT) algorithm to solve the $\ell_p$ regularized matrix optimization problem. The main iteration in the SVHT algorithm makes …
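For the special case $p = 1/2$, the half thresholding function has a known closed form (Xu et al., 2012), and a natural proximal-gradient-style iteration applies it to the singular values of a gradient step. The sketch below illustrates that structure; the paper's actual SVHT iteration may differ in its details.

```python
import numpy as np

def half_threshold(s, mu):
    """Closed-form half thresholding operator for the penalty mu*|t|^{1/2}
    (Xu et al., 2012), applied entrywise to singular values."""
    out = np.zeros_like(s)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * mu ** (2.0 / 3.0)
    big = np.abs(s) > thresh
    phi = np.arccos((mu / 8.0) * (np.abs(s[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * s[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def svht_step(grad_f, X, lam, step):
    """One sketch iteration: gradient step on the smooth part, then half
    thresholding of the singular values (p = 1/2 case)."""
    U, s, Vt = np.linalg.svd(X - step * grad_f(X), full_matrices=False)
    return U @ np.diag(half_threshold(s, step * lam)) @ Vt
```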

Accelerated Gradient Dynamics on Riemannian Manifolds: Faster Rate and Trajectory Convergence

To minimize a differentiable geodesically convex function, we study a second-order dynamical system on Riemannian manifolds with an asymptotically vanishing damping term of the form \(\alpha/t\). For positive values of \(\alpha\), we derive convergence rates for the objective values and convergence of the trajectory. We emphasize the crucial role of the curvature of the …
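Schematically, and in the Euclidean special case, such a system takes the classical asymptotically-vanishing-damping form
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0,
\]
for which \(f(x(t)) - \min f = O(1/t^2)\) once \(\alpha \ge 3\); on a Riemannian manifold, \(\ddot{x}\) is replaced by the covariant derivative \(\tfrac{D}{\mathrm{d}t}\dot{x}(t)\) and \(\nabla f\) by the Riemannian gradient \(\operatorname{grad} f\).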

Near-optimal closed-loop method via Lyapunov damping for convex optimization

We introduce an autonomous system with closed-loop damping for first-order convex optimization. While, to this day, optimal convergence rates are achieved only by non-autonomous methods via open-loop damping (e.g., Nesterov’s algorithm), we show that our system is the first to feature closed-loop damping while exhibiting a rate arbitrarily close to the optimal one. …
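To illustrate the distinction (schematically; the paper's Lyapunov damping is not reproduced here): open-loop damping prescribes the friction coefficient as an explicit function of time, whereas closed-loop damping computes it from the current state,
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0 \quad\text{(open loop)},
\qquad
\ddot{x}(t) + \gamma\bigl(x(t),\dot{x}(t)\bigr)\,\dot{x}(t) + \nabla f(x(t)) = 0 \quad\text{(closed loop)}.
\]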

Distributionally robust optimization through the lens of submodularity

Distributionally robust optimization is used to solve decision-making problems under adversarial uncertainty, where the distribution of the uncertainty is itself ambiguous. In this paper, we identify a class of these instances that is solvable in polynomial time by viewing it through the lens of submodularity. We show that the sharpest upper bound on the …
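In its generic form (the paper's specific instance class is not reproduced here), a distributionally robust problem reads
\[
\min_{x \in X} \; \sup_{\mathbb{P} \in \mathcal{A}} \; \mathbb{E}_{\mathbb{P}}\bigl[f(x,\xi)\bigr],
\]
where the ambiguity set \(\mathcal{A}\) collects the distributions consistent with the available information; the inner supremum is the kind of tight upper bound the abstract refers to.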

Addressing Hierarchical Jointly-Convex Generalized Nash Equilibrium Problems with Nonsmooth Payoffs

We consider a Generalized Nash Equilibrium Problem whose joint feasible region is implicitly defined as the solution set of another Nash game. This structure arises, e.g., in multi-portfolio selection contexts, whenever agents interact at different hierarchical levels. We consider nonsmooth terms in all players’ objectives to promote, for example, sparsity in the solution. Under standard …
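Schematically (the notation below is illustrative, not the paper's), each player \(i\) solves
\[
\min_{x_i} \;\; \theta_i(x_i, x_{-i}) + \varphi_i(x_i)
\quad\text{s.t.}\quad (x_i, x_{-i}) \in \mathcal{S},
\]
where \(\varphi_i\) is a nonsmooth term (e.g., an \(\ell_1\) penalty promoting sparsity) and the joint feasible set \(\mathcal{S}\) is the solution set of the lower-level Nash game.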

Local Convergence Analysis of an Inexact Trust-Region Method for Nonsmooth Optimization

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space; such problems are ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated …
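The problem class in question is
\[
\min_{x \in \mathcal{H}} \; f(x) + \varphi(x),
\]
with \(f\) smooth and possibly nonconvex, \(\varphi\) convex and possibly nonsmooth, and \(\mathcal{H}\) a Hilbert space. One natural form of the trust-region subproblem in this setting is \(\min_{\|s\|\le\Delta_k} m_k(s) + \varphi(x_k+s)\), where \(m_k\) is a local model of \(f\) at \(x_k\); the precise model and inexactness conditions are those of the cited paper.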