A First Order Algorithm on an Optimization Problem with Improved Convergence when Problem is Convex

We propose a first-order algorithm, a modified version of FISTA, to solve an optimization problem whose objective is the sum of a possibly nonconvex function with Lipschitz continuous gradient and a convex function that may be nonsmooth. The algorithm is shown to have an iteration complexity of \(\mathcal{O}(\epsilon^{-2})\) to find an …
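For context, the classical FISTA scheme that this paper modifies alternates a gradient step on the smooth part with a proximal step on the nonsmooth convex part, plus Nesterov momentum. Below is a minimal sketch of the standard method on a toy convex instance (least squares plus an \(\ell_1\) term); the names `fista`, `soft_threshold`, and the toy data are illustrative assumptions, and the paper's modified variant with nonconvex \(f\) differs from this baseline.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, a common nonsmooth convex g.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, prox_g, x0, step, n_iter=200):
    """Classical FISTA on F(x) = f(x) + g(x): gradient step on the smooth f,
    prox step on the convex g, plus Nesterov momentum. Illustrative baseline,
    not the modified algorithm proposed in the paper."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_next = prox_g(y - step * grad_f(y), step)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Toy instance: f(x) = 0.5*||Ax - b||^2 (smooth), g(x) = lam*||x||_1 (convex, nonsmooth).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
x_star = fista(lambda x: A.T @ (A @ x - b),
               lambda v, s: soft_threshold(v, lam * s),
               np.zeros(10), 1.0 / L)
```

The step size `1/L` is the usual choice for the convex case; in the nonconvex regime considered by the paper, the step-size rule and the momentum restart are precisely where a modification would be needed.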

Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications

In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably work and guarantee sublinear regret. However, only limited results are known when the gradient estimate has a heavy tail, i.e., when the stochastic gradient only admits a finite \(\mathsf{p}\)-th central moment for some \(\mathsf{p}\in\left(1,2\right]\). Motivated by this, the present work examines …
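To make the heavy-tailed setting concrete, here is a sketch of online gradient descent with per-round gradient clipping, a standard device when only a finite \(\mathsf{p}\)-th moment is available. This is an illustration of the problem setting only; the clipping rule, the function names, and the Student-t noise model are assumptions of this sketch, not necessarily what the paper analyzes.

```python
import numpy as np

def clip(g, tau):
    # Clip the stochastic gradient to norm tau; a standard remedy when the
    # noise only has a finite p-th moment for some p in (1, 2].
    n = np.linalg.norm(g)
    return g if n <= tau else (tau / n) * g

def ogd_clipped(grad_oracle, x0, steps, tau):
    """Online gradient descent with clipped stochastic gradients (sketch)."""
    x = x0.copy()
    iterates = [x.copy()]
    for eta in steps:
        x = x - eta * clip(grad_oracle(x), tau)
        iterates.append(x.copy())
    return iterates

# Toy quadratic loss with heavy-tailed noise: Student t with df = 1.5 has a
# finite p-th moment only for p < 1.5, so the variance is infinite.
rng = np.random.default_rng(1)
grad = lambda x: 2.0 * x + rng.standard_t(1.5, size=x.shape)
traj = ogd_clipped(grad, np.full(3, 5.0),
                   [0.1 / np.sqrt(t + 1) for t in range(500)], tau=10.0)
```

Without clipping, a single heavy-tailed sample can move the iterate arbitrarily far; with clipping, each round's displacement is bounded by `eta * tau`, which is the basic mechanism behind regret guarantees in this regime.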

The Optimal Smoothings of Sublinear Functions and Convex Cones

This paper considers the problem of smoothing convex functions and sets, seeking the smooth convex function or set nearest to a given one. For convex cones and sublinear functions, a full characterization of the set of all optimal smoothings is given. This provides if-and-only-if characterizations of the set of optimal smoothings for …

On the Resolution of Ties in Fair Convex Allocation Problems

We study the emergence of indistinguishable, but structurally distinct, allocation outcomes in convex resource allocation models. Such outcomes occur when different users receive proportionally identical allocations despite differences in initial conditions, eligibility sets, or priority weights. We formalize this behavior and analyze the structural conditions under which it arises, with a focus on fairness-oriented objectives. …

Second order directional derivative of optimal solution function in parametric programming problem

In this paper, the second-order directional derivatives of the optimal value function and the optimal solution function are obtained for a strongly stable parametric problem with non-unique Lagrange multipliers. Some properties of the Lagrange multipliers are proved. It is shown that the second-order directional derivative of the optimal solution function for the parametric problem can …

A Primal Approach to Facial Reduction for SDP Relaxations of Combinatorial Optimization Problems

We propose a novel facial reduction algorithm tailored to semidefinite programming relaxations of combinatorial optimization problems with quadratic objective functions. Our method leverages the specific structure of these relaxations, particularly the availability of feasible solutions that can often be generated efficiently in practice. By incorporating such solutions into the facial reduction process, we substantially simplify …

Non-smooth stochastic gradient descent using smoothing functions

In this paper, we address stochastic optimization problems involving a composition of a non-smooth outer function and a smooth inner function, a formulation frequently encountered in machine learning and operations research. To deal with the non-differentiability of the outer function, we approximate the original non-smooth function using smoothing functions, which are continuously differentiable and approach …
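A textbook example of such a smoothing function approximates the nonsmooth \(|x|\) by \(\sqrt{x^2+\mu^2}\), which is continuously differentiable and within \(\mu\) of \(|x|\) everywhere. The sketch below (the names `smooth_abs` and `smooth_abs_grad` are illustrative, not from the paper) shows the approximation gap shrinking as the smoothing parameter \(\mu \to 0\).

```python
import numpy as np

def smooth_abs(x, mu):
    # A standard smoothing of |x|: continuously differentiable for mu > 0,
    # and 0 <= smooth_abs(x, mu) - |x| <= mu, so it converges to |x| as mu -> 0.
    return np.sqrt(x * x + mu * mu)

def smooth_abs_grad(x, mu):
    # Well-defined everywhere, including x = 0 where |x| is nondifferentiable.
    return x / np.sqrt(x * x + mu * mu)

# The worst-case gap equals mu (attained at x = 0), so gradient steps on the
# smoothed surrogate approximate subgradient steps on the original function.
xs = np.linspace(-2.0, 2.0, 401)
for mu in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(smooth_abs(xs, mu) - np.abs(xs)))
    print(f"mu={mu}: max gap = {gap:.4f}")
```

In the compositional setting of the abstract, the same idea would be applied to the outer function, trading a controlled bias of order \(\mu\) for differentiability of the composite objective.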

Cooperative vs Noncooperative Scenarios in multi-objective Potential games: the multi-portfolio context

We focus on multi-agent, multi-objective problems, particularly on those where the objectives admit a potential structure. We show that the solution to the potential multi-objective problem is always a noncooperative optimum for the multi-agent setting. Furthermore, we identify a class of problems for which every noncooperative solution can be computed via the potential problem. We …

A new insight on the prediction-correction framework with applications to first-order methods

The prediction-correction framework developed in [B. He, Splitting Contraction Algorithm for Convex Optimization, Science Press, 2025] is a simple yet powerful tool for analyzing the convergence of diverse first-order optimization methods, including the Augmented Lagrangian Method (ALM) and the Alternating Direction Method of Multipliers (ADMM). In this paper, we propose a generalized prediction-correction framework featuring …

A strongly convergent projection and contraction algorithm with extrapolations from the past

This paper introduces a projection and contraction-type algorithm featuring an extrapolation from the past, which reduces the two evaluations of the cost operator required per iteration by the original projection and contraction algorithm to a single evaluation at the current iterate. Strong convergence results for the proposed algorithm are proved in Hilbert spaces. Experimental results on testing …
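The "extrapolation from the past" idea has a well-known antecedent in Popov's scheme for variational inequalities, which reuses the operator value from the previous round so that only one fresh evaluation is needed per iteration. The sketch below illustrates that mechanism on a toy monotone (rotation) operator over a box; the function names and the step-size choice are assumptions of this illustration, and the paper's algorithm (with its contraction step and strong-convergence guarantee) is not identical to this classical scheme.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Projection onto a box, a simple closed convex feasible set.
    return np.clip(x, lo, hi)

def past_extragradient(F, x0, eta=0.1, n_iter=2000):
    """Popov-style iteration with extrapolation from the past: F(y_prev) is
    reused, so each round needs only ONE new operator evaluation, versus two
    in the classical extragradient / projection and contraction method."""
    x = x0.copy()
    Fy_prev = F(x0)
    for _ in range(n_iter):
        y = project_box(x - eta * Fy_prev)   # predictor, using the PAST value
        Fy = F(y)                            # the single fresh evaluation
        x = project_box(x - eta * Fy)        # corrector, using the new value
        Fy_prev = Fy
    return x

# Monotone rotation operator F(x) = Ax; the variational-inequality solution
# over the box is the origin. Plain gradient iteration diverges here, while
# the past-extrapolation scheme converges for small enough eta.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_final = past_extragradient(lambda x: A @ x, np.array([1.0, 1.0]))
```

Halving the number of operator evaluations matters precisely when evaluating the cost operator dominates the per-iteration cost, which is the practical motivation the abstract points to.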