Alternating Iteratively Reweighted \(\ell_1\) and Subspace Newton Algorithms for Nonconvex Sparse Optimization

This paper presents a novel hybrid algorithm for minimizing the sum of a continuously differentiable loss function and a nonsmooth, possibly nonconvex, sparsity‑promoting regularizer. The proposed method adaptively switches between solving a reweighted \(\ell_1\)-regularized subproblem and performing an inexact subspace Newton step. The reweighted \(\ell_1\)-subproblem admits an efficient closed-form solution via the soft-thresholding operator, thereby …
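The soft-thresholding operator named in this abstract is standard. As an illustration only (not the paper's full subproblem solver), a minimal sketch of plain and weighted soft-thresholding, where the weighted form solves the reweighted \(\ell_1\) subproblem \(\min_x \sum_i w_i |x_i| + \tfrac{1}{2\tau}\|x - z\|^2\) entrywise, might look like:

```python
import numpy as np

def soft_threshold(z, tau):
    # Prox of tau * ||.||_1: shrink each entry of z toward zero by tau.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def weighted_soft_threshold(z, tau, w):
    # Closed-form solution of the reweighted l1 subproblem
    #   argmin_x  sum_i w_i |x_i| + (1 / (2 * tau)) * ||x - z||^2,
    # obtained entrywise with per-coordinate threshold tau * w_i.
    return np.sign(z) * np.maximum(np.abs(z) - tau * w, 0.0)
```

With `w = 1` the weighted form reduces to ordinary soft-thresholding, which is why the subproblem is as cheap as a single elementwise pass.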

Polyconvex double well functions

We investigate polyconvexity of the double well function $f(X) := |X-X_1|^2|X-X_2|^2$ for given matrices $X_1, X_2 \in \mathbb{R}^{n \times n}$. Such functions are fundamental in the modeling of phase transitions in materials, but their non-convex nature presents challenges for the analysis of variational problems. We prove that $f$ is polyconvex if and only if the …

A First Order Algorithm on an Optimization Problem with Improved Convergence when Problem is Convex

We propose a first-order algorithm, a modified version of FISTA, to solve an optimization problem whose objective function is the sum of a possibly nonconvex function with Lipschitz continuous gradient and a convex function which can be nonsmooth. The algorithm is shown to have an iteration complexity of \(\mathcal{O}(\epsilon^{-2})\) to find an …
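For readers unfamiliar with the baseline being modified, a sketch of standard FISTA (not the authors' modified variant) is shown below, applied to a small lasso instance; the problem data (`A`, `b`, `lam`) are illustrative assumptions:

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iters=200):
    """Standard FISTA: gradient step on the smooth part f, prox step on the
    convex nonsmooth part g, with Nesterov momentum.
    step should be <= 1/L for an L-Lipschitz-gradient f."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        x_new = prox_g(y - step * grad_f(y), step)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Illustrative lasso instance: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
grad = lambda x: A.T @ (A @ x - b)
prox = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s * lam, 0.0)
x_hat = fista(grad, prox, np.zeros(5), 1.0 / L)
```

The paper's contribution concerns a modification of this scheme with improved guarantees when the smooth part is merely Lipschitz-gradient (possibly nonconvex); the sketch above is the unmodified convex baseline.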

Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications

In Online Convex Optimization (OCO), when the stochastic gradient has a finite variance, many algorithms provably work and guarantee a sublinear regret. However, limited results are known if the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite \(\mathsf{p}\)-th central moment for some \(\mathsf{p}\in\left(1,2\right]\). Motivated by this gap, this work examines …

The Optimal Smoothings of Sublinear Functions and Convex Cones

This paper considers the problem of smoothing convex functions and sets, seeking the nearest smooth convex function or set to a given one. For convex cones and sublinear functions, a full characterization of the set of all optimal smoothings is given. These provide if and only if characterizations of the set of optimal smoothings for …

On the Resolution of Ties in Fair Convex Allocation Problems

We study the emergence of indistinguishable, but structurally distinct, allocation outcomes in convex resource allocation models. Such outcomes occur when different users receive proportionally identical allocations despite differences in initial conditions, eligibility sets, or priority weights. We formalize this behavior and analyze the structural conditions under which it arises, with a focus on fairness-oriented objectives. …

Second order directional derivative of optimal solution function in parametric programming problem

In this paper, the second-order directional derivatives of the optimal value function and the optimal solution function are obtained for a strongly stable parametric problem with non-unique Lagrange multipliers. Some properties of the Lagrange multipliers are proved. It is justified that the second-order directional derivative of the optimal solution function for the parametric problem can …

A Primal Approach to Facial Reduction for SDP Relaxations of Combinatorial Optimization Problems

We propose a novel facial reduction algorithm tailored to semidefinite programming relaxations of combinatorial optimization problems with quadratic objective functions. Our method leverages the specific structure of these relaxations, particularly the availability of feasible solutions that can often be generated efficiently in practice. By incorporating such solutions into the facial reduction process, we substantially simplify …

Non-smooth stochastic gradient descent using smoothing functions

In this paper, we address stochastic optimization problems involving a composition of a non-smooth outer function and a smooth inner function, a formulation frequently encountered in machine learning and operations research. To deal with the non-differentiability of the outer function, we approximate the original non-smooth function using smoothing functions, which are continuously differentiable and approach …
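The abstract does not specify which smoothing functions are used; a classical example (an assumption here, not necessarily the authors' choice) is the Huber-type smoothing of the absolute value, which is continuously differentiable and within \(\mu/2\) of \(|t|\) everywhere:

```python
import numpy as np

def huber_smooth(t, mu):
    """Huber-type smoothing of |t| with parameter mu > 0.
    Quadratic near zero, linear in the tails; C^1 everywhere, and
    satisfies 0 <= |t| - huber_smooth(t, mu) <= mu/2."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= mu, t * t / (2 * mu), np.abs(t) - mu / 2)
```

Driving \(\mu \to 0\) recovers the original non-smooth function, which is the usual mechanism by which smoothing-based stochastic gradient schemes approach stationarity of the original problem.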

Cooperative vs Noncooperative Scenarios in multi-objective Potential games: the multi-portfolio context

We focus on multi-agent, multi-objective problems, particularly on those where the objectives admit a potential structure. We show that the solution to the potential multi-objective problem is always a noncooperative optimum for the multi-agent setting. Furthermore, we identify a class of problems for which every noncooperative solution can be computed via the potential problem. We …