Distributed Task Assignment in a Swarm of UAVs

We consider the problem of distributed task assignment in a swarm of Unmanned Aerial Vehicles (UAVs), where heterogeneous agents with different capabilities need to work in teams to execute tasks. We focus on the case where communication between UAVs is costly or dangerous and should be limited or avoided, while individual UAVs have uncertain … Read more
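
To make the setting concrete, here is a minimal sketch of a greedy, cost-based assignment baseline. It is purely illustrative and not the paper's method: the cost matrix, team size, and centralized selection are assumptions, and the distributed, communication-limited setting studied in the paper would replace the global choice below with local decisions under uncertainty.

```python
import numpy as np

# Illustrative greedy baseline (an assumption, not the paper's algorithm):
# assign teams of UAVs to tasks using a hypothetical capability-based cost matrix.
rng = np.random.default_rng(0)
n_uavs, n_tasks, team_size = 6, 3, 2
cost = rng.uniform(1.0, 10.0, size=(n_uavs, n_tasks))  # cost[i, j]: UAV i on task j

assignment = {}
available = set(range(n_uavs))
for task in range(n_tasks):
    # pick the cheapest available UAVs for this task (a centralized choice;
    # a distributed version would rely only on local information)
    team = sorted(available, key=lambda i: cost[i, task])[:team_size]
    assignment[task] = team
    available -= set(team)

print(assignment)  # task -> list of assigned UAVs
```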

Set System Approximation for Binary Integer Programs: Reformulations and Applications

Covering and elimination inequalities are central to combinatorial optimization, yet their role has largely been studied in problem-specific settings or via no-good cuts. This paper introduces a unified perspective that treats these inequalities as primitives for set system approximation in binary integer programs (BIPs). We show that arbitrary set systems admit tight inner and outer … Read more
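
As background on the two primitives named in the abstract (the paper's actual reformulations are not shown in the excerpt), an elimination, or no-good, cut removes exactly one fixed binary point (written \bar{x} below), while a covering inequality forces at least one index of a set C to be selected:

```latex
% Standard forms of the two inequality families; notation is generic.
\[
  \sum_{i\,:\,\bar{x}_i = 0} x_i \;+\; \sum_{i\,:\,\bar{x}_i = 1} (1 - x_i) \;\ge\; 1
  \quad\text{(no-good cut)},
  \qquad
  \sum_{i \in C} x_i \;\ge\; 1
  \quad\text{(covering inequality)}.
\]
```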

On Averaging and Extrapolation for Gradient Descent

This work considers the effect of averaging, and more generally extrapolation, of the iterates of gradient descent in smooth convex optimization. After running the method, rather than reporting the final iterate, one can report either a convex combination of the iterates (averaging) or a generic combination of the iterates (extrapolation). For several common stepsize sequences, … Read more
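
A minimal sketch of the two reporting rules being compared, on an assumed quadratic objective; the specific stepsize sequences and extrapolation weights analyzed in the paper are not reproduced here.

```python
import numpy as np

# Gradient descent on a smooth convex quadratic f(x) = 0.5 x^T A x - b^T x,
# reporting the last iterate vs. a uniform average of all iterates.
A = np.diag([3.0, 1.0])          # hypothetical problem data
b = np.array([1.0, -2.0])
L = np.linalg.eigvalsh(A).max()  # Lipschitz constant of the gradient

x = np.zeros(2)
iterates = [x.copy()]
for _ in range(50):
    x = x - (1.0 / L) * (A @ x - b)   # constant stepsize 1/L
    iterates.append(x.copy())

last = iterates[-1]                   # final iterate
avg = np.mean(iterates, axis=0)       # averaging: a convex combination of iterates
# Extrapolation would instead report sum_k c_k * iterates[k] for generic weights
# c_k summing to one (possibly negative), which is what the paper studies.
```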

On Coupling Constraints in Linear Bilevel Optimization

It is well known that coupling constraints in linear bilevel optimization can lead to disconnected feasible sets, which is not possible without coupling constraints. However, linear bilevel problems with and without coupling constraints do not differ in their complexity-theoretic hardness. In this note, we prove that, although there is a clear difference between these … Read more
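
A toy instance (constructed here for illustration, not taken from the note) shows how a single coupling constraint, i.e., an upper-level constraint involving the lower-level variable, can disconnect the feasible set:

```latex
% Toy linear bilevel problem; y >= 1/4 is the coupling constraint.
\[
\begin{aligned}
  \min_{x,\,y}\quad & x\\
  \text{s.t.}\quad  & 0 \le x \le 2, \qquad y \ge \tfrac14 \quad\text{(coupling constraint)},\\
                    & y \in \arg\min_{y'} \bigl\{\, y' \,:\, y' \ge 1 - x,\ \ y' \ge x - 1 \,\bigr\}.
\end{aligned}
\]
% The lower level returns y(x) = |x - 1|. Without the coupling constraint, the bilevel
% feasible set {(x, |x - 1|) : 0 <= x <= 2} is connected; with it, the feasible x-values
% split into the disjoint intervals [0, 3/4] and [5/4, 2].
```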

Problem-Parameter-Free Decentralized Nonconvex Stochastic Optimization

Existing decentralized algorithms usually require knowledge of problem parameters to update local iterates. For example, hyperparameters such as the learning rate typically require knowledge of the Lipschitz constant of the global gradient or of topological information about the communication network, neither of which is usually accessible in practice. In this paper, we propose D-NASA, the first algorithm … Read more
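
For orientation only, here is a sketch of the generic decentralized stochastic gradient template that such methods build on; this is not D-NASA, whose update and parameter-free mechanism are not shown in the excerpt. The mixing matrix W, the local oracle, and the stepsize below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # hypothetical doubly stochastic mixing matrix
X = rng.standard_normal((n_agents, dim))           # one local iterate per agent (rows)

def local_stochastic_grad(i, x):
    # placeholder local stochastic gradient of agent i's objective (a noisy quadratic)
    return x + 0.1 * rng.standard_normal(dim)

alpha = 0.05  # fixed stepsize; tuning it without the Lipschitz constant or the
              # network's spectral properties is exactly the difficulty the paper targets
for _ in range(100):
    G = np.vstack([local_stochastic_grad(i, X[i]) for i in range(n_agents)])
    X = W @ X - alpha * G   # gossip-average with neighbors, then take a gradient step
```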

A new proximal gradient algorithm for solving mixed variational inequality problems with a novel explicit stepsize and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces based on the proximal gradient method. Our new algorithm uses a novel explicit stepsize, which is proven to be increasing and to converge to a positive value. This property plays an important role in improving the speed of the algorithm. … Read more
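
For context, the mixed variational inequality and the generic proximal gradient iteration used to solve it are recalled below; the paper's novel explicit, increasing stepsize rule is not reproduced in the excerpt, so the stepsize is left abstract.

```latex
% MVI: find x* such that <F(x*), y - x*> + g(y) - g(x*) >= 0 for all y.
% Generic proximal-gradient step with stepsize lambda_k:
\[
  x_{k+1} \;=\; \operatorname{prox}_{\lambda_k g}\!\bigl(x_k - \lambda_k F(x_k)\bigr),
  \qquad
  \operatorname{prox}_{\lambda g}(z) \;=\; \arg\min_{u}\Bigl\{\, g(u) + \tfrac{1}{2\lambda}\|u - z\|^2 \,\Bigr\}.
\]
```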

Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

This paper proposes a stochastic proximal point method to solve a stochastic convex composite optimization problem. High-probability results in stochastic optimization typically hinge on restrictive assumptions on the stochastic gradient noise, for example, sub-Gaussian distributions. Assuming only weak conditions such as bounded variance of the stochastic gradient, this paper establishes a low sample complexity … Read more
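
A minimal sketch of a stochastic proximal point step, assuming a single squared-loss sample so that the proximal subproblem has a closed form; the paper's composite setting and its parameter choices are not shown in the excerpt.

```python
import numpy as np

# One stochastic proximal point step for least squares:
#   x_{k+1} = argmin_x 0.5*(a^T x - b)^2 + (1/(2*gamma)) * ||x - x_k||^2,
# whose closed form below follows from the Sherman-Morrison identity.
def spp_step(x_k, a, b, gamma):
    residual = a @ x_k - b
    return x_k - gamma * a * residual / (1.0 + gamma * np.dot(a, a))

rng = np.random.default_rng(1)
dim = 5
x_true = rng.standard_normal(dim)
x = np.zeros(dim)
for k in range(200):
    a = rng.standard_normal(dim)                    # one random sample (a, b)
    b = a @ x_true + 0.01 * rng.standard_normal()
    x = spp_step(x, a, b, gamma=1.0 / (k + 1))      # diminishing proximal parameter
```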

T-semidefinite programming relaxation with third-order tensors for constrained polynomial optimization

We study T-semidefinite programming (T-SDP) relaxations for constrained polynomial optimization problems (POPs). The T-SDP relaxation for unconstrained POPs was introduced by Zheng, Huang, and Hu in 2022. In this work, we propose a T-SDP relaxation for POPs with polynomial inequality constraints and show that the resulting T-SDP relaxation, formulated with third-order tensors, can be transformed into … Read more
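
As standard background on the tensor algebra behind T-SDP (included as an assumption about the setting; the paper's own construction is not shown in the excerpt), the t-product of third-order tensors and the usual notion of T-positive semidefiniteness are:

```latex
% Kilmer-Martin t-product; notation may differ from the paper's.
\[
  \mathcal{A} * \mathcal{B} \;=\; \operatorname{fold}\!\bigl(\operatorname{bcirc}(\mathcal{A})\,\operatorname{unfold}(\mathcal{B})\bigr),
  \qquad
  \mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3},\ \ \mathcal{B} \in \mathbb{R}^{n_2 \times \ell \times n_3},
\]
% where bcirc(A) is the block-circulant matrix built from the frontal slices of A and
% unfold(B) stacks the frontal slices of B into a block column. A symmetric third-order
% tensor is then commonly called T-positive semidefinite when bcirc(A) is positive
% semidefinite, which is the cone constraint appearing in T-SDP.
```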

Riemannian trust-region methods for strict saddle functions with complexity guarantees

The difficulty of minimizing a nonconvex function is in part explained by the presence of saddle points, which slow down optimization algorithms and degrade worst-case complexity guarantees. However, many nonconvex problems of interest possess structure that is favorable for optimization, in the sense that saddle points can be escaped efficiently by appropriate algorithms. This strict saddle … Read more
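
For readers unfamiliar with the term, one common Euclidean formulation of the (alpha, beta, gamma)-strict saddle property is sketched below as background; the Riemannian analogue used in the paper is not shown in the excerpt.

```latex
% Strict saddle sketch: every point has a large gradient, a direction of sufficiently
% negative curvature, or lies in a strongly convex neighborhood of a local minimizer.
\[
  \forall x:\quad
  \|\nabla f(x)\| \ge \alpha
  \;\;\text{or}\;\;
  \lambda_{\min}\!\bigl(\nabla^2 f(x)\bigr) \le -\beta
  \;\;\text{or}\;\;
  x \in \mathcal{N}, \ \text{with } f \ \gamma\text{-strongly convex on a neighborhood } \mathcal{N} \text{ of a local minimizer.}
\]
```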