Effectively managing diagnostic tests to monitor the COVID-19 outbreak in Italy

Prompted by the COVID-19 outbreak in Italy, this study aims to help tackle the spread of the disease by resorting to operations research techniques. In particular, we propose a mathematical program to model the problem of establishing how many diagnostic tests the Italian regions must perform in order to maximize the overall … Read more
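As a purely illustrative sketch of how such a test-allocation decision can be posed as a mathematical program, the toy linear program below distributes an assumed national stock of tests across three regions to maximize a prevalence-weighted coverage, subject to laboratory capacities; all data, weights and constraints here are hypothetical placeholders, not the paper's model.

# Hypothetical toy model (not the paper's formulation): allocate a fixed
# national stock of diagnostic tests across regions to maximize a
# prevalence-weighted coverage, subject to per-region lab capacities.
import numpy as np
from scipy.optimize import linprog

prevalence  = np.array([0.08, 0.03, 0.05])        # assumed regional infection rates
capacity    = np.array([40_000, 25_000, 30_000])  # assumed daily lab capacities
total_tests = 70_000                              # assumed national stock

# linprog minimizes, so negate the objective to maximize coverage.
res = linprog(c=-prevalence,
              A_ub=np.ones((1, 3)), b_ub=[total_tests],
              bounds=[(0, cap) for cap in capacity],
              method="highs")
print(res.x)  # tests assigned to each region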

Geometry of First-Order Methods and Adaptive Acceleration

First-order operator splitting methods are ubiquitous in many fields across science and engineering, such as inverse problems, signal/image processing, statistics, data science and machine learning, to name a few. In this paper, we study a geometric property of first-order methods when they are applied to non-smooth optimization problems. With the tool of “partial smoothness”, we design … Read more
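For readers unfamiliar with the class of methods being studied, the snippet below runs a basic forward-backward splitting (ISTA) iteration on a small LASSO instance; it only illustrates a generic first-order splitting scheme and does not implement the partial-smoothness-based adaptive acceleration proposed in the paper.

# Illustrative forward-backward splitting (ISTA) for a LASSO instance; this
# only sketches the class of first-order splitting methods the paper studies,
# not its partial-smoothness-based adaptive acceleration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
b = rng.standard_normal(60)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - b)                          # forward (gradient) step on the smooth part
    x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on lam*||x||_1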

Orthogonal projection algorithm for projecting onto a finitely generated cone

In this paper, an algorithm is proposed to find the nearest point of a convex cone to a given vector; it is composed of a series of orthogonal projections. Several properties of this algorithm, including the soundness of its implementation, global convergence and finite termination, are established. The proposed algorithm is more … Read more
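As a point of reference (and not the algorithm proposed in the paper), the projection of a vector onto the cone generated by the columns of a matrix A, i.e. $\{Ax : x \ge 0\}$, can be obtained from a nonnegative least-squares solve, e.g. with scipy.optimize.nnls:

# Baseline (not the paper's algorithm): the projection of b onto the finitely
# generated cone {A x : x >= 0} equals A x*, where x* solves the nonnegative
# least-squares problem min_{x >= 0} ||A x - b||_2.
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])          # generators are the columns of A
b = np.array([1.0, -2.0, 0.5])      # vector to be projected

x, _ = nnls(A, b)
proj = A @ x                        # nearest point of the cone to b
print(proj)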

A Hybrid Gradient Method for Strictly Convex Quadratic Programming

In this paper, a reliable hybrid algorithm for solving convex quadratic minimization problems is presented. At each iteration, two points are computed: first, an auxiliary point $\dot{x}_k$ is generated by performing a gradient step with an optimal steplength; then the next iterate $x_{k+1}$ is obtained through a weighted sum of $\dot{x}_k$ with the penultimate … Read more
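A minimal sketch of the two-step structure described above, applied to $f(x) = \tfrac{1}{2}x^\top Q x - c^\top x$ with $Q$ symmetric positive definite, is given below; the averaging weight beta is an arbitrary placeholder, since the abstract does not specify the weighting rule.

# Minimal sketch of the two-step iteration: a gradient step with the optimal
# (exact line-search) steplength, then a weighted sum with the penultimate
# iterate. The weight beta is an assumption, not the paper's rule.
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 2.0]])
c = np.array([1.0, -1.0])
beta = 0.9                                  # assumed weight

x_prev = np.zeros(2)                        # x_{k-1}
x = np.array([1.0, 1.0])                    # x_k
for _ in range(50):
    g = Q @ x - c                           # gradient at x_k
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = (g @ g) / (g @ Q @ g)           # optimal steplength for a quadratic
    x_dot = x - alpha * g                   # auxiliary point \dot{x}_k
    x_prev, x = x, beta * x_dot + (1 - beta) * x_prev   # weighted sum with x_{k-1}
print(x, np.linalg.solve(Q, c))             # final iterate vs. exact minimizer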

Optimal Learning for Structured Bandits

We study structured multi-armed bandits, which is the problem of online decision-making under uncertainty in the presence of structural information. In this problem, the decision-maker needs to discover the best course of action despite observing only uncertain rewards over time. The decision-maker is aware of certain structural information regarding the reward distributions and would … Read more
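To make the online decision-making setting concrete, the snippet below runs a generic UCB1 policy on a plain (unstructured) Bernoulli bandit; it is a textbook baseline, not the structured policy developed in the paper.

# Generic UCB1 baseline for a plain multi-armed bandit: pull an arm, observe a
# noisy reward, update estimates. The reward means are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.35])        # unknown to the decision-maker
counts = np.zeros(3)
means = np.zeros(3)

for t in range(1, 2001):
    if t <= 3:
        arm = t - 1                             # pull each arm once to initialize
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = rng.binomial(1, true_means[arm])   # Bernoulli reward
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
print(counts)                                   # most pulls concentrate on the best arm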

A parallel splitting ALM-based algorithm for separable convex programming

The augmented Lagrangian method (ALM) provides a benchmark for tackling the canonical convex minimization problem with linear constraints. We consider a special case where the objective function is the sum of $m$ individual subfunctions without coupled variables. A recent study reveals that the direct extension of ALM to separable convex programming problems is not necessarily … Read more
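The "direct extension" referred to above amounts to updating all blocks in parallel against the same augmented Lagrangian and then updating the multiplier; the toy two-block sketch below illustrates that scheme on a small quadratic instance (it is not the paper's algorithm, and convergence of this plain Jacobi-type extension is not guaranteed in general).

# Illustrative Jacobi-type (parallel) extension of ALM on a toy two-block
# separable problem: min 0.5||x1 - c1||^2 + 0.5||x2 - c2||^2 s.t. A1 x1 + A2 x2 = b.
import numpy as np

rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
c1, c2 = rng.standard_normal(3), rng.standard_normal(3)
b = rng.standard_normal(4)
rho = 1.0

x1, x2, lam = np.zeros(3), np.zeros(3), np.zeros(4)
for _ in range(200):
    r1 = c1 - A1.T @ (lam + rho * (A2 @ x2 - b))     # rhs of the x1 subproblem
    r2 = c2 - A2.T @ (lam + rho * (A1 @ x1 - b))     # rhs of the x2 subproblem
    x1_new = np.linalg.solve(np.eye(3) + rho * A1.T @ A1, r1)
    x2_new = np.linalg.solve(np.eye(3) + rho * A2.T @ A2, r2)
    x1, x2 = x1_new, x2_new                          # both blocks updated in parallel
    lam = lam + rho * (A1 @ x1 + A2 @ x2 - b)        # multiplier update
print(np.linalg.norm(A1 @ x1 + A2 @ x2 - b))         # constraint residual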

Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach

In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure, for which a regularization technique applied to first-order methods is known to be nearly optimal. In this paper, we report an adaptive regularization approach that attains this nearly optimal iteration complexity without … Read more
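The (non-adaptive) regularization technique alluded to above can be illustrated as follows: to reduce the gradient norm of a smooth convex $f$, one minimizes the proximally regularized function $f(x) + \tfrac{\sigma}{2}\|x - x_0\|^2$, which is strongly convex and hence easy for first-order methods; the value of sigma below is an arbitrary assumption rather than the paper's adaptive choice.

# Sketch of the classical regularization trick: run a first-order method on
# the regularized objective f(x) + (sigma/2)||x - x0||^2. Data and sigma are
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
f_grad = lambda x: A.T @ (A @ x - b)         # gradient of f(x) = 0.5||Ax - b||^2

x0 = np.zeros(20)
sigma = 0.1                                  # assumed regularization weight
L = np.linalg.norm(A, 2) ** 2 + sigma        # Lipschitz constant of the regularized gradient

x = x0.copy()
for _ in range(2000):
    g = f_grad(x) + sigma * (x - x0)         # gradient of the regularized objective
    x -= g / L
print(np.linalg.norm(f_grad(x)))             # gradient norm of the original f at the output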

A Limiting Analysis on Regularization of Singular SDP and its Implication to Infeasible Interior-point Algorithms

We consider primal-dual pairs of semidefinite programs and assume that they are ill-posed, i.e., both primal and dual are either weakly feasible or weakly infeasible. Under such circumstances, strong duality may break down and the primal and dual might have a nonzero duality gap. Nevertheless, there are arbitrarily small perturbations to the problem data which … Read more

A New Preconditioning Approach for an Interior Point-Proximal Method of Multipliers for Linear and Convex Quadratic Programming

In this paper, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized interior point method. Application of this method gives rise to a … Read more
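For orientation, a single regularized Newton (augmented) system of the kind that arises in such a primal-dual regularized interior point iteration for an LP $\min c^\top x$ s.t. $Ax = b$, $x \ge 0$ might be assembled as below; the data, iterates and regularization parameters are placeholders, and this is not the preconditioning approach proposed in the paper.

# Hedged sketch: one regularized augmented system of an interior-point /
# proximal-method-of-multipliers iteration for an LP. rho and delta are the
# primal and dual regularization parameters contributed by the proximal terms.
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = np.ones(n)                     # current primal iterate (strictly positive)
z = np.ones(n)                     # current dual slack iterate
rho, delta = 1e-4, 1e-4            # assumed regularization parameters

Theta = x / z                      # diagonal scaling X Z^{-1}
K = np.block([[-np.diag(1.0 / Theta) - rho * np.eye(n), A.T],
              [A,                                       delta * np.eye(m)]])
rhs = rng.standard_normal(n + m)   # stand-in for the actual Newton right-hand side
step = np.linalg.solve(K, rhs)     # (dx, dy) direction; K is quasi-definite, hence nonsingular
print(step.shape)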

A subspace-accelerated split Bregman method for sparse data recovery with joint l1-type regularizers

We propose a subspace-accelerated Bregman method for the linearly constrained minimization of functions of the form $f(u) + \tau_1 \|u\|_1 + \tau_2 \|Du\|_1$, where $f$ is a smooth convex function and $D$ represents a linear operator, e.g. a finite difference operator, as in anisotropic Total Variation and fused-lasso regularizations. Problems of this type arise in a wide … Read more
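A minimal split Bregman sketch for this composite model (without the paper's subspace acceleration) alternates a quadratic u-update with soft-thresholding steps for the two $\ell_1$ blocks; the smooth term, the penalty parameter mu and the data below are assumptions made only for illustration.

# Minimal split Bregman sketch for min 0.5||Au - y||^2 + tau1||u||_1 + tau2||Du||_1,
# with D a 1-D forward-difference operator; mu is an assumed penalty parameter.
import numpy as np

rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((30, n))
y = rng.standard_normal(30)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # forward differences, shape (n-1, n)
tau1, tau2, mu = 0.1, 0.1, 1.0

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

u = np.zeros(n)
d1 = np.zeros(n); b1 = np.zeros(n)
d2 = np.zeros(n - 1); b2 = np.zeros(n - 1)
H = A.T @ A + mu * np.eye(n) + mu * D.T @ D    # u-subproblem matrix (fixed across iterations)
for _ in range(200):
    rhs = A.T @ y + mu * (d1 - b1) + mu * D.T @ (d2 - b2)
    u = np.linalg.solve(H, rhs)
    d1 = soft(u + b1, tau1 / mu)               # prox step for tau1||u||_1
    d2 = soft(D @ u + b2, tau2 / mu)           # prox step for tau2||Du||_1
    b1 += u - d1                               # Bregman (dual) updates
    b2 += D @ u - d2
print(np.count_nonzero(np.abs(u) > 1e-6))      # sparsity of the recovered u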