A FISTA-type first order algorithm on composite optimization problems that is adaptable to the convex situation

In this note, we propose a FISTA-type first-order algorithm, VAR-FISTA, to solve a composite optimization problem. A distinctive feature of VAR-FISTA is its ability to exploit the convexity of the function in the problem, resulting in an improved iteration complexity when the function is convex compared to when it is nonconvex. The iteration complexity …
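
Since the abstract breaks off here, the following is only a generic sketch of the classical FISTA scheme that a FISTA-type method builds on, not the paper's VAR-FISTA: minimize f(x) + h(x) with f smooth (L-Lipschitz gradient) and h admitting a cheap proximal operator, illustrated on a made-up LASSO instance.

```python
import numpy as np

def fista(grad_f, prox_h, x0, L, n_iters=500):
    """Generic FISTA for min f(x) + h(x): f smooth with an L-Lipschitz
    gradient, h with an inexpensive proximal operator."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(n_iters):
        x_next = prox_h(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustration: LASSO, f(x) = 0.5*||Ax - b||^2, h(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                              # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)  # soft-thresholding
x_hat = fista(grad_f, prox_h, np.zeros(100), L)
```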

A new binary programming formulation and social choice property for Kemeny rank aggregation

Rank aggregation is widely used in group decision-making and many other applications where it is of interest to consolidate heterogeneous ordered lists. Oftentimes, these rankings may involve a large number of alternatives, contain ties, and/or be incomplete, all of which complicate the use of robust aggregation methods. In particular, these characteristics have limited the applicability …
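
As background on the objective being aggregated: a Kemeny consensus minimizes the summed Kendall-tau distance to the input rankings. The brute-force baseline below follows that standard definition only; it handles small, complete rankings without ties, and is not the paper's new binary programming formulation.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of pairwise disagreements between two complete rankings
    (each ranking is a tuple of alternatives, best first)."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny(rankings):
    """Exhaustive Kemeny aggregation: the permutation minimizing the
    summed Kendall-tau distance to all inputs. Exponential in the
    number of alternatives -- a baseline, not a scalable method."""
    return min(permutations(rankings[0]),
               key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(kemeny(votes))  # ('a', 'b', 'c')
```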

A disjunctive cut strengthening technique for convex MINLP

Generating polyhedral outer approximations and solving mixed-integer linear relaxations remains one of the main approaches for solving convex mixed-integer nonlinear programming (MINLP) problems. There are several algorithms based on this concept, and their efficiency is greatly affected by the tightness of the outer approximation. In this paper, we present a new framework for strengthening cutting …
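
As background, the polyhedral outer approximations mentioned above are typically built from gradient-based linearizations of the convex nonlinear constraints; by convexity, each linearization is a valid cut. A minimal sketch of such a cut (the paper's disjunctive strengthening itself is not reproduced here):

```python
import numpy as np

def oa_cut(g, grad_g, x_bar):
    """Outer approximation cut for a convex constraint g(x) <= 0,
    linearized at x_bar:
        g(x_bar) + grad_g(x_bar)^T (x - x_bar) <= 0.
    Returns (a, beta) describing the valid inequality a^T x <= beta."""
    a = grad_g(x_bar)
    beta = a @ x_bar - g(x_bar)
    return a, beta

# Example: g(x) = x1^2 + x2^2 - 4 (a disk of radius 2), linearized at (2, 1).
g = lambda x: x[0] ** 2 + x[1] ** 2 - 4.0
grad_g = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
a, beta = oa_cut(g, grad_g, np.array([2.0, 1.0]))
# Cut: 4*x1 + 2*x2 <= 9, valid for every feasible x by convexity of g.
```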

Split Bregman iteration for multi-period mean variance portfolio optimization

This paper investigates the problem of defining an optimal long-term investment strategy, where the investor can exit the investment before maturity without severe loss. Our setting is a multi-period one, where the aim is to make a plan for allocating all of the wealth among the n assets within a time horizon of m periods. In addition, …
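
The abstract is cut off before the method is stated, so the sketch below shows only the generic split Bregman iteration on an l1-regularized least-squares model; it illustrates the iteration type, not the paper's portfolio formulation, and all names and data are made up.

```python
import numpy as np

def split_bregman_l1(A, f, lam, mu=1.0, n_iters=200):
    """Split Bregman iteration for min_x 0.5*||Ax - f||^2 + lam*||x||_1,
    introducing the splitting d = x and enforcing it via Bregman updates."""
    n = A.shape[1]
    x = d = b = np.zeros(n)
    M = A.T @ A + mu * np.eye(n)          # x-update is a ridge-type linear solve
    Atf = A.T @ f
    shrink = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    for _ in range(n_iters):
        x = np.linalg.solve(M, Atf + mu * (d - b))  # quadratic subproblem
        d = shrink(x + b, lam / mu)                 # l1 subproblem (soft-thresholding)
        b = b + x - d                               # Bregman variable update
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
f = rng.standard_normal(30)
x_hat = split_bregman_l1(A, f, lam=0.2)
```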

Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second uses second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose …
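
For context, the Newton direction in the convex multiobjective setting (the method the first approach takes its inspiration from) is obtained from a min-max subproblem over the objectives f_1, ..., f_m; schematically, with x the current iterate,

```latex
\[
  d(x) \;=\; \operatorname*{arg\,min}_{d \in \mathbb{R}^n}\;
  \max_{i = 1, \dots, m}\;
  \nabla f_i(x)^{\top} d \;+\; \tfrac{1}{2}\, d^{\top} \nabla^2 f_i(x)\, d,
\]
```

followed by a line search along d(x); d(x) = 0 exactly at Pareto critical points. How the two proposed methods adapt this subproblem to the nonconvex case is what the paper develops.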

Decentralized Learning with Lazy and Approximate Dual Gradients

This paper develops algorithms for decentralized machine learning over a network, where data are distributed, computation is localized, and communication is restricted between neighbors. A line of recent research in this area focuses on improving both computation and communication complexities. The methods SSDA and MSDA (Scaman et al., 2017) have optimal communication complexity when the objective is smooth …
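
As a point of reference for the setting described (local data, neighbor-only communication), here is a sketch of plain decentralized gradient descent with a doubly stochastic mixing matrix W; it is a classical baseline, not the lazy/approximate dual-gradient method the paper develops, and all data below are made up.

```python
import numpy as np

def decentralized_gd(grads, W, X0, alpha=0.1, n_iters=300):
    """Decentralized gradient descent: node i holds local objective f_i;
    X stacks one iterate per row.  Each round mixes iterates with the
    doubly stochastic matrix W (neighbor communication) and takes a
    local gradient step:  X <- W @ X - alpha * [grad f_i(x_i)]_i."""
    X = X0.copy()
    for _ in range(n_iters):
        G = np.stack([g(x) for g, x in zip(grads, X)])
        X = W @ X - alpha * G
    return X

# Toy network: 3 nodes, f_i(x) = 0.5*||x - c_i||^2, so the global
# optimum of sum_i f_i is the average of the c_i, here (0, 0).
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
grads = [lambda x, ci=ci: x - ci for ci in c]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = decentralized_gd(grads, W, np.zeros((3, 2)))
print(X.mean(axis=0))  # close to (0, 0)
```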

Large Deviation Bounds for Markov Chain Sample Average Approximation via Weak Convergence

A common approach to solving stochastic optimization problems with expectations is to replace the expectations by their sample averages. Large-sample asymptotic properties of this approximation are well studied when the sample is i.i.d. In many cases, however, i.i.d. samples are not readily available. Instead, one can generate a Harris recurrent Markov chain …
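
A minimal illustration of the setup, assuming nothing from the paper beyond the definitions: the expectation in the objective is approximated by an average along a Markov chain rather than an i.i.d. sample. Here the chain is AR(1), whose stationary law N(0, 1/(1 - rho^2)) is known in closed form, so the estimate can be checked.

```python
import numpy as np

def mc_saa(F, x, n_samples, rho=0.5, burn_in=500, seed=0):
    """Sample average approximation of E_pi[F(x, xi)] where the xi are
    not i.i.d. but generated along a Markov chain: the AR(1) recursion
    xi_{k+1} = rho*xi_k + e_k with standard normal noise e_k, whose
    stationary distribution is N(0, 1/(1 - rho^2))."""
    rng = np.random.default_rng(seed)
    xi, total = 0.0, 0.0
    for k in range(burn_in + n_samples):
        xi = rho * xi + rng.standard_normal()
        if k >= burn_in:                      # discard warm-up transient
            total += F(x, xi)
    return total / n_samples

# Check: F(x, xi) = (x - xi)^2 gives E[F] = x^2 + Var(xi) = 1 + 4/3.
F = lambda x, xi: (x - xi) ** 2
print(mc_saa(F, x=1.0, n_samples=200_000), 1.0 + 4.0 / 3.0)
```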

An Improved Analysis of Stochastic Gradient Descent with Momentum

SGD with momentum (SGDM) has been widely applied in many machine learning tasks, often with dynamic stepsizes and momentum weights tuned in a stagewise manner. Despite its empirical advantage over SGD, the role of momentum is still unclear in general, since previous analyses of SGDM either provide worse convergence bounds …
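
For reference, the SGDM iteration the abstract refers to, in its common heavy-ball form and with the stagewise stepsize schedule mentioned above, looks as follows; the tuning values are made up for illustration, and none of this reproduces the paper's analysis.

```python
import numpy as np

def sgdm(stoch_grad, x0, stages, beta=0.9, seed=0):
    """SGD with heavy-ball momentum, run in stages: the stepsize is
    constant within a stage and cut between stages.
        v <- beta * v + g(x);   x <- x - alpha * v."""
    rng = np.random.default_rng(seed)
    x, v = x0.copy(), np.zeros_like(x0)
    for alpha, n_steps in stages:            # e.g. [(0.1, 500), (0.01, 500)]
        for _ in range(n_steps):
            v = beta * v + stoch_grad(x, rng)
            x = x - alpha * v
    return x

# Toy problem: minimize f(x) = 0.5*||x||^2 from noisy gradient estimates.
stoch_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_hat = sgdm(stoch_grad, np.ones(5), stages=[(0.1, 500), (0.01, 500)])
```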

A Note on the Integrality Gap of Cutting and Skiving Stock Instances: Why 4/3 is an Upper Bound for the Divisible Case?

In this paper, we consider the (additive integrality) gap of the cutting stock problem (CSP) and the skiving stock problem (SSP). Formally, the gap is defined as the difference between the optimal values of the ILP and its LP relaxation. For both the CSP and the SSP, this gap is known to be bounded by …
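
Spelled out, with notation assumed here rather than taken from the paper: for an instance E, let z_ILP(E) and z_LP(E) denote the optimal values of the integer program and of its LP relaxation. The quantity under study and the bound announced in the title then read

```latex
\[
  \Delta(E) \;=\; \bigl|\, z_{\mathrm{ILP}}(E) - z_{\mathrm{LP}}(E) \,\bigr|,
  \qquad
  \Delta(E) \;\le\; \tfrac{4}{3}
  \quad \text{for every divisible instance } E .
\]
```

(The absolute value covers both sign conventions at once, since the CSP is a minimization and the SSP a maximization problem.)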

On the weak second-order optimality condition for nonlinear semidefinite and second-order cone programming

Second-order necessary optimality conditions for nonlinear conic programming problems that depend on a single Lagrange multiplier are usually built under nondegeneracy and strict complementarity. In this paper we establish such a condition for two classes of nonlinear conic problems, namely semidefinite and second-order cone programming, assuming Robinson’s constraint qualification and a generalized form …
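
Schematically, and with notation assumed rather than taken from the paper, a second-order necessary condition that depends on a single multiplier has the form

```latex
\[
  d^{\top} \nabla^{2}_{xx} L(\bar{x}, \bar{\lambda})\, d
  \;+\; \sigma(\bar{\lambda}, d) \;\ge\; 0
  \qquad \text{for all } d \in C(\bar{x}),
\]
```

where L is the Lagrangian, C(\bar{x}) is the critical cone at the candidate point \bar{x}, and \sigma is the extra curvature ("sigma") term contributed by the nonpolyhedral cone (positive semidefinite or second-order). The paper establishes a condition of this type assuming Robinson's constraint qualification rather than nondegeneracy and strict complementarity.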