FISTA and Extensions – Review and New Insights

The purpose of this technical report is to review the main properties of an accelerated composite gradient (ACG) method commonly referred to as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). In addition, we state a version of FISTA for solving both convex and strongly convex composite minimization problems and derive its iteration complexities for generating iterates …
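
For orientation, here is a minimal sketch of the classical FISTA iteration for $\min_x f(x) + h(x)$, assuming a gradient oracle grad_f for the smooth part, a proximal map prox_h for the nonsmooth part, and a Lipschitz constant L of the gradient (all names are illustrative, not taken from the report):

    import numpy as np

    def fista(grad_f, prox_h, L, x0, iters=500):
        # Classical FISTA: a proximal gradient step at the extrapolated
        # point y, followed by a momentum update of y (Beck & Teboulle).
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_next = prox_h(y - grad_f(y) / L, 1.0 / L)        # forward-backward step
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
            x, t = x_next, t_next
        return x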

A new stopping criterion for Krylov solvers applied in Interior Point Methods

This paper presents a surprising result with possibly far-reaching consequences for any optimization technique that relies on Krylov subspace methods to solve the underlying linear systems. The advantages of the new technique are illustrated in the context of Interior Point Methods (IPMs). When an iterative method is …
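
As background only (the paper's new criterion is not reproduced here), the sketch below shows conjugate gradients with the conventional relative-residual stopping test, the kind of rule a new criterion would replace or refine; all names are illustrative:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        # Standard CG for symmetric positive definite A, stopped by the
        # conventional test ||r_k|| <= tol * ||b|| (not the paper's rule).
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            if np.sqrt(rs) <= tol * np.linalg.norm(b):
                break                          # relative-residual test
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x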

Dealing with inequality constraints in large-scale semidefinite relaxations for graph coloring and maximum clique problems

Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods. However, when the dimension of the problem grows large, interior point methods become impractical in terms of both computational time and memory requirements. Certain first-order methods, such as Alternating Direction Methods of Multipliers (ADMMs), have been established as suitable algorithms for dealing with large-scale …
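
For orientation, a minimal sketch of a generic ADMM iteration for $\min f(x) + g(z)$ subject to $x = z$, assuming proximal maps prox_f and prox_g are available (names are illustrative; the paper's splitting for SDP relaxations with inequality constraints is more specialized):

    import numpy as np

    def admm(prox_f, prox_g, x0, rho=1.0, iters=200):
        # Generic ADMM in scaled dual form: alternate proximal steps on f
        # and g, then a dual ascent step on the consensus constraint x = z.
        x = x0.copy()
        z = x0.copy()
        u = np.zeros_like(x0)                  # scaled dual variable
        for _ in range(iters):
            x = prox_f(z - u, 1.0 / rho)       # x-update
            z = prox_g(x + u, 1.0 / rho)       # z-update
            u = u + x - z                      # dual update
        return z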

Frank-Wolfe and friends: a journey into projection-free first-order optimization methods

Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank-Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating the versatility …
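
For orientation, the basic Frank-Wolfe (conditional gradient) iteration, assuming a linear minimization oracle lmo(g) that returns $\operatorname{argmin}_{s \in C} \langle g, s \rangle$ over the feasible set $C$ (names are illustrative):

    def frank_wolfe(grad_f, lmo, x0, iters=200):
        # Projection-free: each step solves a linear subproblem over C and
        # moves toward its solution, so the iterate stays feasible.
        x = x0.copy()
        for k in range(iters):
            s = lmo(grad_f(x))            # linear minimization oracle call
            gamma = 2.0 / (k + 2.0)       # standard open-loop step size
            x = x + gamma * (s - x)       # convex combination keeps feasibility
        return x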

Analysis of the Frank-Wolfe Method for Convex Composite Optimization involving a Logarithmically-Homogeneous Barrier

We present and analyze a new generalized Frank-Wolfe method for the composite optimization problem (P): $F^* := \min_x f(Ax) + h(x)$, where $f$ is a $\theta$-logarithmically-homogeneous self-concordant barrier and the function $h$ has bounded domain but is possibly non-smooth. We show that our generalized Frank-Wolfe method requires $O((\mathrm{Gap}_0 + \theta + \mathrm{Var}_h)\ln(\delta_0) + (\theta + \mathrm{Var}_h)^2/\epsilon)$ …
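
As background (a standard definition, not taken from the abstract): a self-concordant barrier $f$ on the interior of a cone $K$ is $\theta$-logarithmically homogeneous when
\[
  f(tx) = f(x) - \theta \ln t \qquad \text{for all } x \in \operatorname{int} K, \; t > 0,
\]
a canonical example being $f(X) = -\ln\det X$ on the cone of positive definite matrices, for which $\theta = n$.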

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy expected in traditional LP applications. In addition, it scales to very large problems because its core operation is matrix-vector multiplication. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized …
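
For orientation, a minimal sketch of vanilla PDHG applied to the LP $\min \{ c^\top x : Ax = b, \; x \ge 0 \}$ through its saddle-point form $\min_{x \ge 0} \max_y \; c^\top x - y^\top (Ax - b)$; the restarts, preconditioning, and other enhancements that make PDLP practical are omitted, and the step sizes are assumed to satisfy $\tau \sigma \|A\|_2^2 < 1$:

    import numpy as np

    def pdhg_lp(c, A, b, tau, sigma, iters=1000):
        # Vanilla PDHG: projected primal step on x, then a dual step on y
        # evaluated at the extrapolated point 2*x_next - x. The only costly
        # operations are the matrix-vector products with A and A^T.
        m, n = A.shape
        x, y = np.zeros(n), np.zeros(m)
        for _ in range(iters):
            x_next = np.maximum(x - tau * (c - A.T @ y), 0.0)   # primal step
            y = y + sigma * (b - A @ (2.0 * x_next - x))        # dual step
            x = x_next
        return x, y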

Hashing embeddings of optimal dimension, with applications to linear least squares

The aim of this paper is two-fold: firstly, to present subspace embedding properties for s-hashing sketching matrices, with $s \geq 1$, that are optimal in the projection dimension $m$ of the sketch, namely, $m = O(d)$, where $d$ is the dimension of the subspace. A diverse set of results is presented that addresses the case when the input …
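
For concreteness, a minimal sketch of how an s-hashing sketching matrix is drawn (dense construction for clarity; s = 1 recovers the CountSketch matrix):

    import numpy as np

    def s_hashing_sketch(m, n, s, seed=None):
        # Each of the n columns gets exactly s nonzeros equal to +/- 1/sqrt(s),
        # placed in s distinct rows chosen uniformly at random.
        rng = np.random.default_rng(seed)
        S = np.zeros((m, n))
        for j in range(n):
            rows = rng.choice(m, size=s, replace=False)   # s distinct rows
            signs = rng.choice([-1.0, 1.0], size=s)       # random signs
            S[rows, j] = signs / np.sqrt(s)
        return S

A sketched least-squares solve then replaces $\min_x \|Ax - b\|_2$ with the smaller problem $\min_x \|SAx - Sb\|_2$.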

MIMO Radar Optimization With Constant-Modulus and Any p-Norm Similarity Constraints

MIMO radar plays a key role in autonomous driving, and the waveform-similarity constraint is an important ingredient in radar waveform design. The joint constant-modulus and similarity constraint, however, is difficult to handle. Only the special case of $\infty$-norm similarity with constant-modulus constraints has been tackled, by semidefinite relaxation (SDR) and the successive quadratic refinement …
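
For concreteness, a common way to write this joint constraint set (notation here is illustrative, not taken from the paper):
\[
  |x_i| = 1 \quad (i = 1, \dots, N), \qquad \|x - x_{\mathrm{ref}}\|_p \le \delta,
\]
where $x$ is the designed waveform, $x_{\mathrm{ref}}$ a reference waveform, and $\delta$ the similarity tolerance.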

Optimal Convergence Rates for the Proximal Bundle Method

We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it …
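
For reference, each candidate step of the classic proximal bundle method solves the standard subproblem
\[
  x_{k+1} = \operatorname*{argmin}_x \; \max_{i \in B_k} \big\{ f(x_i) + g_i^\top (x - x_i) \big\} + \frac{1}{2\rho} \|x - \hat{x}_k\|^2,
\]
where $B_k$ is the current bundle of cutting planes, $g_i \in \partial f(x_i)$, $\hat{x}_k$ is the stability center, and $\rho$ plays the role of the (constant) stepsize.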

An Accelerated Minimal Gradient Method with Momentum for Convex Quadratic Optimization

In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on the well-known Nesterov accelerated gradient method. At each iteration of our scheme, the new point is computed by performing a line search along a search direction given by a linear …
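
For orientation, a sketch of the non-accelerated minimal gradient stepsize on $f(x) = \tfrac{1}{2} x^\top Q x - b^\top x$, the kind of exact line search along the negative gradient that the paper's momentum scheme builds upon (the combined accelerated method is not reproduced here):

    import numpy as np

    def minimal_gradient(Q, b, x0, iters=500):
        # Minimal gradient method: alpha_k is chosen to minimize the norm of
        # the next gradient, giving alpha_k = (g^T Q g) / (g^T Q^2 g).
        x = x0.copy()
        for _ in range(iters):
            g = Q @ x - b
            Qg = Q @ g
            alpha = (g @ Qg) / (Qg @ Qg)   # exact minimizer of ||g - a*Qg||
            x = x - alpha * g
        return x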