Another Face of DIRECT

It is shown that, contrary to a claim of [D. E. Finkel and C. T. Kelley, Additive scaling and the DIRECT algorithm, J. Glob. Optim. 36 (2006) 597-608], it is possible to divide the smallest hypercube containing the lowest function value by considering hyperrectangles whose points are located on the diagonal of the center …

The Exact Feasibility of Randomized Solutions of Robust Convex Programs

Robust optimization programs are hard to solve even when the constraints are convex. In previous contributions, it has been shown that approximately robust solutions (i.e. solutions feasible for all but a small fraction of the constraints) to convex programs can be obtained at low computational cost through constraint randomization. In this paper, we establish new …
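The randomization idea behind this line of work can be tried on a toy uncertain linear program: sample finitely many realizations of the uncertain constraint and solve the resulting ordinary (scenario) program. The data, sample size, and uncertainty model below are illustrative assumptions for the sketch, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Uncertain LP: minimize c^T x  s.t.  a(delta)^T x <= 1 for every realization delta.
# Scenario approach: draw N realizations of the uncertain row a(delta)
# and enforce only those sampled constraints.
c = np.array([-1.0, -1.0])                     # i.e. maximize x1 + x2
N = 200                                        # number of sampled scenarios
A = 1.0 + 0.1 * rng.standard_normal((N, 2))    # sampled constraint normals
b = np.ones(N)

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
x_scenario = res.x                             # feasible for all sampled constraints
```

The theory quantified in this literature concerns how likely `x_scenario` is to violate an *unseen* realization, as a function of N and the problem dimension.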

Hybrid extragradient proximal algorithm coupled with parametric approximation and penalty/barrier methods

In this paper we study the hybrid extragradient method coupled with approximation and penalty schemes for minimization problems. Under certain hypotheses, which include for example the case of Tikhonov regularization, we prove convergence of the method to the solution set of our minimization problem. When penalty or barrier schemes are used, we can show …

A Fast Algorithm For Image Deblurring with Total Variation Regularization

We propose and test a simple algorithmic framework for recovering images from blurry and noisy observations based on total variation (TV) regularization when a blurring point-spread function is given. Using a splitting technique, we construct an iterative procedure of alternately solving a pair of easy subproblems associated with an increasing sequence of penalty parameter values. …
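The alternating scheme described here, shrinkage on an auxiliary variable standing in for the image gradient followed by a quadratic subproblem diagonalized by the FFT, with an increasing penalty parameter, can be sketched in one dimension. The operators, parameter values, and continuation rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def tv_deblur_1d(f, kernel, mu=100.0, beta=1.0, iters=50, beta_growth=2.0):
    """Half-quadratic splitting sketch: introduce w ~ Du and alternate
    (1) shrinkage on w, (2) an FFT-diagonalized linear solve for u,
    while increasing the penalty beta (circular boundary conditions assumed)."""
    n = f.size
    K = np.fft.fft(kernel, n)                      # blur operator in Fourier domain
    D = np.fft.fft(np.array([1.0, -1.0]), n)       # circular forward difference
    Ff = np.fft.fft(f)
    u = f.copy()
    for _ in range(iters):
        du = np.real(np.fft.ifft(D * np.fft.fft(u)))
        # w-subproblem: soft shrinkage with threshold 1/beta
        w = np.sign(du) * np.maximum(np.abs(du) - 1.0 / beta, 0.0)
        # u-subproblem: (mu K*K + beta D*D) u = mu K* f + beta D* w
        rhs = mu * np.conj(K) * Ff + beta * np.conj(D) * np.fft.fft(w)
        u = np.real(np.fft.ifft(rhs / (mu * np.abs(K) ** 2 + beta * np.abs(D) ** 2)))
        beta *= beta_growth
    return u

# demo: deblur a piecewise-constant signal blurred by a moving average
n = 100
x_true = np.zeros(n); x_true[30:70] = 1.0
kernel = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(kernel, n) * np.fft.fft(x_true)))
restored = tv_deblur_1d(blurred, kernel)
```

Each subproblem is cheap: the shrinkage is componentwise, and the linear solve costs one FFT pair per iteration, which is the point of the splitting.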

A Fixed-Point Continuation Method for l_1-Regularized Minimization with Applications to Compressed Sensing

We consider solving minimization problems with $\ell_1$-regularization: $$\min \|x\|_1 + \mu f(x),$$ particularly for $f(x) = \frac{1}{2}\|Ax-b\|_M^2$ where $A \in \mathbb{R}^{m \times n}$ with $m < n$. Our goal is to construct efficient and robust algorithms for solving large-scale problems with dense data, and our approach is based on two powerful algorithmic ideas, operator-splitting and …
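In its simplest form, the operator-splitting idea mentioned here yields a forward-backward (shrinkage) fixed-point iteration. The sketch below shows that basic iteration for the case $M = I$, without the continuation strategy the paper develops; problem data and parameters are illustrative:

```python
import numpy as np

def fpc_l1(A, b, mu, iters=500):
    """Forward-backward fixed-point iteration for min ||x||_1 + (mu/2)||Ax-b||^2
    (basic shrinkage scheme only; continuation and M-norm weighting omitted)."""
    L = mu * np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    tau = 1.0 / L                             # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - tau * mu * A.T @ (A @ x - b)  # forward (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - tau, 0.0)  # backward (shrinkage) step
    return x

# demo: sparse recovery with m < n (illustrative random data)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
b = A @ x_true
x_hat = fpc_l1(A, b, mu=100.0)
```

The iteration touches the dense matrix only through products with $A$ and $A^\top$, which is what makes this family of methods attractive for large dense data.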

A gradient-based approach for computing Nash equilibria of large sequential games

We propose a new gradient-based scheme to approximate Nash equilibria of large sequential two-player, zero-sum games. The algorithm uses modern smoothing techniques for saddle-point problems, tailored specifically to the polytopes that arise in the Nash equilibrium problem.

Citation: Working Paper, Tepper School of Business, Carnegie Mellon University

Smooth Optimization Approach for Covariance Selection

In this paper we study a smooth optimization approach for solving a class of non-smooth {\it strongly} concave maximization problems. In particular, we apply Nesterov’s smooth optimization technique \cite{Nest83-1,Nest05-1} to their dual counterparts, which are smooth convex problems. It is shown that the resulting approach has $\mathcal{O}(1/\sqrt{\epsilon})$ iteration complexity for finding an $\epsilon$-optimal solution to …

In pursuit of a root

The basis pursuit technique is used to find a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise fits the least-squares problem only approximately, and a single parameter determines a curve that traces the trade-off between the least-squares fit and the one-norm of the solution. We show that the function that describes this …
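The trade-off curve in question can be traced numerically: solving a penalized (LASSO-type) problem for a decreasing sequence of regularization weights produces points whose one-norm grows while the least-squares residual shrinks. The data, solver, and weight grid below are assumptions for this toy illustration, not the paper's root-finding approach to the curve:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def lasso_ista(lam, iters=2000):
    # plain ISTA for min 0.5||Ax-b||^2 + lam||x||_1 (illustrative solver)
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        g = x - t * A.T @ (A @ x - b)
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)
    return x

# trace the trade-off: one-norm of the solution vs. least-squares residual
curve = []
for lam in [10.0, 3.0, 1.0, 0.3, 0.1]:
    x = lasso_ista(lam)
    curve.append((np.linalg.norm(x, 1), np.linalg.norm(A @ x - b)))
```

Plotting `curve` gives the monotone trade-off frontier between data fit and sparsity that the single parameter sweeps out.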

Explicit reformulations for robust optimization problems with general uncertainty sets

We consider a rather general class of mathematical programming problems with data uncertainty, where the uncertainty set is represented by a system of convex inequalities. We prove that the robust counterparts of this class of problems can be equivalently reformulated as finite and explicit optimization problems. Moreover, we develop simplified reformulations for problems with uncertainty …
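For the simplest uncertainty set, componentwise intervals, the explicit reformulation is classical: the worst case of a^T x <= b over |a - a_bar| <= d is a_bar^T x + d^T |x| <= b, which becomes a linear program after introducing t >= |x|. A small sketch of that reformulation (the data are made up; the paper treats far more general convex uncertainty sets):

```python
import numpy as np
from scipy.optimize import linprog

# Robust LP with interval uncertainty in one constraint row:
#   min c^T x  s.t.  a^T x <= b  for all a with |a - a_bar| <= d, x >= 0.
c = np.array([-1.0, -2.0])
a_bar = np.array([1.0, 1.0])
d = np.array([0.2, 0.1])
rhs = 4.0

n = 2
# decision vector z = (x, t) with t standing in for |x|
c_z = np.concatenate([c, np.zeros(n)])
A_ub = np.vstack([
    np.concatenate([a_bar, d]),           # a_bar^T x + d^T t <= b (worst case)
    np.hstack([np.eye(n), -np.eye(n)]),   #  x - t <= 0
    np.hstack([-np.eye(n), -np.eye(n)]),  # -x - t <= 0
])
b_ub = np.concatenate([[rhs], np.zeros(2 * n)])
res = linprog(c_z, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
x_rob = res.x[:n]                         # robustly feasible solution
```

The robust counterpart stays a finite, explicit LP; the paper's contribution is showing this kind of explicit reformulation survives for general convex uncertainty sets.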

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization

The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, …
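The nuclear-norm heuristic can be tried on a toy instance: proximal-gradient steps on a penalized least-squares formulation, where the proximal operator of the nuclear norm soft-thresholds the singular values. The sketch below is a generic illustration of that heuristic with random data and a simple step-size choice, not the paper's recovery analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, r = 10, 10, 2
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-2 target
m = 80                                    # number of random linear measurements
As = rng.standard_normal((m, n1, n2))     # measurement matrices A_k
b = np.einsum('kij,ij->k', As, X_true)    # b_k = <A_k, X_true>

def svt(Y, tau):
    # proximal operator of tau * nuclear norm: soft-threshold singular values
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# proximal gradient on  min_X  mu*||X||_* + 0.5*||A(X) - b||^2
L = np.linalg.norm(As.reshape(m, -1), 2) ** 2   # Lipschitz constant of the data term
step, mu = 1.0 / L, 1.0
X = np.zeros((n1, n2))
for _ in range(3000):
    resid = np.einsum('kij,ij->k', As, X) - b
    grad = np.einsum('k,kij->ij', resid, As)
    X = svt(X - step * grad, step * mu)
```

The singular-value soft-thresholding step plays the role that scalar shrinkage plays in $\ell_1$ minimization, which is exactly the vector-to-matrix analogy this line of work formalizes.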