A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems

We introduce a class of nonconvex/affine feasibility problems, called (NCF), that consists of finding a point in the intersection of affine constraints with a nonconvex closed set. This class captures some interesting fundamental and NP-hard problems arising in various application areas such as sparse recovery of signals and affine rank minimization that we briefly …
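
As a concrete illustration (in notation assumed here, not quoted from the paper), the sparse-recovery instance of such a feasibility problem can be written as

\[
\text{find } x \in \mathbb{R}^{n} \quad \text{such that} \quad Ax = b \ \text{ and } \ \|x\|_{0} \le s,
\]

where Ax = b encodes the affine (measurement) constraints and the nonconvex closed set is the set of s-sparse vectors; the affine rank minimization analogue replaces the sparsity constraint by rank(X) <= r for a matrix variable X.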

Nonparametric Estimation via Convex Programming

In the paper, we focus primarily on the problem of recovering a linear form g'*x of an unknown “signal” x known to belong to a given convex compact set X in R^n from N independent realizations of a random variable taking values in a finite set, the distribution p of the variable being affinely parameterized by …
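
A schematic version of this setup, with notation assumed here because the abstract is truncated, is

\[
\omega_{1},\dots,\omega_{N} \ \text{i.i.d.} \sim p(x), \qquad p(x) \ \text{affine in } x \in X \subset \mathbb{R}^{n}, \qquad \text{estimate the linear form } g^{T}x,
\]

i.e. the observations come from a discrete distribution that depends affinely on the unknown signal, and only the scalar quantity g^T x needs to be recovered.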

A First-Order Framework for Inverse Imaging Problems

We argue that some inverse problems arising in imaging can be treated efficiently using only single-precision (or other reduced-precision) arithmetic, by combining old ideas (first-order methods, polynomial preconditioners) with new ones (bilateral filtering, total variation). Using single precision, and having structures that parallelize in the ways needed to take advantage of low-cost/high-performance multi-core/SIMD …
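
As a purely illustrative sketch, and not the authors' algorithm, the NumPy snippet below runs a plain first-order (gradient) iteration for a Tikhonov-regularized least-squares model while keeping every array and scalar in single precision; the operator A, data b, weight lam, problem size, and iteration count are all hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)).astype(np.float32)   # forward operator (hypothetical)
b = rng.standard_normal(64).astype(np.float32)          # observed data (hypothetical)
lam = np.float32(0.1)                                    # Tikhonov weight (hypothetical)

x = np.zeros(128, dtype=np.float32)
L = np.linalg.norm(A, 2) ** 2 + lam                      # Lipschitz constant of the gradient
step = np.float32(1.0 / L)

for _ in range(200):
    grad = A.T @ (A @ x - b) + lam * x                   # gradient of 0.5*||Ax-b||^2 + 0.5*lam*||x||^2
    x -= step * grad                                     # all arithmetic stays in float32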

A Variational Formulation for Frame-Based Inverse Problems

A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation process as well …
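
In generic notation (assumed here, not quoted from the paper), such an objective over the frame coefficients x = (x_k) has the form

\[
\min_{x} \ \sum_{k} \phi_{k}(x_{k}) \;+\; h(x),
\]

where each \phi_k is a (possibly nonsmooth) convex penalty acting on the k-th frame coefficient and h is a smooth term built from the data, e.g. h(x) = \tfrac{1}{2}\|T F^{*} x - z\|^{2} for a frame synthesis operator F^{*}, a linear observation operator T, and data z.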

Inverse Stochastic Linear Programming

Inverse optimization perturbs the objective function to make an initial feasible solution optimal with respect to the perturbed objective while minimizing the cost of the perturbation. We extend inverse optimization to two-stage stochastic linear programs. Since the resulting model grows with the number of scenarios, we present two decomposition approaches for solving these problems. Citation: Unpublished: 07-1, University of …
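
For the deterministic case that this work extends, a standard way to state the inverse LP (notation assumed for illustration) is

\[
\min_{c} \ \|c - c^{0}\| \quad \text{s.t.} \quad x^{0} \in \arg\min \{\, c^{T}x : Ax = b,\ x \ge 0 \,\},
\]

i.e. perturb the given cost vector c^0 as little as possible so that the prescribed feasible point x^0 becomes optimal; by LP duality, the optimality condition can be replaced by the constraints c = A^{T}y + s, s \ge 0, s^{T}x^{0} = 0.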

Coordinate and Subspace Optimization Methods for Linear Least Squares with Non-Quadratic Regularization

This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Despite arising frequently in many applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [10], a new iterative method called Parallel Coordinate Descent (PCD) was devised. We provide herein a convergence …
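
In generic notation (assumed, not the paper's), the RLS problem with a separable non-quadratic regularizer reads

\[
\min_{x \in \mathbb{R}^{n}} \ \tfrac{1}{2}\,\|Ax - b\|_{2}^{2} \;+\; \sum_{i=1}^{n} \rho(x_{i}),
\]

where \rho is a non-quadratic scalar penalty (for example \rho(t) = \lambda |t|); coordinate and subspace methods update one coordinate (or a small block of coordinates) of x at a time by solving the resulting one-dimensional subproblems.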

Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumor as well as for any critical organs to be spared. Many optimization algorithms used for beamlet-based inverse planning are susceptible to large variations of neighboring intensities. Accurately …
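
For orientation only (this is a generic template, not necessarily the exact scheme studied), a simultaneous projection iteration onto constraint sets C_1, ..., C_m has the form

\[
x^{k+1} = x^{k} + \lambda_{k} \sum_{i=1}^{m} w_{i}\,\bigl(P_{C_{i}}(x^{k}) - x^{k}\bigr), \qquad w_{i} > 0,\ \ \sum_{i} w_{i} = 1,
\]

where P_{C_i} denotes the projection onto C_i and \lambda_k is a relaxation parameter; all projections are computed from the current iterate, which makes each step fully parallelizable.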

Mathematical optimization for the inverse problem of intensity modulated radiation therapy

In this tutorial we discuss modeling issues in intensity modulated radiation therapy, contrasting the continuous model with the fully discretized one and considering feasibility formulations versus optimization setups. We briefly review some mathematical optimization techniques for IMRT, including global optimization, multi-objective optimization, linear and mixed integer programming, and projection methods. Citation: in J.R. Palta and …
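
A common fully discretized feasibility formulation, stated here in generic notation as an illustration, is

\[
\text{find } x \ge 0 \quad \text{such that} \quad \ell_{j} \le (Dx)_{j} \le u_{j} \ \text{ for all voxels } j,
\]

where x collects the beamlet intensities, D is the dose-influence matrix, and the bounds \ell_j and u_j encode prescribed dose levels for tumor voxels and tolerance limits for critical-organ voxels.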

Quasi-Newton methods for large-scale distributed parameter estimation

We develop Quasi-Newton methods for distributed parameter estimation problems, where the forward problem is governed by a set of partial differential equations. A Tikhonov-style regularization approach yields an optimization problem with a special structure, where the gradients are calculated using the adjoint method. In many cases, standard Quasi-Newton methods (such as L-BFGS) are not …
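
In generic notation (assumed for illustration), the regularized estimation problem has the form

\[
\min_{m} \ \tfrac{1}{2}\,\|Q\,u(m) - d\|^{2} \;+\; \alpha\,R(m), \qquad \text{where } u(m) \text{ solves the forward PDE } C(u, m) = 0,
\]

and the gradient with respect to the parameter m is obtained by solving a single adjoint PDE per gradient evaluation rather than by differentiating the forward solver directly.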

The Inverse Optimal Value Problem

This paper considers the following inverse optimization problem: given a linear program, a desired optimal objective value, and a set of feasible cost coefficients, determine a cost-coefficient vector such that the corresponding optimal objective value of the linear program is closest to the given value. The above problem, referred to here as the inverse optimal value …
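
Written out, with notation assumed for this summary, the problem is

\[
\min_{c \in C} \ \bigl|\, z(c) - z^{*} \bigr|, \qquad z(c) := \min \{\, c^{T}x : Ax = b,\ x \ge 0 \,\},
\]

where C is the given set of admissible cost vectors and z^{*} is the desired optimal objective value.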