Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation …
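As background, the following is a minimal sketch of the FISTA-style accelerated proximal-gradient iteration for an L1-regularized least-squares model, with a generic matrix A standing in for the cone-beam projector; it illustrates the step pattern only and is not the authors' accelerated variants or their hardware implementation.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def fista(A, b, lam, L, n_iter=100):
        # Minimize 0.5*||A x - b||^2 + lam*||x||_1 with Nesterov momentum.
        # A is a generic matrix stand-in for the projector; L is a Lipschitz
        # constant of the smooth gradient, e.g. ||A||_2**2.
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)                 # gradient of smooth term at y
            x_new = soft_threshold(y - grad / L, lam / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
            x, t = x_new, t_new
        return x

    # Example: sparse recovery from a random underdetermined system.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100); x_true[:3] = 1.0
    x_hat = fista(A, A @ x_true, lam=0.1, L=np.linalg.norm(A, 2) ** 2)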

A Second-Order Cone Based Approach for Solving the Trust Region Subproblem and Its Variants

We study the trust region subproblem (TRS) of minimizing a nonconvex quadratic function over the unit ball with additional conic constraints. Despite having a nonconvex objective, it is known that the TRS and a number of its variants are polynomial-time solvable. In this paper, we follow a second-order cone based approach to derive an exact …
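For reference, the TRS with additional conic constraints can be stated in the standard form

\[
\min_{x \in \mathbb{R}^n} \; x^{\top} A x + 2\, b^{\top} x
\quad \text{s.t.} \quad \|x\|_2 \le 1, \;\; x \in C,
\]

where A is symmetric and possibly indefinite (hence the nonconvex objective) and C collects the additional conic constraints. This is a generic statement for orientation; the paper's exact assumptions on C are not reproduced here.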

On Sampling Rates in Simulation-Based Recursions

We consider the context of “simulation-based recursions,” that is, recursions that involve quantities needing to be estimated using a stochastic simulation. Examples include stochastic adaptations of fixed-point and gradient descent recursions obtained by replacing function and derivative values appearing within the recursion by their Monte Carlo counterparts. The primary motivating settings are Simulation Optimization and …
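A minimal sketch of one such recursion appears below: gradient descent with the exact gradient replaced by a Monte Carlo average whose sample size grows with the iteration count. The sampler interface and the polynomial sample-size schedule are illustrative assumptions; choosing such schedules well is precisely the question of sampling rates.

    import numpy as np

    def sampled_gradient_recursion(grad_sample, x0, n_iter=50,
                                   step=lambda k: 1.0 / (k + 1),
                                   sample_size=lambda k: (k + 1) ** 2):
        # x_{k+1} = x_k - step(k) * (mean of sample_size(k) gradient draws).
        # grad_sample(x, m) must return an (m, dim) array of i.i.d. estimates;
        # the schedules above are illustrative, not prescribed rates.
        x = np.asarray(x0, dtype=float)
        for k in range(n_iter):
            g = grad_sample(x, sample_size(k)).mean(axis=0)  # Monte Carlo estimate
            x = x - step(k) * g
        return x

    # Example: noisy gradients of f(x) = 0.5*||x||^2, whose minimizer is 0.
    rng = np.random.default_rng(0)
    noisy = lambda x, m: x + rng.normal(scale=1.0, size=(m, x.size))
    print(sampled_gradient_recursion(noisy, np.ones(3)))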

Three Enhancements for Optimization-Based Bound Tightening

Optimization-based bound tightening (OBBT) is one of the most effective procedures to reduce variable domains of nonconvex mixed-integer nonlinear programs (MINLPs). At the same time, it is one of the most expensive bound tightening procedures, since it solves auxiliary linear programs (LPs), up to two for each variable. The main goal of this paper …
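The basic OBBT loop, stripped of any enhancements, can be sketched as below on a linear relaxation using scipy; it makes the up-to-two-LPs-per-variable cost explicit. The relaxation data in the example are a toy assumption.

    import numpy as np
    from scipy.optimize import linprog

    def obbt(A_ub, b_ub, bounds):
        # For each variable x_i, solve min x_i and max x_i over the LP
        # relaxation {A_ub x <= b_ub, lb <= x <= ub}: two LPs per variable.
        n = len(bounds)
        tightened = []
        for i in range(n):
            c = np.zeros(n)
            c[i] = 1.0
            lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
            hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
            tightened.append((lo, hi))
        return tightened

    # Toy relaxation: x + y <= 1, 0 <= x, y <= 10; both upper bounds drop to 1.
    print(obbt(np.array([[1.0, 1.0]]), np.array([1.0]), [(0, 10), (0, 10)]))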

Coordinate Friendly Structures, Algorithms and Applications

This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex …
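A minimal sketch of a coordinate-friendly update on a least-squares instance follows; the incrementally maintained residual is what makes each single-coordinate update cheap. This is an illustrative special case, not the paper's general operator setting.

    import numpy as np

    def coordinate_descent(A, b, n_epochs=50):
        # Cyclic exact coordinate minimization for f(x) = 0.5*||A x - b||^2.
        m, n = A.shape
        x = np.zeros(n)
        r = b - A @ x                    # residual, maintained incrementally
        col_sq = (A ** 2).sum(axis=0)    # per-coordinate curvature
        for _ in range(n_epochs):
            for i in range(n):
                delta = A[:, i] @ r / col_sq[i]  # exact 1-D minimization
                x[i] += delta
                r -= delta * A[:, i]     # O(m) update instead of recomputing A x
        return x

    # Example: consistent overdetermined system, recovered exactly.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 3))
    print(coordinate_descent(A, A @ np.array([1.0, -2.0, 0.5])))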

A robust Lagrangian-DNN method for a class of quadratic optimization problems

The Lagrangian-doubly nonnegative (DNN) relaxation has recently been shown to provide effective lower bounds for a large class of nonconvex quadratic optimization problems (QOPs) using the bisection method combined with first-order methods, by Kim, Kojima and Toh in 2016. While the bisection method has demonstrated computational efficiency, determining the validity of a computed lower …
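Schematically, the bisection layer works as in the sketch below, where is_valid_lb is a hypothetical placeholder for the first-order DNN computation that certifies whether a trial value is a valid lower bound for the QOP; the robustness issue the paper addresses lives inside that oracle.

    def bisect_lower_bound(is_valid_lb, lo, hi, tol=1e-6):
        # Largest certified lower bound in [lo, hi]. `is_valid_lb(t)` should
        # return True when t is certified to lie below the optimal value;
        # here it is a placeholder for the first-order DNN computation.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if is_valid_lb(mid):
                lo = mid  # mid is still a valid lower bound; move up
            else:
                hi = mid  # mid is not certified; move down
        return lo

    # Toy oracle: the optimal value is 3.7, so any t <= 3.7 is valid.
    print(bisect_lower_bound(lambda t: t <= 3.7, 0.0, 10.0))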

Approximations and Generalized Newton Methods

We study local convergence of generalized Newton methods for both equations and inclusions by using known and new approximations and regularity properties at the solution. Including Kantorovich-type settings, our goal is to obtain statements about all (not only some) Newton sequences with appropriate initial points. Our basic tools are results of Klatte-Kummer (2002) and Kummer (1988, 1995), …
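A minimal sketch of a generalized Newton iteration for a nonsmooth equation F(x) = 0 is given below, with the classical derivative replaced by an element of a generalized (Clarke-type) Jacobian; the toy equation and the particular Jacobian selection are illustrative assumptions.

    import numpy as np

    def generalized_newton(F, J_elem, x0, tol=1e-10, max_iter=50):
        # Iterate x+ = x - V^{-1} F(x), where V = J_elem(x) is any element
        # of a generalized Jacobian of F at x (a choice, not a derivative).
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        for _ in range(max_iter):
            fx = np.atleast_1d(F(x))
            if np.linalg.norm(fx) < tol:
                break
            x = x - np.linalg.solve(np.atleast_2d(J_elem(x)), fx)
        return x

    # Toy nonsmooth equation: F(x) = |x| + x - 2 = 0 with solution x = 1.
    F = lambda x: np.abs(x) + x - 2.0
    J = lambda x: np.where(x >= 0.0, 2.0, 0.0)  # one Clarke element per region
    print(generalized_newton(F, J, x0=[5.0]))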

Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse …
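For reference, the two standard problem classes in question are

\[
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_0
\qquad \text{and} \qquad
\min_{x \in \mathbb{R}^n} \; f(x) \;\; \text{s.t.} \;\; \|x\|_0 \le k,
\]

where \(\|x\|_0\) counts the nonzero components of \(x\); the paper's precise assumptions on \(f\) and on the error bound are not restated here.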

Iteration-complexity of a Rockafellar-type proximal method of multipliers for convex programming based on second-order approximations

This paper studies the iteration-complexity of a new primal-dual algorithm based on Rockafellar’s proximal method of multipliers (PMM) for solving smooth convex programming problems with inequality constraints. In each step, either a step of Rockafellar’s PMM for a second-order model of the problem is computed or a relaxed extragradient step is performed. The resulting algorithm …
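For orientation, the classical PMM (the method the new algorithm builds on, not the paper's second-order variant) iterates, for a convex program with constraints \(g(x) \le 0\),

\[
x^{k+1} \approx \arg\min_{x} \Big\{ L_c(x, y^k) + \tfrac{1}{2c}\,\|x - x^k\|^2 \Big\},
\qquad
y^{k+1} = \big( y^k + c\, g(x^{k+1}) \big)_+,
\]

where \(L_c(x,y) = f(x) + \tfrac{1}{2c}\big( \|(y + c\,g(x))_+\|^2 - \|y\|^2 \big)\) is the augmented Lagrangian and \((\cdot)_+\) denotes the componentwise positive part.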

A Dual Gradient-Projection Method for Large-Scale Strictly Convex Quadratic Problems

The details of a solver for minimizing a strictly convex quadratic objective function subject to general linear constraints are presented. The method uses a gradient projection algorithm, enhanced with subspace acceleration, to solve the bound-constrained dual optimization problem. Such gradient projection methods are well known, but are typically employed to solve the primal problem when only …
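A minimal sketch of the underlying idea, assuming the strictly convex QP min 0.5 x'Qx + c'x subject to Ax >= b: the Lagrangian dual in the multipliers lam is bound-constrained (lam >= 0), so projected gradient ascent applies. The subspace-acceleration enhancement described in the paper is omitted here.

    import numpy as np

    def dual_gradient_projection(Q, c, A, b, n_iter=500):
        # Maximize the dual g(lam) over lam >= 0, where for fixed lam the
        # primal minimizer is x(lam) = Q^{-1}(A' lam - c) and
        # grad g(lam) = b - A x(lam). Fixed step; no subspace acceleration.
        step = 1.0 / np.linalg.norm(A @ np.linalg.solve(Q, A.T), 2)
        lam = np.zeros(A.shape[0])
        for _ in range(n_iter):
            x = np.linalg.solve(Q, A.T @ lam - c)            # primal recovery
            lam = np.maximum(0.0, lam + step * (b - A @ x))  # projected ascent
        return x, lam

    # Example: min 0.5*(x1^2 + x2^2) s.t. x1 + x2 >= 2, solved by x = (1, 1).
    Q, c = np.eye(2), np.zeros(2)
    A, b = np.array([[1.0, 1.0]]), np.array([2.0])
    print(dual_gradient_projection(Q, c, A, b)[0])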