Steplength thresholds for invariance preserving of discretization methods of dynamical systems on a polyhedron

Steplength thresholds for invariance preservation under three types of discretization methods on a polyhedron are considered. For Taylor-approximation-type discretization methods, we prove that a valid steplength threshold can be obtained by finding the first positive zeros of a finite number of polynomial functions. Further, a simple and efficient algorithm is proposed to numerically … Read more
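
As a rough illustration of the zero-finding step described above, here is a minimal Python sketch; the polynomials are hypothetical stand-ins for those the method derives from the polyhedron and the Taylor approximation, not the paper's actual construction.

```python
import numpy as np

def first_positive_root(coeffs, tol=1e-12):
    """Smallest positive real root of a polynomial given by
    highest-degree-first coefficients; +inf if none exists."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    positive = real[real > tol]
    return positive.min() if positive.size else np.inf

# Hypothetical polynomials standing in for those derived from the
# polyhedron's facets; coefficients are highest degree first.
polys = [
    [1.0, -3.0, 2.0],        # t^2 - 3t + 2, first positive root 1.0
    [2.0, -1.0, -1.0, 0.5],  # (t - 0.5)(2t^2 - 1), first positive root 0.5
]

# A valid steplength threshold: the smallest first positive zero.
tau = min(first_positive_root(p) for p in polys)
print(f"steplength threshold tau = {tau:.6f}")
```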

Invariance conditions for nonlinear dynamical systems

Recently, Horváth, Song, and Terlaky [\emph{A novel unified approach to invariance condition of dynamical system}, submitted to Applied Mathematics and Computation] proposed a novel unified approach to study invariance conditions, i.e., sufficient and necessary conditions under which some convex sets are invariant sets for linear dynamical systems. In this paper, by utilizing an analogous methodology, we … Read more

Fréchet inequalities via convex optimization

Quantifying the risk carried by an aggregate position $S_d := \sum_{i=1}^d X_i$ comprising many risk factors $X_i$ is fundamental to both insurance and financial risk management. Fréchet inequalities quantify the worst-case risk carried by the aggregate position given distributional information concerning its composing factors but without assuming independence. This marginal factor modeling of the aggregate position in … Read more
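
For a concrete feel for the convex-optimization angle, here is a minimal sketch, assuming two discrete marginals chosen for illustration (the paper's setting is more general): the worst-case expected stop-loss of $S_2 = X_1 + X_2$ over all joint laws with the given marginals, posed as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete marginals for two risk factors X1, X2.
x1, p1 = np.array([0.0, 1.0, 2.0]), np.array([0.5, 0.3, 0.2])
x2, p2 = np.array([0.0, 2.0]), np.array([0.6, 0.4])
k = 2.0  # stop-loss retention level

n, m = len(x1), len(x2)
# Loss of the aggregate position S = X1 + X2 under each joint outcome.
loss = np.maximum(x1[:, None] + x2[None, :] - k, 0.0).ravel()

# Equality constraints: the joint pmf pi must match both marginals.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums equal p1
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # column sums equal p2
b_eq = np.concatenate([p1, p2])

# Worst case: maximize E[(S-k)_+], i.e. minimize its negative.
res = linprog(-loss, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(f"worst-case stop-loss E[(S-k)+] = {-res.fun:.4f}")
```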

A Simplified Form of Block-Iterative Operator Splitting, and an Asynchronous Algorithm Resembling the Multi-Block ADMM

This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computing model inspired by modern HPC environments, which consist of interconnected nodes each having … Read more
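
For orientation only, here is a plain synchronous multi-block consensus ADMM on a toy problem; this is a textbook baseline, not the author's simplified block-iterative splitting method or its asynchronous implementation.

```python
import numpy as np

# Synchronous multi-block consensus ADMM on a toy problem:
#   minimize sum_i 0.5 * (z - a_i)^2   over a shared variable z.
# Each "node" i holds a local copy x_i and a scaled dual u_i; the
# solution is the mean of the a_i.
rng = np.random.default_rng(0)
a = rng.normal(size=5)   # one quadratic term per node
rho = 1.0                # ADMM penalty parameter
x = np.zeros_like(a)
u = np.zeros_like(a)
z = 0.0

for it in range(100):
    # Local proximal steps: independent across nodes, parallelizable.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Coordination step: average the local copies plus duals.
    z = np.mean(x + u)
    # Dual updates penalize disagreement with the consensus.
    u += x - z

print(f"ADMM consensus z = {z:.6f}, true mean = {a.mean():.6f}")
```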

1-Bit Compressive Sensing: Reformulation and RRSP-Based Sign Recovery Theory

Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals is not available in 1-bit CS, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. In this paper, we first … Read more
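
As one standard convex decoder for this setting, shown as a baseline rather than the paper's RRSP-based formulation, here is a sketch of the linear program of Plan and Vershynin: given only $y = \mathrm{sign}(Ax^*)$, recover the sign pattern of a sparse $x^*$.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic 1-bit CS instance: only the signs of the measurements are kept.
rng = np.random.default_rng(1)
m, n, s = 60, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)

# minimize ||x||_1  s.t.  y_i*(a_i @ x) >= 0  and  sum_i y_i*(a_i @ x) = m,
# with the split x = xp - xn, xp, xn >= 0, to make the objective linear.
YA = y[:, None] * A
c = np.ones(2 * n)
A_ub = np.hstack([-YA, YA])                 # -(YA)(xp - xn) <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([YA.sum(0), -YA.sum(0)]).reshape(1, -1)
b_eq = np.array([float(m)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

# Sign/support are the recoverable quantities; amplitude is not.
print("sign match on support:",
      np.array_equal(np.sign(x_hat[x_true != 0]), np.sign(x_true[x_true != 0])))
```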

A bound on the Carathéodory number

The Carathéodory number $k(K)$ of a pointed closed convex cone $K$ is the minimum among all $k$ for which every element of $K$ can be written as a nonnegative linear combination of at most $k$ elements belonging to extreme rays. Carathéodory's Theorem gives the bound $k(K) \le \dim(K)$. In this work we observe … Read more
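
A small worked example in which the bound is attained, using the nonnegative orthant:

```latex
% Tightness example: the nonnegative orthant.
% Its extreme rays are spanned by the coordinate vectors e_1, ..., e_n.
\[
  K = \mathbb{R}^n_{+}, \qquad
  x = \sum_{i=1}^{n} x_i e_i \quad (x_i \ge 0).
\]
% A vector with all coordinates strictly positive needs all n rays, so
\[
  k(\mathbb{R}^n_{+}) = n = \dim(\mathbb{R}^n_{+}),
\]
% and Caratheodory's bound k(K) <= dim(K) holds with equality.
```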

On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also extend the result to a noisy … Read more
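
The worst-case quadratic can be checked numerically. A minimal sketch, assuming the standard two-dimensional construction with eigenvalues $\mu$ and $L$, on which each exact-line-search step contracts the objective by exactly $((L-\mu)/(L+\mu))^2$:

```python
import numpy as np

# Steepest descent with exact line search on f(x) = 0.5*(mu*x1^2 + L*x2^2),
# started from a worst-case point: the objective contracts by exactly
# ((L-mu)/(L+mu))^2 per iteration, matching the theoretical rate.
mu, L = 1.0, 10.0
Q = np.diag([mu, L])
x = np.array([1.0 / mu, 1.0 / L])   # worst-case starting point
f = lambda v: 0.5 * v @ Q @ v
rate = ((L - mu) / (L + mu)) ** 2

for k in range(5):
    g = Q @ x
    t = (g @ g) / (g @ Q @ g)       # exact line search step for quadratics
    f_old, x = f(x), x - t * g
    print(f"iter {k}: f ratio = {f(x) / f_old:.6f} (theory {rate:.6f})")
```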

Exact Worst-case Performance of First-order Methods for Composite Convex Optimization

We provide a framework for computing the exact worst-case performance of any algorithm belonging to a broad class of oracle-based first-order methods for composite convex optimization, including those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps. We simultaneously obtain tight worst-case guarantees and explicit instances of optimization problems on which the algorithm reaches this … Read more
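
The open-source PEPit package implements worst-case computations of this performance-estimation type. The sketch below follows its documented gradient-descent example; the API usage is assumed from PEPit's documentation, and plain gradient descent stands in for the composite methods treated in the paper.

```python
# A sketch using the PEPit package (pip install pepit); API usage
# follows its documented examples and is an assumption here.
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

L, mu, gamma, n = 1.0, 0.1, 1.0, 3   # smoothness, strong convexity, step, iters
problem = PEP()
f = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
xs = f.stationary_point()            # the (unknown) minimizer
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

x = x0
for _ in range(n):
    x = x - gamma * f.gradient(x)    # the algorithm being analyzed

problem.set_performance_metric(f(x) - f(xs))
tau = problem.solve()                # exact worst-case value via an SDP
print(f"worst-case f(x_n) - f* over this class: {tau:.6f}")
```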

Convergence Analysis of ISTA and FISTA for “Strongly + Semi” Convex Programming

The iterative shrinkage/thresholding algorithm (ISTA) and its faster variant FISTA have been widely used in the literature. In this paper, we consider general versions of ISTA and FISTA in the more general “strongly + semi” convex setting, i.e., minimizing the sum of a strongly convex function and a semiconvex function, and conduct convergence analysis … Read more
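
For reference, here is ISTA in its most familiar convex instance, the lasso; this is a baseline illustration of the shrinkage/thresholding step, not the “strongly + semi” convex setting analyzed in the paper.

```python
import numpy as np

# ISTA for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[0, 5]] = [1.0, -1.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the 'shrinkage/thresholding' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - b)                       # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)

print("nonzeros recovered:", np.nonzero(np.round(x, 3))[0])
```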

Efficient Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated that converges to within a user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region … Read more
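
A simplified caricature of the feasibility-maintaining idea (not the paper's algorithm): after a subgradient step leaves the feasible region, back off along the segment toward the known strictly feasible point until feasibility is restored, using only a membership test rather than an orthogonal projection.

```python
import numpy as np

# Toy problem: minimize f(x) = c @ x over the unit ball, with the
# strictly feasible point e = 0 known in advance.
rng = np.random.default_rng(3)
c = rng.normal(size=5)
feasible = lambda v: np.linalg.norm(v) <= 1.0
e = np.zeros(5)

def pull_back(y, e, feasible, iters=50):
    """Bisection for (nearly) the largest lam with e + lam*(y - e) feasible."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(e + mid * (y - e)) else (lo, mid)
    return e + lo * (y - e)

x, best = e.copy(), np.inf
for k in range(1, 201):
    y = x - (1.0 / np.sqrt(k)) * c      # subgradient step, may leave the ball
    x = y if feasible(y) else pull_back(y, e, feasible)
    best = min(best, c @ x)

print(f"best value {best:.4f}  vs optimum {-np.linalg.norm(c):.4f}")
```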