Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming

In this paper we propose a randomized block coordinate non-monotone gradient (RBCNMG) method for minimizing the sum of a smooth (possibly nonconvex) function and a block-separable (possibly nonconvex nonsmooth) function. At each iteration, this method randomly picks a block according to any prescribed probability distribution and typically solves several associated proximal subproblems that usually have … Read more
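To make the setup concrete, below is a minimal sketch of a single randomized block proximal gradient step of the kind the abstract describes, using an l1 term as a stand-in for the block-separable nonsmooth part. All names, the step size, and the two-block split are illustrative; the paper's non-monotone acceptance test and its repeated subproblem solves are not modeled here.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, a common block-separable nonsmooth term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rbc_prox_step(x, grad_f, blocks, probs, step, lam, rng):
    # Randomly pick one block according to the prescribed distribution,
    # then take a proximal gradient step on that block only.
    i = rng.choice(len(blocks), p=probs)
    idx = blocks[i]
    g = grad_f(x)[idx]                    # partial gradient for block i
    x = x.copy()
    x[idx] = soft_threshold(x[idx] - step * g, step * lam)
    return x

# Tiny smoke test: f(x) = 0.5*||x - c||^2 plus 0.1*||x||_1, two blocks.
rng = np.random.default_rng(0)
c = np.array([1.0, -2.0, 0.05, 3.0])
x = np.zeros(4)
blocks = [np.array([0, 1]), np.array([2, 3])]
for _ in range(200):
    x = rbc_prox_step(x, lambda z: z - c, blocks, [0.5, 0.5], 1.0, 0.1, rng)
```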

A Second-Order Method for Strongly Convex L1-Regularization Problems

In this paper a robust second-order method is developed for the solution of strongly convex l1-regularized problems. The main aim is to keep the proposed method as inexpensive as possible while still allowing even difficult problems to be solved efficiently. The proposed method is a primal-dual Newton Conjugate Gradients (pdNCG) method. Convergence properties of pdNCG are studied … Read more
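One standard way to bring second-order, Hessian-free machinery to l1 terms, and roughly the flavor behind such methods, is to smooth |x_i| by the pseudo-Huber function and run Newton-CG on the smoothed, strongly convex objective. The sketch below does exactly that with SciPy; it is an illustration under our own assumptions, not the authors' primal-dual implementation, and all problem data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Strongly convex quadratic plus an l1 term, with |x_i| replaced by the
# pseudo-Huber smoothing mu*(sqrt(1 + (x_i/mu)^2) - 1) so Newton-CG applies.
rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)        # positive definite => strongly convex
b = rng.standard_normal(n)
lam, mu = 0.1, 1e-3

def obj(x):
    return 0.5 * x @ A @ x - b @ x \
        + lam * np.sum(mu * (np.sqrt(1 + (x / mu) ** 2) - 1))

def grad(x):
    return A @ x - b + lam * (x / mu) / np.sqrt(1 + (x / mu) ** 2)

def hessp(x, v):
    # Hessian-vector product; the smoothed l1 part has a diagonal Hessian.
    return A @ v + lam * v / (mu * (1 + (x / mu) ** 2) ** 1.5)

sol = minimize(obj, np.zeros(n), jac=grad, hessp=hessp, method="Newton-CG")
```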

A Deterministic Rescaled Perceptron Algorithm

The perceptron algorithm is a simple iterative procedure for finding a point in a convex cone $F$. At each iteration, the algorithm only involves a query of a separation oracle for $F$ and a simple update on a trial solution. The perceptron algorithm is guaranteed to find a feasible point in $F$ after $\mathcal{O}(1/\tau_F^2)$ iterations, … Read more
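For the polyhedral special case $F = \{y : Ay > 0\}$, the separation oracle is simply a violated row of $A$, and the classical perceptron reads as below ($\tau_F$ is the width of the cone). This is the textbook, non-rescaled variant, so it only illustrates the baseline the paper improves on.

```python
import numpy as np

def perceptron_cone(A, max_iter=100000):
    # Seek y with A @ y > 0, i.e. a point in the open cone {y : Ay > 0}.
    # The separation oracle is explicit: any violated (normalized) row of A.
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)
    y = np.zeros(A.shape[1])
    for _ in range(max_iter):
        viol = np.nonzero(rows @ y <= 0)[0]
        if viol.size == 0:
            return y                      # strictly feasible point found
        y += rows[viol[0]]                # simple additive update
    return None                           # give up (cone too narrow or empty)

A = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])
y = perceptron_cone(A)                    # satisfies A @ y > 0
```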

On smoothness properties of optimal value functions at the boundary of their domain under complete convexity

This article studies continuity and directional differentiability properties of optimal value functions, in particular at boundary points of their domain. We extend and complement standard continuity results from W.W. Hogan, Point-to-set maps in mathematical programming, SIAM Review, Vol. 15 (1973), 591-603, for abstract feasible set mappings under complete convexity as well as standard differentiability results … Read more
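As a concrete illustration of why boundary points of the domain are delicate, here is a small completely convex example of our own making, not taken from the article: the optimal value function is continuous at the boundary point of its domain, yet its one-sided directional derivative there is infinite.

```latex
% Our illustration: f(x,t) = -x_1 and M(t) = {x : x_1^2 <= x_2 <= t},
% with f and the graph of M jointly convex in (x,t) (complete convexity).
\[
  v(t) \;=\; \inf\bigl\{\, -x_1 \;:\; x_1^2 \le x_2 \le t \,\bigr\}
  \;=\;
  \begin{cases}
    -\sqrt{t}, & t \ge 0,\\[2pt]
    +\infty,   & t < 0,
  \end{cases}
\]
% so dom(v) = [0, +infty) and v is continuous at the boundary point t = 0,
% yet v'(0; 1) = lim_{s -> 0+} (v(s) - v(0))/s = -infty.
```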

Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems

The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm … Read more
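For orientation, here is a standard scaled-form ADMM for an l1-regularized least-squares problem, where the penalty parameter rho is exactly the kind of algorithm parameter whose optimal choice the paper characterizes. The splitting and variable names are the usual textbook ones, not the paper's; rho is hand-supplied here.

```python
import numpy as np

def admm_l1_ls(A, b, lam, rho, iters=300):
    # Scaled-form ADMM for  min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
    # rho is the penalty parameter whose tuning drives the convergence time.
    n = A.shape[1]
    x, z, u = (np.zeros(n) for _ in range(3))
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                                   # scaled dual update
    return z
```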

One condition for all: solution uniqueness and robustness of l1-synthesis and l1-analysis minimizations

The l1-synthesis and l1-analysis models recover structured signals from their undersampled measurements. The solution of the former model is often a sparse sum of dictionary atoms, and that of the latter model often makes sparse correlations with dictionary atoms. This paper addresses the question: when can we trust these models to recover specific signals? We … Read more
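For reference, the two models the abstract contrasts are usually written as follows, with $D$ the dictionary, $\Phi$ the measurement matrix, and $y$ the undersampled measurements (notation ours):

```latex
\begin{align*}
  \text{(l1-synthesis)} \quad & \min_{\alpha}\ \|\alpha\|_1
      \quad \text{s.t.} \quad \Phi D \alpha = y,
      \qquad \text{recovered signal } x = D\alpha,\\
  \text{(l1-analysis)}  \quad & \min_{x}\ \|D^{*} x\|_1
      \quad \text{s.t.} \quad \Phi x = y.
\end{align*}
```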

KKT Reformulation and Necessary Conditions for Optimality in Nonsmooth Bilevel Optimization

For a long time, the bilevel programming problem has essentially been considered a special case of mathematical programs with equilibrium constraints (MPECs), in particular when the so-called KKT reformulation is in question. Recently though, this widespread belief was shown to be false in general. In this paper, other aspects of the difference between both … Read more
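For readers unfamiliar with the reformulation in question, a standard smooth instance looks as follows (notation ours). The point alluded to in the abstract is that, even with a convex lower-level problem, local solutions of (KKT) need not correspond to local solutions of (BP).

```latex
\begin{align*}
  \text{(BP)}\quad  & \min_{x,\,y}\ F(x,y)
      \quad\text{s.t.}\quad y \in \operatorname*{arg\,min}_{y'}
      \bigl\{\, f(x,y') \;:\; g(x,y') \le 0 \,\bigr\},\\
  \text{(KKT)}\quad & \min_{x,\,y,\,\lambda}\ F(x,y)
      \quad\text{s.t.}\quad \nabla_y f(x,y) + \nabla_y g(x,y)^{\top}\lambda = 0,\\
  & \qquad\qquad\quad \lambda \ge 0,\quad g(x,y) \le 0,\quad
      \lambda^{\top} g(x,y) = 0.
\end{align*}
```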

A branch and bound approach for convex semi-infinite programming

In this paper we propose an efficient approach for globally solving a class of convex semi-infinite programming (SIP) problems. Under the assumption that the objective function and the constraints are convex with respect to the variables to be optimized, and appropriately differentiable, we propose a branch and bound exchange type method for SIP. To compute a feasible point for a SIP … Read more
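A bare-bones exchange loop of the kind referenced in the abstract is sketched below: solve a relaxation over a finite index set, locate a most violated index, add it, and repeat. This is a generic skeleton under the stated convexity assumptions, not the authors' branch-and-bound algorithm; in particular, the inner maximization here is only a bounded 1-D search, not a certified global solve.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def sip_exchange(f, g, t_lo, t_hi, x0, tol=1e-6, max_rounds=50):
    # min f(x)  s.t.  g(x, t) <= 0 for all t in [t_lo, t_hi]:
    # solve a finite relaxation, add a most violated index, repeat.
    T = [t_lo, t_hi]                       # initial finite index set
    x = x0
    for _ in range(max_rounds):
        cons = [{'type': 'ineq', 'fun': (lambda x, t=t: -g(x, t))} for t in T]
        x = minimize(f, x, constraints=cons).x
        worst = minimize_scalar(lambda t: -g(x, t), bounds=(t_lo, t_hi),
                                method='bounded')
        if -worst.fun <= tol:              # max violation within tolerance
            return x
        T.append(worst.x)                  # exchange: add the violated index
    return x

# Example: minimize x1^2 + x2^2  s.t.  x1*t + x2 >= t^2 for all t in [0, 1].
x = sip_exchange(lambda x: x[0]**2 + x[1]**2,
                 lambda x, t: t**2 - x[0]*t - x[1],
                 0.0, 1.0, np.array([0.0, 0.0]))
```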

A variable fixing version of the two-block nonlinear constrained Gauss-Seidel algorithm for ℓ1-regularized least-squares

The problem of finding sparse solutions to underdetermined systems of linear equations arises in many fields, e.g. signal/image processing and statistics. A standard tool for sparse recovery is the ℓ1-regularized least-squares approach, which has recently attracted the attention of many researchers. In this paper, we describe a new version … Read more
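As a simplified stand-in for the two-block idea, the sketch below splits the variables into two fixed blocks and updates them in alternation with proximal gradient (soft-thresholding) steps on the ℓ1-regularized least-squares objective. The block split, step sizes, and iteration counts are illustrative; the paper's variable fixing strategy and constrained Gauss-Seidel details are not reproduced.

```python
import numpy as np

def two_block_gs_l1_ls(A, b, lam, iters=300):
    # min 0.5*||Ax - b||^2 + lam*||x||_1, sweeping two fixed coordinate
    # blocks in alternation with a proximal gradient step per block.
    n = A.shape[1]
    blocks = [np.arange(n // 2), np.arange(n // 2, n)]
    x = np.zeros(n)
    for _ in range(iters):
        for idx in blocks:
            Ai = A[:, idx]
            L = np.linalg.norm(Ai, 2) ** 2          # block Lipschitz constant
            g = Ai.T @ (A @ x - b)                  # partial gradient
            w = x[idx] - g / L
            x[idx] = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return x

# Underdetermined example: recover a 3-sparse vector from 30 measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:3] = [2.0, -1.5, 1.0]
x = two_block_gs_l1_ls(A, A @ x_true, 0.05)
```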