On the pointwise iteration-complexity of a dynamic regularized ADMM with over-relaxation stepsize

In this paper, we extend the improved pointwise iteration-complexity result of a dynamic regularized alternating direction method of multipliers (ADMM) to a new stepsize domain. In this complexity analysis, the stepsize parameter can even be chosen in the interval $(0,2)$ instead of the interval $(0,(1+\sqrt{5})/2)$. As usual, our analysis is established by interpreting this ADMM variant … Read more
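
For orientation, here is a sketch of where the stepsize enters a generic two-block ADMM (standard notation, not the paper's dynamic regularized variant): for $\min_{x,y}\, f(x)+g(y)$ subject to $Ax+By=b$ with penalty $\beta>0$, the relaxation stepsize $\theta$ scales the multiplier update,
\[
\lambda^{k+1} = \lambda^k + \theta\beta\,(Ax^{k+1} + By^{k+1} - b),
\]
and the classical Fortin–Glowinski analysis allows $\theta \in (0,(1+\sqrt{5})/2) \approx (0,1.618)$, so admitting the whole interval $(0,2)$ strictly enlarges the usable domain.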

ADMM for monotone operators: convergence analysis and rates

We propose in this paper a unifying scheme for several algorithms from the literature dedicated to solving monotone inclusion problems involving compositions with linear continuous operators in infinite-dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, as well as the classical ADMM numerical scheme for convex optimization problems, … Read more
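
To fix ideas, the prototypical problem in this setting (standard notation, not specific to the paper's scheme) is: given maximally monotone operators $A:\mathcal{H}\to 2^{\mathcal{H}}$ and $B:\mathcal{G}\to 2^{\mathcal{G}}$ and a linear continuous operator $L:\mathcal{H}\to\mathcal{G}$, find $x\in\mathcal{H}$ such that
\[
0 \in Ax + L^{*} B(Lx),
\]
together with the dual inclusion $0 \in -L A^{-1}(-L^{*}v) + B^{-1}v$; primal-dual algorithms generate sequences converging to a pair $(x,v)$ solving both simultaneously.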

From Data to Decisions: Distributionally Robust Optimization is Optimal

We study stochastic programs where the decision-maker cannot observe the distribution of the exogenous uncertainties but has access to a finite set of independent samples from this distribution. In this setting, the goal is to find a procedure that transforms the data into an estimate of the expected cost function under the unknown data-generating distribution, … Read more
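
As a rough template (generic notation, not the paper's specific construction), such a data-driven procedure maps the empirical distribution $\hat{P}_N$ of the samples to a distributionally robust cost estimate
\[
\hat{c}(x) = \sup_{Q:\, D(Q,\hat{P}_N)\le \varepsilon} \mathbb{E}_Q[c(x,\xi)],
\]
where $D$ is a statistical discrepancy and $\varepsilon$ a radius; the question is then which transformation of this kind is optimal.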

Radial Subgradient Descent

We present a subgradient method for solving non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm that is conceptually more natural, has notably improved convergence rates, and for which the analysis … Read more

On Relaxation of Some Customized Proximal Point Algorithms for Convex Minimization: From Variational Inequality Perspective

The proximal point algorithm (PPA) is a fundamental method for convex programming. When the PPA is applied to solve linearly constrained convex problems, we may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA is reduced and, in most cases, the subproblems even admit closed-form … Read more
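
As a reminder of the mechanism (standard notation; a sketch rather than the paper's exact scheme), a customized PPA with metric matrix $M \succ 0$ applied to a maximally monotone operator $T$ generates a predictor $\tilde{w}^k$ via
\[
0 \in T(\tilde{w}^k) + M(\tilde{w}^k - w^k),
\]
after which a relaxation step sets $w^{k+1} = w^k - \gamma\,(w^k - \tilde{w}^k)$ with $\gamma \in (0,2)$; choosing $M$ to match the constraint structure is what makes the subproblem cheap or closed-form.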

A randomized method for smooth convex minimization, motivated by probability maximization

We propose a randomized gradient method, or, from a dual viewpoint, a randomized cutting-plane method. From the primal viewpoint, our method bears a resemblance to the stochastic approximation family. But in contrast to stochastic approximation, the present method builds a model problem.

Citation: Kecskemet College, Pallasz Athene University, Izsaki ut 10, 6000 Kecskemet, Hungary; … Read more
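
To make the contrast concrete (standard constructions, not the paper's precise method): stochastic approximation steps directly along a noisy subgradient, $x^{k+1} = x^k - \alpha_k \tilde{g}^k$, whereas a cutting-plane method maintains a polyhedral model built from past subgradients,
\[
m_k(x) = \max_{i \le k}\, \big\{\, f(x^i) + \langle g^i,\, x - x^i \rangle \,\big\} \;\le\; f(x),
\]
and picks the next iterate by minimizing this model problem.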

Symmetric ADMM with Positive-Indefinite Proximal Regularization for Linearly Constrained Convex Optimization

The proximal ADMM, which adds proximal regularization terms to ADMM’s subproblems, is a popular and useful method for linearly constrained separable convex problems, especially in its linearized case. A well-known requirement for guaranteeing the convergence of the method in the literature is that the proximal regularization must be positive semidefinite. Recently it was shown by He et … Read more
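
For concreteness (standard notation; a sketch of where the proximal matrix enters, not the paper's specific algorithm), the proximally regularized $y$-subproblem reads
\[
y^{k+1} = \arg\min_y\, \Big\{ \mathcal{L}_\beta(x^{k+1}, y, \lambda^k) + \tfrac{1}{2}\|y - y^k\|_D^2 \Big\},
\]
where $\|u\|_D^2 = u^\top D u$ and $\mathcal{L}_\beta$ is the augmented Lagrangian; the linearized case corresponds to $D = \tau I - \beta B^\top B$, and the classical requirement is $D \succeq 0$.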

The Fastest Known Globally Convergent First-Order Method for Minimizing Strongly Convex Functions

We design and analyze a novel gradient-based algorithm for unconstrained convex optimization. When the objective function is $m$-strongly convex and its gradient is $L$-Lipschitz continuous, the iterates and function values converge linearly to the optimum at rates $\rho$ and $\rho^2$, respectively, where $\rho = 1-\sqrt{m/L}$. These are the fastest known guaranteed linear convergence rates for … Read more
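
A quick consequence of the stated rate, for orientation: since $\log(1/\rho) = -\log\big(1-\sqrt{m/L}\big) \ge \sqrt{m/L}$, reaching accuracy $\|x^k - x^\star\| \le \epsilon\,\|x^0 - x^\star\|$ takes at most about $\sqrt{L/m}\,\log(1/\epsilon)$ iterations. For example, with $L/m = 100$ one gets $\rho = 0.9$, i.e., a tenfold error reduction roughly every $22$ iterations.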

Foundations of gauge and perspective duality

Common numerical methods for constrained convex optimization are predicated on efficiently computing nearest points to the feasible region. The presence of a design matrix in the constraints yields feasible regions with more complex geometries. When the functional components are gauges, there is an equivalent optimization problem, the gauge dual, where the matrix appears only in the … Read more
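
For background (classical gauge duality, not the paper's general setup): a gauge is a nonnegative, positively homogeneous convex function vanishing at the origin, with polar $\kappa^\circ(y) = \sup\{\langle x, y\rangle : \kappa(x) \le 1\}$. For instance, the gauge dual of $\min_x\, \kappa(x)$ subject to $Ax = b$ is
\[
\min_y\ \kappa^\circ(A^\top y) \quad \text{subject to} \quad \langle b, y \rangle \ge 1,
\]
in which the matrix $A$ enters only through the objective.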

From Infinite to Finite Programs: Explicit Error Bounds with Applications to Approximate Dynamic Programming

We consider linear programming (LP) problems in infinite dimensional spaces that are in general computationally intractable. Under suitable assumptions, we develop an approximation bridge from the infinite-dimensional LP to tractable finite convex programs in which the performance of the approximation is quantified explicitly. To this end, we adopt the recent developments in two areas of … Read more
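
One classical instance of such an infinite-dimensional LP (background, not the paper's formulation) is the linear-programming form of the Bellman equation in dynamic programming: with Bellman operator $\mathcal{T}$ and a weighting measure $\nu$,
\[
\max_J\ \int J \, d\nu \quad \text{subject to} \quad J \le \mathcal{T}J,
\]
whose optimal solution is the value function $J^\star$; the decision variable $J$ ranges over an infinite-dimensional function space, which is what makes the problem intractable without approximation.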