Monotonicity and Complexity of Multistage Stochastic Variational Inequalities

In this paper, we consider multistage stochastic variational inequalities (MSVIs). First, we present multistage stochastic programs and multistage multi-player noncooperative games as source problems. We then derive monotonicity properties of MSVIs under less restrictive conditions. Finally, the polynomial rate of convergence with respect to sample sizes between the original problem and its …

An Adaptive Patch Approximation Algorithm for Bicriteria Convex Mixed-Integer Problems

Pareto frontiers of bicriteria continuous convex problems can be efficiently computed and optimal theoretical performance bounds have been established. In the case of bicriteria mixed-integer problems, the approximation of the Pareto frontier becomes, however, significantly harder. In this paper, we propose a new algorithm for approximating the Pareto frontier of bicriteria mixed-integer programs with convex …
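
The paper's patch algorithm for the mixed-integer case is not described in the abstract; as a minimal sketch of why the continuous convex case is easy, here is the classical weighted-sum scalarization, under which every minimizer of $w f_1 + (1-w) f_2$ with $w \in (0,1)$ is Pareto-optimal. The toy objectives `f1`, `f2` are illustrative assumptions, not from the paper.

```python
import numpy as np

# Weighted-sum scalarization sketch for a bicriteria convex problem:
# each minimizer of w*f1 + (1-w)*f2, 0 < w < 1, is Pareto-optimal.
# The scalar objectives below are toy examples with a closed-form minimizer.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

frontier = []
for w in np.linspace(0.05, 0.95, 19):
    x = 2.0 * w - 1.0  # closed-form minimizer of w*f1 + (1-w)*f2
    frontier.append((f1(x), f2(x)))

# Along a Pareto frontier, improving one criterion must worsen the other.
vals1 = [p[0] for p in frontier]
vals2 = [p[1] for p in frontier]
```

For mixed-integer problems this scan breaks down because the frontier is non-convex and may be disconnected, which is why dedicated approximation algorithms such as the one proposed here are needed.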

Optimal Transport Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes

We consider optimal transport based distributionally robust optimization (DRO) problems with locally strongly convex transport cost functions and affine decision rules. Under conventional convexity assumptions on the underlying loss function, we obtain structural results about the value function, the optimal policy, and the worst-case optimal transport adversarial model. These results expose a rich structure embedded …

Global convergence of the Heavy-ball method for convex optimization

This paper establishes global convergence and provides global bounds of the convergence rate of the Heavy-ball method for convex optimization problems. When the objective function has Lipschitz-continuous gradient, we show that the Cesàro average of the iterates converges to the optimum at a rate of $O(1/k)$ where k is the number of iterations. When …
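
For reference, the Heavy-ball iteration is $x_{k+1} = x_k - \alpha \nabla f(x_k) + \beta (x_k - x_{k-1})$. A minimal sketch on a convex quadratic, with illustrative step sizes (not the tuning analyzed in the paper), returning both the last iterate and the Cesàro average whose $O(1/k)$ rate the abstract refers to:

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, iters):
    """Heavy-ball method; returns the last iterate and the Cesàro average."""
    x_prev, x = x0.copy(), x0.copy()
    avg = np.zeros_like(x0)
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)  # gradient + momentum
        x_prev, x = x, x_next
        avg += x
    return x, avg / iters

# Toy smooth convex objective f(x) = 0.5*x'Ax - b'x with A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)  # unique minimizer

x_last, x_avg = heavy_ball(grad, np.zeros(2), alpha=0.2, beta=0.5, iters=500)
```

Here `alpha=0.2` satisfies the stability condition $\alpha < 2(1+\beta)/L$ for this problem's Lipschitz constant, so the iterates converge; the averaged iterate is the quantity the paper's $O(1/k)$ bound concerns.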

Error estimates for the Euler discretization of an optimal control problem with first-order state constraints

We study the error introduced in the solution of an optimal control problem with first-order state constraints, for which the trajectories are approximated with a classical Euler scheme. We obtain order one approximation results in the $L^\infty$ norm (as opposed to the order 2/3 obtained in the literature). We assume either a strong second …
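
The state-constrained control problem itself is beyond a snippet, but the discretization whose error is analyzed is the classical forward Euler scheme. A minimal sketch, with an illustrative ODE (not from the paper) demonstrating the first-order, $O(h)$, behavior: halving the step roughly halves the error.

```python
import math

def euler(f, x0, t0, t1, n):
    """Classical forward Euler scheme for x'(t) = f(t, x(t)) on [t0, t1]."""
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x += h * f(t, x)
        t += h
    return x

# Illustrative order check on x' = -x, x(0) = 1; exact solution x(1) = e^{-1}.
exact = math.exp(-1.0)
err_n = abs(euler(lambda t, x: -x, 1.0, 0.0, 1.0, 100) - exact)
err_2n = abs(euler(lambda t, x: -x, 1.0, 0.0, 1.0, 200) - exact)
ratio = err_n / err_2n  # close to 2 for a first-order method
```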

Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems

In this paper we propose two dual decomposition methods based on inexact dual gradient information for solving large-scale smooth convex optimization problems. The complicating constraints are moved into the cost using Lagrange multipliers. The dual problem is solved by inexact first-order methods based on approximate gradients, and we prove a sublinear rate of convergence …
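
A minimal sketch of the setup, not the paper's methods: the coupling constraint $Ax = b$ is dualized, the inner problem is solved (here in closed form for a toy quadratic), and the dual variable is updated with a gradient step. A decaying perturbation of the inner minimizer mimics the inexact dual gradients the paper analyzes; all problem data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

# Toy problem: minimize 0.5*||x||^2 subject to the coupling constraint Ax = b.
# Dualizing gives the inner minimizer x(lam) = -A'lam in closed form.
lam = np.zeros(3)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # dual step below 2/L, L = ||A||_2^2
for k in range(2000):
    x = -A.T @ lam                                 # inner problem (exact part)
    x += 1e-3 / (k + 1) * rng.standard_normal(6)   # decaying inexactness
    lam += alpha * (A @ x - b)                     # inexact dual gradient step

violation = np.linalg.norm(A @ x - b)  # primal feasibility residual
```

With exact inner solutions this is plain dual gradient ascent; the point of the sketch is that a summable (decaying) inexactness still drives the constraint violation to a small level, which is the regime the rate analysis addresses.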

Greedy approximation in convex optimization

We study sparse approximate solutions to convex optimization problems. In many engineering applications, researchers are interested in an approximate solution of an optimization problem expressed as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. …

Greedy expansions in convex optimization

This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began adjusting greedy-type algorithms from nonlinear approximation to the task of finding sparse solutions of convex optimization problems. There we modified the three greedy algorithms most popular in nonlinear approximation in Banach spaces: the Weak Chebyshev Greedy …

A Note on the Behavior of the Randomized Kaczmarz Algorithm of Strohmer and Vershynin

In a recent paper by Strohmer and Vershynin (J. Fourier Anal. Appl. 15:262–278, 2009) a "randomized Kaczmarz algorithm" is proposed for solving consistent systems of linear equations $\{\langle a_i, x\rangle = b_i\}_{i=1}^{m}$. In that algorithm the next equation to be used in an iterative Kaczmarz process is selected with a probability proportional to $\|a_i\|^2$. …
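
A minimal sketch of the Strohmer–Vershynin sampling rule discussed here: row $i$ is drawn with probability proportional to $\|a_i\|^2$, and the iterate is projected onto the hyperplane $\langle a_i, x\rangle = b_i$. The Gaussian test system is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true  # consistent system by construction

# Randomized Kaczmarz: sample row i with probability ||a_i||^2 / ||A||_F^2,
# then project the current iterate onto the hyperplane <a_i, x> = b_i.
row_norms_sq = np.sum(A**2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()

x = np.zeros(10)
for _ in range(2000):
    i = rng.choice(len(b), p=probs)
    x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
```

Each step is an orthogonal projection, so the error norm never increases; the weighted sampling is what yields the expected linear convergence rate established by Strohmer and Vershynin, and its behavior is the subject of this note.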

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Lojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. In contrast to existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, …