Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization … Read more
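For intuition about the setting (a rough sketch, not the paper's algorithm or sequence results), the snippet below runs gradient descent in which each update may read a stale iterate with delay bounded by tau; the names f_grad, step, and tau are illustrative assumptions.

```python
import numpy as np

def delayed_gradient_descent(f_grad, x0, step, tau, iters, rng=None):
    """Gradient descent where each step may use a stale iterate x[k - d], d <= tau,
    mimicking the bounded-delay model of asynchronous iterations."""
    rng = np.random.default_rng() if rng is None else rng
    history = [np.asarray(x0, dtype=float)]
    for k in range(iters):
        d = rng.integers(0, min(tau, k) + 1)   # random delay, bounded by tau
        stale = history[k - d]                 # stale read, as in async models
        history.append(history[k] - step * f_grad(stale))
    return history[-1]

# Example: f(x) = 0.5 * ||x||^2, whose gradient is x; iterates shrink to 0.
x_final = delayed_gradient_descent(lambda x: x, x0=np.ones(3),
                                   step=0.1, tau=5, iters=200)
```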

The alternating simultaneous Halpern-Lions-Wittmann-Bauschke algorithm for finding the best approximation pair for two disjoint intersections of convex sets

Given two nonempty and disjoint intersections of closed and convex subsets, we look for a best approximation pair relative to them, i.e., a pair of points, one in each intersection, attaining the minimum distance between the disjoint intersections. We propose an iterative process based on projections onto the subsets which generate the intersections. The process … Read more
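As the simplest instance of the idea (plain alternating projections between two disjoint closed balls, not the alternating simultaneous HLWB process proposed in the paper), the sketch below converges to a best approximation pair; the set choices are illustrative.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection of x onto the ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

c1, r1 = np.array([0.0, 0.0]), 1.0   # first ball
c2, r2 = np.array([4.0, 0.0]), 1.0   # second, disjoint ball

x = np.array([0.0, 3.0])
for _ in range(100):
    y = project_ball(x, c2, r2)      # point in the second set
    x = project_ball(y, c1, r1)      # point in the first set

print(x, y, np.linalg.norm(x - y))  # approx (1,0) and (3,0), distance 2
```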

Numerical Methods for Convex Multistage Stochastic Optimization

Optimization problems involving sequential decisions in a stochastic environment have been studied in Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP). In this paper we concentrate mainly on the SP and SOC modelling approaches. In these frameworks there are natural situations where the considered problems are convex. The classical approach to sequential optimization … Read more
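For reference, the dynamic-programming (cost-to-go) recursion underlying multistage SP can be written as follows; the notation is generic rather than the paper's:

\[
Q_t(x_{t-1}, \xi_t) \;=\; \min_{x_t \in \mathcal{X}_t(x_{t-1}, \xi_t)}
  \Bigl\{\, f_t(x_t, \xi_t) \;+\;
  \mathbb{E}\bigl[\, Q_{t+1}(x_t, \xi_{t+1}) \mid \xi_{[t]} \,\bigr] \Bigr\},
\qquad t = T, T-1, \dots, 1,
\]

with $Q_{T+1} \equiv 0$. Convexity of the stage costs $f_t$ and of the feasible sets $\mathcal{X}_t$ makes each cost-to-go function $Q_t(\cdot, \xi_t)$ convex, which is the structure exploited by numerical methods for this problem class.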

Balancing Communication and Computation in Gradient Tracking Algorithms for Decentralized Optimization

Gradient tracking methods have emerged as one of the most popular approaches for solving decentralized optimization problems over networks. In this setting, each node in the network has a portion of the global objective function, and the goal is to collectively optimize this function. At every iteration, gradient tracking methods perform two operations (steps): (1) … Read more
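The two operations can be sketched as follows; this is the generic gradient tracking recursion with illustrative names (W for the doubly stochastic mixing matrix, grads for the nodes' local gradients), not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def gradient_tracking(W, grads, x0, step, iters):
    """x mixes over the network and descends; y tracks the network-average gradient."""
    n, d = x0.shape
    x = x0.copy()
    y = np.stack([grads[i](x[i]) for i in range(n)])    # initialize tracker
    for _ in range(iters):
        g_old = np.stack([grads[i](x[i]) for i in range(n)])
        x = W @ x - step * y                            # (1) communicate + descend
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g_old                       # (2) track average gradient
    return x

# Example: two nodes with f_i(x) = 0.5 * (x - b_i)^2, b = (1, -1);
# the consensus optimum of the average objective is x = 0.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
grads = [lambda x: x - 1.0, lambda x: x + 1.0]
x = gradient_tracking(W, grads, x0=np.zeros((2, 1)), step=0.1, iters=300)
```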

Convergence Analysis of a Data-driven Inexact Proximal-indefinite Stochastic ADMM

In this paper, we propose an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) to solve a class of separable convex optimization problems whose objective functions consist of two parts: one is an average of many smooth convex functions and the other is a convex but potentially nonsmooth function. The involved smooth subproblem is tackled by … Read more
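As a rough sketch of the template (a generic linearized stochastic ADMM with a minibatch x-step, not the paper's IPS-ADMM update), consider $\min_x \frac{1}{N}\sum_i \frac{1}{2}(a_i^{\top}x - b_i)^2 + \lambda\|z\|_1$ subject to $x = z$; all names below are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_admm(A, b, lam, rho, eta, iters, batch=8, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)  # u: scaled dual variable
    for _ in range(iters):
        idx = rng.integers(0, N, size=batch)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # minibatch gradient of the smooth part
        x = x - eta * (g + rho * (x - z + u))          # linearized (inexact) x-update
        z = soft_threshold(x + u, lam / rho)           # exact z-update (prox of the l1 term)
        u = u + x - z                                  # dual ascent
    return x, z

A = np.random.default_rng(0).standard_normal((200, 10))
b = A @ (np.arange(10.0) / 10.0)
x, z = stochastic_admm(A, b, lam=0.1, rho=1.0, eta=0.01, iters=2000)
```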

A successive centralized circumcentered-reflection method for the convex feasibility problem

In this paper, we present a successive centralization process for the circumcentered-reflection scheme with several control sequences for solving the convex feasibility problem in Euclidean space. Assuming that a standard error bound holds, we prove the linear convergence of the method with the most violated constraint control sequence. Moreover, under additional smoothness assumptions on the … Read more
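For intuition, one circumcentered-reflection step for two hyperplanes can be sketched as below (the generic CRM iteration, without the successive centralization or control sequences of the paper); all names are illustrative.

```python
import numpy as np

def reflect(x, a, c):
    """Reflection of x across the hyperplane {y : a^T y = c}."""
    return x - 2.0 * (a @ x - c) / (a @ a) * a

def circumcenter(points):
    """Circumcenter of the points within their affine hull:
    solve 2 G t = diag(G) with G the Gram matrix of the differences."""
    p0, rest = points[0], np.array(points[1:])
    V = rest - p0                      # rows span the affine hull
    G = V @ V.T
    t = np.linalg.lstsq(2.0 * G, np.diag(G), rcond=None)[0]
    return p0 + V.T @ t

a1, c1 = np.array([1.0, 0.0]), 0.0    # hyperplane x1 = 0
a2, c2 = np.array([1.0, 1.0]), 0.0    # hyperplane x1 + x2 = 0

x = np.array([3.0, 1.0])
for _ in range(10):
    y1 = reflect(x, a1, c1)
    y2 = reflect(y1, a2, c2)
    x = circumcenter([x, y1, y2])     # circumcenter of {x, R1 x, R2 R1 x}

print(x)   # lands in the intersection (the origin here)
```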

Projection-free methods on product domains

Projection-free block-coordinate methods avoid high computational cost per iteration and at the same time exploit the particular problem structure of product domains. Frank-Wolfe-like approaches rank among the most popular methods of this type. However, as observed in the literature, there was a gap between the classical Frank-Wolfe theory and the block-coordinate case. Moreover, most of … Read more
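A minimal block-coordinate Frank-Wolfe sketch on a product of unit simplices, with the classical $2/(k+2)$ stepsize; this illustrates the method class, not the paper's specific scheme, and all names are illustrative.

```python
import numpy as np

def lmo_simplex(grad_block):
    """Linear minimization oracle over the unit simplex: returns a vertex."""
    s = np.zeros_like(grad_block)
    s[np.argmin(grad_block)] = 1.0
    return s

def block_frank_wolfe(grad, x_blocks, iters, rng=None):
    """grad(x) returns the list of per-block gradients at x."""
    rng = np.random.default_rng() if rng is None else rng
    x = [b.copy() for b in x_blocks]
    for k in range(iters):
        i = rng.integers(len(x))                 # pick one block at random
        s = lmo_simplex(grad(x)[i])              # cheap per-block LMO call
        gamma = 2.0 / (k + 2.0)
        x[i] = (1 - gamma) * x[i] + gamma * s    # update only block i
    return x

# Example: f(x) = sum_i 0.5 * ||x_i - t_i||^2 over two simplex blocks.
t = [np.array([0.2, 0.8]), np.array([0.5, 0.5])]
grad = lambda x: [x[i] - t[i] for i in range(2)]
x0 = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
x = block_frank_wolfe(grad, x0, iters=500)
```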

Semi-Infinite Generalized Disjunctive and Mixed Integer Convex Programs with(out) Uncertainty

In this paper, we introduce semi-infinite generalized disjunctive programs that are defined by logical propositions along with disjunctions of sets of logical equations and an infinite number of algebraic inequalities. We denote these programs by SIGDPs. For SIGDPs with linear and convex inequalities, we present new reformulations: semi-infinite mixed-binary/disjunctive linear programs and semi-infinite mixed-binary/disjunctive convex programs, … Read more
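In generic generalized-disjunctive-programming notation (an illustrative assumption, not the paper's exact formulation), a SIGDP can be written as

\[
\begin{aligned}
\min_{x \in X} \quad & c^{\top} x \\
\text{s.t.} \quad & \bigvee_{i \in D_k} \Bigl[\, Y_{ik} \;\wedge\;
   g_{ik}(x, t) \le 0 \ \ \forall\, t \in T \,\Bigr], \qquad k = 1, \dots, K, \\
& \Omega(Y) = \mathrm{True}, \qquad Y_{ik} \in \{\mathrm{True}, \mathrm{False}\},
\end{aligned}
\]

where $T$ is an infinite index set (so each disjunct carries infinitely many inequalities) and $\Omega(Y)$ collects the logical propositions linking the Boolean variables.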

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We are concerned with convex and smooth objectives without the globally Lipschitz gradient condition. Our new method needs only a locally Lipschitz gradient but still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By … Read more
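The paper's stepsize rule is not reproduced in this excerpt; as a stand-in, the sketch below adapts the step with a standard local curvature estimate $\|g_k - g_{k-1}\| / \|x_k - x_{k-1}\|$, which captures the "locally Lipschitz only" flavor but is not the stepsize proposed in the paper.

```python
import numpy as np

def adaptive_gd(f_grad, x0, iters, step0=1e-3):
    """Gradient descent with a step ~ half the inverse of an estimated
    local Lipschitz constant of the gradient."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = f_grad(x_prev)
    x = x_prev - step0 * g_prev                    # small bootstrap step
    for _ in range(iters):
        g = f_grad(x)
        dx, dg = x - x_prev, g - g_prev
        L_local = np.linalg.norm(dg) / max(np.linalg.norm(dx), 1e-12)
        step = 0.5 / max(L_local, 1e-12)           # adapt to local curvature
        x_prev, g_prev = x, g
        x = x - step * g
    return x

# Example: f(x) = x^4, whose gradient 4x^3 is only locally Lipschitz.
x = adaptive_gd(lambda x: 4.0 * x**3, x0=np.array([2.0]), iters=200)
```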

The Jordan algebraic structure of the rotated quadratic cone

In this paper, we look into the rotated quadratic cone and analyze its algebraic structure. We construct an algebra associated with this cone and show that this algebra is a Euclidean Jordan algebra (EJA) with a certain inner product. We also demonstrate some spectral and algebraic characteristics of this EJA. The rotated quadratic cone is … Read more
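For concreteness, the rotated quadratic cone $\{x : 2x_1x_2 \ge \|x_{3:n}\|^2,\; x_1, x_2 \ge 0\}$ and its standard orthogonal rotation onto the second-order cone can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def in_rotated_cone(x, tol=1e-12):
    """Membership test for the rotated quadratic cone."""
    x = np.asarray(x, dtype=float)
    return (x[0] >= -tol and x[1] >= -tol
            and 2.0 * x[0] * x[1] + tol >= np.dot(x[2:], x[2:]))

def rotate_to_soc(x):
    """Orthogonal map onto the second-order cone {y : y1 >= ||y_{2:n}||}:
    (x1, x2) -> ((x1 + x2)/sqrt(2), (x1 - x2)/sqrt(2)), other coordinates fixed."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[0] = (x[0] + x[1]) / np.sqrt(2.0)
    y[1] = (x[0] - x[1]) / np.sqrt(2.0)
    return y

x = np.array([1.0, 2.0, 1.5, 0.5])   # 2*1*2 = 4 >= 1.5^2 + 0.5^2 = 2.5
y = rotate_to_soc(x)
print(in_rotated_cone(x), y[0] >= np.linalg.norm(y[1:]))   # True True
```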