Duality in convex stochastic optimization

This paper studies duality and optimality conditions in general convex stochastic optimization problems introduced by Rockafellar and Wets in \cite{rw76}. We derive an explicit dual problem in terms of two dual variables, one of which is the shadow price of information while the other gives the marginal cost of a perturbation, much like in …
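For context, here is a minimal sketch of the classical perturbation-duality scheme that the phrase "marginal cost of a perturbation" usually refers to; the notation below is the standard Rockafellar setup, not taken from this paper. Given a jointly convex bifunction \(F(x,u)\), one has

\[
\varphi(u) \;=\; \inf_x F(x,u), \qquad
\text{(P)}\;\; \inf_x F(x,0), \qquad
\text{(D)}\;\; \sup_y \,\bigl\{-F^*(0,y)\bigr\},
\]

and, under strong duality, a dual optimal \(y\) satisfies \(y \in \partial\varphi(0)\), i.e. it prices the perturbation \(u\) at the margin. The second dual variable mentioned in the abstract (the shadow price of information) has no counterpart in this deterministic sketch.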

Generalizations of doubly nonnegative cones and their comparison

In this study, we examine various extensions of the doubly nonnegative (DNN) cone, which is frequently used in completely positive programming (CPP) to achieve a tighter relaxation than the positive semidefinite cone. To provide a tighter relaxation for generalized CPP (GCPP) than the positive semidefinite cone, inner-approximation hierarchies of the generalized copositive cone are exploited to obtain …
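As a concrete reference point, a minimal numpy sketch of membership in the (ordinary, not generalized) DNN cone, i.e. the set of symmetric matrices that are both positive semidefinite and entrywise nonnegative; the function name and tolerance are our own:

```python
import numpy as np

def is_doubly_nonnegative(A, tol=1e-10):
    """Check membership in the DNN cone: symmetric, PSD, entrywise >= 0."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):   # must be symmetric
        return False
    if (A < -tol).any():                    # entrywise nonnegativity
        return False
    return np.linalg.eigvalsh(A).min() >= -tol   # PSD: all eigenvalues >= 0

# Example: a completely positive matrix B B^T with B >= 0 is always DNN.
B = np.abs(np.random.rand(4, 3))
print(is_doubly_nonnegative(B @ B.T))  # True
```

The DNN cone contains the completely positive cone, which is why it yields a tractable outer relaxation for CPP.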

A generalized block-iterative projection method for the common fixed point problem induced by cutters

The block-iterative projections (BIP) method of Aharoni and Censor [Block-iterative projection methods for parallel computation of solutions to convex feasibility problems, Linear Algebra and its Applications 120 (1989), 165-175] is an iterative process for asymptotically finding a point in the nonempty intersection of a family of closed convex subsets. It employs orthogonal projections onto the …
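A toy numpy sketch of the block-iterative idea for halfspaces: each sweep takes a weighted average of the orthogonal projections onto the sets of the current block (equal weights and a fixed block cycle here; the general method allows varying blocks and weights). The function names and the example problem are our own:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the halfspace {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def block_iterative_projections(x, halfspaces, blocks, n_iter=200):
    """Cycle through blocks; take a simultaneous (averaged) projection step
    over the sets of the current block."""
    for k in range(n_iter):
        block = blocks[k % len(blocks)]
        projs = [project_halfspace(x, *halfspaces[i]) for i in block]
        x = np.mean(projs, axis=0)
    return x

# Feasibility problem: x1 <= 1, x2 <= 1, x1 + x2 <= 1.5
H = [(np.array([1.0, 0.0]), 1.0),
     (np.array([0.0, 1.0]), 1.0),
     (np.array([1.0, 1.0]), 1.5)]
print(block_iterative_projections(np.array([3.0, 2.0]), H, blocks=[[0, 1], [2]]))
```

The attraction of BIP-type schemes is that the projections within a block are independent and can be computed in parallel.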

Convexity and continuity of specific set-valued maps and their extremal value functions

In this paper, we study several classes of set-valued maps, which can be used in set-valued optimization and its applications, together with their respective maximum and minimum value functions. The definitions of these maps are based on scalar-valued, vector-valued, and cone-valued maps. Moreover, we consider the extremal value functions obtained when optimizing linear functionals …
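One standard reading of such extremal value functions, with notation assumed rather than taken from the paper: for a set-valued map \(F\) and a linear functional \(\ell\),

\[
\varphi_{\min}(x) \;=\; \inf_{y \in F(x)} \langle \ell, y\rangle, \qquad
\varphi_{\max}(x) \;=\; \sup_{y \in F(x)} \langle \ell, y\rangle.
\]

When the graph of \(F\) is convex, \(\varphi_{\min}\) is convex, being the infimal projection of a linear function over a convex set.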

Convergence Results for Primal-Dual Algorithms in the Presence of Adjoint Mismatch

Most optimization problems arising in imaging science involve high-dimensional linear operators and their adjoints. In implementations of these operators, approximations may be introduced for various practical reasons (e.g., memory limitations, computational cost, convergence speed), leading to an adjoint mismatch. This occurs in the X-ray tomographic inverse problems found in Computed Tomography (CT), where the …
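A minimal numpy sketch of the mismatch setting on a least-squares toy problem: a Chambolle-Pock-style primal-dual iteration in which the dual-to-primal step uses a surrogate adjoint instead of the true transpose. This is an illustration of the setting, not the paper's algorithm or step-size conditions:

```python
import numpy as np

def pdhg_mismatched(K, Kt_approx, b, n_iter=2000, tau=0.1, sigma=0.1):
    """Primal-dual iteration for min_x 0.5*||K x - b||^2, with the primal
    step using an *approximate* adjoint Kt_approx instead of K.T."""
    m, n = K.shape
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of 0.5*||. - b||^2
        y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
        x_new = x - tau * (Kt_approx @ y)   # mismatched adjoint used here
        x_bar = 2 * x_new - x               # extrapolation
        x = x_new
    return x

# Toy example: the surrogate adjoint is a slightly perturbed transpose.
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 10))
Kt = K.T + 1e-3 * rng.standard_normal((10, 30))
x_star = rng.standard_normal(10)
x_hat = pdhg_mismatched(K, Kt, K @ x_star)
print(np.linalg.norm(x_hat - x_star))   # small when the mismatch is mild
```

In CT the mismatch typically arises because the forward projector and the back-projector are implemented by different discretizations, so the back-projector is not the exact transpose of the projector.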

A family of accelerated inexact augmented Lagrangian methods with applications to image restoration

In this paper, we focus on a class of convex optimization problems subject to equality or inequality constraints and develop an Accelerated Inexact Augmented Lagrangian Method (AI-ALM). Different relative error criteria are designed to solve the subproblems of AI-ALM inexactly, and the widely used relaxation step is exploited to accelerate convergence. By a …
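For orientation, a minimal numpy sketch of an inexact augmented Lagrangian loop on an equality-constrained quadratic; the inner solve is stopped at a shrinking absolute gradient tolerance, a simple stand-in for the paper's relative error criteria, and no acceleration or relaxation step is included:

```python
import numpy as np

def inexact_alm(c, A, b, rho=10.0, outer=30, q=0.5):
    """Augmented Lagrangian sketch for min 0.5*||x - c||^2 s.t. Ax = b,
    with each subproblem solved inexactly by gradient descent."""
    m, n = A.shape
    x, lam, tol = np.zeros(n), np.zeros(m), 1.0
    L = 1.0 + rho * np.linalg.norm(A, 2) ** 2   # gradient Lipschitz constant
    for _ in range(outer):
        while True:                              # inexact inner solve
            g = (x - c) + A.T @ (lam + rho * (A @ x - b))
            if np.linalg.norm(g) <= tol:
                break
            x = x - g / L
        lam = lam + rho * (A @ x - b)            # multiplier update
        tol *= q                                 # tighten the inner tolerance
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8)); b = rng.standard_normal(3); c = rng.standard_normal(8)
x = inexact_alm(c, A, b)
print(np.linalg.norm(A @ x - b))   # feasibility residual shrinks with outer iterations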

Spectral Projected Subgradient Method for Nonsmooth Convex Optimization Problems

We consider constrained optimization problems with a nonsmooth objective function in the form of a mathematical expectation. Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with a spectral coefficient in order to provide a suitable direction that improves …
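A numpy sketch of the ingredients named in the abstract, on the toy objective \(f(x) = \mathbb{E}|a^\top x - b|\) over a box: an SAA subgradient from a growing sample, a safeguarded spectral (Barzilai-Borwein-type) step, and a projection. The sample-growth and safeguard rules are our own illustrative choices, not the paper's:

```python
import numpy as np

def spectral_projected_subgradient(a_data, b_data, x0, lb, ub, n_iter=100):
    """SAA subgradient + safeguarded spectral step + box projection."""
    rng = np.random.default_rng(2)
    proj = lambda z: np.clip(z, lb, ub)
    x, x_old, g_old = x0.copy(), None, None
    for k in range(n_iter):
        N = min(len(b_data), 10 * (k + 1))          # variable sample size
        idx = rng.choice(len(b_data), N, replace=False)
        A, b = a_data[idx], b_data[idx]
        g = A.T @ np.sign(A @ x - b) / N            # SAA subgradient
        if g_old is None:
            step = 1.0
        else:
            s, y = x - x_old, g - g_old
            step = np.clip(s @ s / (s @ y), 1e-4, 1e4) if s @ y > 0 else 1.0
        x_old, g_old = x, g
        x = proj(x - step * g)                      # spectral projected step
    return x

rng = np.random.default_rng(0)
a_data = rng.standard_normal((1000, 5)); x_true = np.ones(5)
b_data = a_data @ x_true
x = spectral_projected_subgradient(a_data, b_data, np.zeros(5), -2.0, 2.0)
print(np.abs(x - x_true).max())   # roughly approaches the minimizer
```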

Limits of eventual families of sets with application to algorithms for the common fixed point problem

We present an abstract framework for asymptotic analysis of convergence based on the notion of eventual families of sets, which we define. A family of subsets of a given set is called here an “eventual family” if it is upper hereditary with respect to inclusion. We define accumulation points of eventual families in a Hausdorff …
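Formalizing the quoted definition (our notation): a family \(\mathcal{F} \subseteq 2^X\) is an eventual family when

\[
\bigl(A \in \mathcal{F} \ \text{ and } \ A \subseteq B \subseteq X\bigr)
\;\Longrightarrow\; B \in \mathcal{F}.
\]

A motivating example is the family of all subsets of \(X\) that contain some tail \(\{x_k : k \ge n\}\) of a given sequence; it is closed under taking supersets, hence eventual in this sense.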

A line search based proximal stochastic gradient algorithm with dynamical variance reduction

Many optimization problems arising from machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term …
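A numpy sketch of such a proximal stochastic gradient loop on a lasso-type toy problem: a minibatch gradient of the empirical risk, an Armijo-type backtracking line search on the sampled loss, and an l1 prox step; the minibatch grows over the iterations, a simple stand-in for the paper's dynamical variance reduction. All rules below are illustrative choices, not the paper's:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sgd_linesearch(A, b, lam=0.1, n_iter=200):
    """Minibatch proximal gradient with backtracking on the sampled loss."""
    rng = np.random.default_rng(3)
    N, n = A.shape
    x = np.zeros(n)
    for k in range(n_iter):
        m = min(N, 8 * (k + 1))                 # growing minibatch
        idx = rng.choice(N, m, replace=False)
        Ai, bi = A[idx], b[idx]
        r = Ai @ x - bi
        f = 0.5 * (r @ r) / m
        g = Ai.T @ r / m
        alpha = 1.0
        while True:                             # backtracking line search
            x_try = soft_threshold(x - alpha * g, alpha * lam)
            r_try = Ai @ x_try - bi
            d = x_try - x
            if 0.5 * (r_try @ r_try) / m <= f + g @ d + (d @ d) / (2 * alpha):
                break
            alpha *= 0.5
        x = x_try
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 30))
x_true = np.zeros(30); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(400)
print(np.round(prox_sgd_linesearch(A, b)[:6], 2))   # approximately sparse
```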

Adaptive Third-Order Methods for Composite Convex Optimization

In this paper we propose third-order methods for composite convex optimization problems in which the smooth part is a three-times continuously differentiable function with Lipschitz continuous third-order derivatives. The methods are adaptive in the sense that they do not require the knowledge of the Lipschitz constant. Trial points are computed by the inexact minimization of …
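For reference, a standard third-order regularized model of the kind such composite methods minimize at each iteration (notation assumed, and the constant in the regularizer varies across papers):

\[
x_{k+1} \;\approx\; \arg\min_y \Bigl\{\, T_3(y; x_k) + \frac{M_k}{4}\,\|y - x_k\|^4 + h(y) \Bigr\},
\qquad
T_3(y; x) \;=\; \sum_{j=0}^{3} \frac{1}{j!}\, \nabla^j f(x)[y - x]^j,
\]

where \(h\) is the possibly nonsmooth composite part and the regularization parameter \(M_k\) plays the role of an estimate of the third-order Lipschitz constant: an adaptive scheme increases \(M_k\) when a trial point is rejected and decreases it after acceptance, which is how such methods avoid knowing the Lipschitz constant in advance.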