Convergence analysis on a data-driven inexact proximal-indefinite stochastic ADMM

In this paper, we propose an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) to solve a class of separable convex optimization problems whose objective functions consist of two parts: an average of many smooth convex functions and a convex but potentially nonsmooth function. The smooth subproblem involved is tackled by …
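As a point of reference, the problem class described above can be written in the standard two-block linearly constrained form below; the matrices $A$, $B$, the vector $c$, and the splitting into $x$ and $z$ are generic placeholders rather than the paper's notation.

\begin{equation*}
  \min_{x,\,z}\;\; \frac{1}{n}\sum_{i=1}^{n} f_i(x) \;+\; g(z)
  \qquad \text{s.t.} \qquad A x + B z = c,
\end{equation*}

where each $f_i$ is smooth and convex, $g$ is convex but possibly nonsmooth, and a stochastic ADMM variant updates $x$ using only a sampled minibatch of the $f_i$ per iteration.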

(ε-)Efficiency in Fractional Vector Optimization

The issue of completely characterizing efficient (Pareto) solutions to a fractional vector (multiobjective or multicriteria) minimization problem, where the involved functions are convex, has not been addressed previously. Thanks to an earlier characterization of weak efficiency in difference vector optimization by El Maghri, we obtain a vectorial necessary and sufficient condition given in terms of …
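For context, a generic fractional vector minimization problem and the standard notion of efficiency read as follows; the symbols below are illustrative and not taken from the paper.

\begin{equation*}
  \min_{x \in C}\; \left( \frac{f_1(x)}{g_1(x)},\, \dots,\, \frac{f_m(x)}{g_m(x)} \right),
\end{equation*}

where a point $\bar{x} \in C$ is efficient (Pareto) if no $x \in C$ satisfies $f_i(x)/g_i(x) \le f_i(\bar{x})/g_i(\bar{x})$ for all $i$ with strict inequality for at least one $i$, and weakly efficient if no $x \in C$ makes every inequality strict.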

On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm that appears to be the first bridge between the fields of conditional gradient methods and abs-smooth optimization. Our problem setting is motivated by various applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose …
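To make the conditional-gradient side concrete, below is a minimal sketch of the classical Frank-Wolfe iteration on a smooth least-squares objective over an $\ell_1$ ball; it is not the abs-smooth algorithm proposed in the paper, and the function and parameter names are illustrative.

import numpy as np

def frank_wolfe_l1(A, b, radius=1.0, iters=100):
    # Classical Frank-Wolfe for min 0.5*||Ax - b||^2 over the l1 ball of given radius.
    # Illustrates only the conditional-gradient ingredient, not the paper's method.
    x = np.zeros(A.shape[1])
    for k in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth objective
        i = np.argmax(np.abs(grad))         # linear minimization oracle over the l1 ball:
        s = np.zeros_like(x)                # the minimizer is a signed, scaled coordinate vertex
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (k + 2)               # standard open-loop stepsize
        x = (1 - gamma) * x + gamma * s     # convex combination keeps x feasible
    return x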

A successive centralized circumcentered-reflection method for the convex feasibility problem

In this paper, we present a successive centralization process for the circumcentered-reflection scheme with several control sequences for solving the convex feasibility problem in Euclidean space. Assuming that a standard error bound holds, we prove the linear convergence of the method with the most violated constraint control sequence. Moreover, under additional smoothness assumptions on the …
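The basic geometric operation behind circumcentered-reflection schemes is the circumcenter of a few points, typically the current iterate and its successive reflections. A generic sketch of that computation is given below; it is an assumed illustration, not the successive centralization process introduced in the paper.

import numpy as np

def circumcenter(points):
    # Point in the affine hull of the given points that is equidistant from all of them.
    # Building block of circumcentered-reflection schemes; generic sketch only.
    p0 = points[0]
    V = np.stack([p - p0 for p in points[1:]], axis=1)          # spanning directions of the affine hull
    rhs = 0.5 * np.array([np.dot(p - p0, p - p0) for p in points[1:]])
    alpha, *_ = np.linalg.lstsq(V.T @ V, rhs, rcond=None)       # solve (V^T V) alpha = rhs
    return p0 + V @ alpha

In the two-set case, the points fed to this routine would typically be the current iterate together with its successive reflections onto the two sets.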

MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization

We study the combination of proximal gradient descent with multigrid for solving a class of possibly nonsmooth strongly convex optimization problems. We propose a multigrid proximal gradient method called MGProx, which accelerates the proximal gradient method with multigrid by exploiting hierarchical information of the optimization problem. MGProx applies a newly introduced adaptive restriction operator …
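For reference, the fine-level ingredient named above is the standard proximal gradient update; the sketch below uses soft-thresholding as a typical nonsmooth term and omits the multigrid restriction/prolongation machinery entirely, since that is the paper's contribution and is not reproduced here.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding), a common nonsmooth term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, grad_f, step, prox_g=prox_l1):
    # One proximal gradient update: x+ = prox_{step*g}(x - step*grad_f(x)).
    return prox_g(x - step * grad_f(x), step)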

Projection-free methods on product domains

Projection-free block-coordinate methods avoid high computational cost per iteration and at the same time exploit the particular problem structure of product domains. Frank-Wolfe-like approaches rank among the most popular ones of this type. However, as observed in the literature, there was a gap between the classical Frank-Wolfe theory and the block-coordinate case. Moreover, most of …
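A generic block-coordinate Frank-Wolfe update on a product domain looks roughly as follows; the block selection rule, step size, and oracle interfaces here are illustrative assumptions, not the rules analyzed in the paper.

import numpy as np

def block_frank_wolfe_step(blocks, partial_grads, lmos, k, rng):
    # One block-coordinate Frank-Wolfe step on a product domain C_1 x ... x C_m.
    # blocks: list of per-block iterates; partial_grads(blocks): list of per-block gradients;
    # lmos[i](g): vertex of C_i minimizing <g, .>.  Generic sketch only.
    m = len(blocks)
    i = rng.integers(m)                                   # sample one block uniformly at random
    g_i = partial_grads(blocks)[i]
    s_i = lmos[i](g_i)                                    # per-block linear minimization oracle
    gamma = 2.0 * m / (k + 2.0 * m)                       # a common step-size choice in this setting
    blocks[i] = (1 - gamma) * blocks[i] + gamma * s_i     # update the chosen block only
    return blocks

# usage: rng = np.random.default_rng(0), then call the step inside an outer loop over k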

Cutting plane reusing methods for multiple dual optimizations

We consider solving a group of dual optimization problems that share a core structure: every primal problem in the group is obtained by varying the right-hand sides of the constraints of the original primal problem, while the remaining core components of the original primal problem, such as the objective and the left-hand sides of the constraints, …
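One generic way to see why cuts can be reused across such a family is the Lagrangian-dual bound below; this is an illustrative argument with placeholder notation, not necessarily the construction developed in the paper.

\begin{equation*}
  q_{b}(\lambda) \;=\; \min_{x \in X}\, \big\{ c^{\top}x + \lambda^{\top}(b - Ax) \big\}
  \;\le\; c^{\top}\hat{x} + \lambda^{\top}(b - A\hat{x})
  \qquad \text{for every } \hat{x} \in X,
\end{equation*}

so any candidate $\hat{x}$ computed while maximizing the dual for one right-hand side $b$ immediately yields a valid cut for the dual function associated with any other right-hand side $b'$, simply by replacing $b$ with $b'$ in the affine upper bound.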

Semi-Infinite Generalized Disjunctive and Mixed Integer Convex Programs with(out) Uncertainty

In this paper, we introduce semi-infinite generalized disjunctive programs that are defined by logical propositions along with disjunctions of sets of logical equations and an infinite number of algebraic inequalities. We denote these programs by SIGDPs. For SIGDPs with linear and convex inequalities, we present new reformulations: semi-infinite mixed-binary/disjunctive linear programs and semi-infinite mixed-binary/disjunctive convex programs, …
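Schematically, such a program can be pictured in standard generalized disjunctive programming notation as follows; the symbols are generic and do not reproduce the paper's formulation.

\begin{align*}
  \min_{x,\,Y} \quad & f(x) \\
  \text{s.t.} \quad & \bigvee_{i \in D_k} \big[\, Y_{ik} \;\wedge\; g_{ik}(x,t) \le 0 \ \ \forall\, t \in T \,\big], \qquad k \in K, \\
  & \Omega(Y) = \text{True}, \qquad x \in X, \qquad Y_{ik} \in \{\text{True}, \text{False}\},
\end{align*}

where each disjunct pairs a Boolean literal $Y_{ik}$ with a family of algebraic inequalities indexed by $t \in T$, $\Omega(Y)$ collects the logical propositions, and the semi-infinite character comes from the index set $T$ being infinite.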

On an iteratively reweighted linesearch-based algorithm for nonconvex composite optimization

In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, which is obtained by combining the iteratively reweighted scheme with a finite number of forward–backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward–backward methods, since it can also be applied to minimize …
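The inner ingredient named above, a forward–backward step with a backtracking linesearch, can be sketched generically as below; the acceptance test shown is the standard quadratic upper-bound condition of proximal gradient methods, and the outer iteratively reweighted scheme of the paper is not reproduced.

def forward_backward_backtracking(x, f, grad_f, prox_g, t0=1.0, beta=0.5):
    # One forward-backward step with stepsize backtracking on numpy-array iterates.
    # Generic sketch only; not the paper's algorithm.
    grad = grad_f(x)
    fx = f(x)
    t = t0
    while True:
        z = prox_g(x - t * grad, t)                       # forward (gradient) then backward (prox) step
        d = z - x
        if f(z) <= fx + grad @ d + (d @ d) / (2.0 * t):   # quadratic upper-bound test
            return z, t                                   # accept the trial point and stepsize
        t *= beta                                         # otherwise shrink the stepsize and retry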

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We consider convex, smooth objectives whose gradients need not be globally Lipschitz continuous. Our new method requires only a locally Lipschitz gradient, yet still achieves the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
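The excerpt does not specify the new stepsize rule, so the sketch below shows classical gradient descent with an Armijo backtracking stepsize as a baseline, which likewise probes the objective only locally and therefore needs no global Lipschitz constant; it is a generic reference implementation, not the stepsize proposed in the paper.

import numpy as np

def gradient_descent_armijo(f, grad_f, x0, iters=200, t0=1.0, beta=0.5, c=0.5):
    # Classical gradient descent with Armijo backtracking; shown only as a baseline,
    # not the novel stepsize proposed in the paper.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        if np.linalg.norm(g) == 0.0:                      # already stationary
            break
        t = t0
        while f(x - t * g) > f(x) - c * t * (g @ g):      # sufficient-decrease (Armijo) test
            t *= beta
        x = x - t * g
    return x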