On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm that appears to be the first bridge between the fields of conditional gradient methods and abs-smooth optimization. Our problem setting is motivated by various applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose …
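
For readers unfamiliar with conditional gradient methods, here is a minimal sketch of a classical Frank-Wolfe step on an $\ell_1$-ball, one of the settings the abstract mentions. The names (`lmo_l1_ball`, `frank_wolfe_step`) are ours, and the paper's abs-smooth variant is not reproduced here.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    # Linear minimization oracle over the l1-ball: the minimizer of <s, g>
    # is a signed, scaled coordinate vector at the largest-magnitude entry of g.
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def frank_wolfe_step(x, grad, k, radius=1.0):
    # One classical Frank-Wolfe (conditional gradient) step with the standard
    # open-loop step size 2/(k+2); no projection is ever needed.
    s = lmo_l1_ball(grad(x), radius)
    gamma = 2.0 / (k + 2.0)
    return (1.0 - gamma) * x + gamma * s
```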

A successive centralized circumcentered-reflection method for the convex feasibility problem

In this paper, we present a successive centralization process for the circumcentered-reflection scheme with several control sequences for solving the convex feasibility problem in Euclidean space. Assuming that a standard error bound holds, we prove the linear convergence of the method with the most violated constraint control sequence. Moreover, under additional smoothness assumptions on the …
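
As a rough illustration of a single circumcentered-reflection step for two hyperplane constraints (a minimal sketch under our own naming; the paper's successive centralization and control sequences are not shown):

```python
import numpy as np

def reflect(x, a, b):
    # Reflection of x across the hyperplane {y : a @ y = b}.
    return x - 2.0 * (a @ x - b) / (a @ a) * a

def circumcenter(points):
    # Point of the affine hull of `points` equidistant to all of them, found
    # by solving a small linear system. The Gram matrix is singular if the
    # points coincide, i.e. if x already lies on both hyperplanes.
    p0 = points[0]
    V = np.array([p - p0 for p in points[1:]])
    G = V @ V.T
    t = np.linalg.solve(G, 0.5 * np.array([v @ v for v in V]))
    return p0 + V.T @ t

def crm_step(x, a1, b1, a2, b2):
    # Circumcentered-reflection step: circumcenter of x and its reflections.
    return circumcenter([x, reflect(x, a1, b1), reflect(x, a2, b2)])
```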

MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization

We study the combination of proximal gradient descent with multigrid for solving a class of possibly nonsmooth, strongly convex optimization problems. We propose a multigrid proximal gradient method called MGProx, which accelerates the proximal gradient method by exploiting hierarchical information of the optimization problem through a multigrid scheme. MGProx applies a newly introduced adaptive restriction operator …
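
The building blocks can be sketched as a proximal gradient step plus standard transfer operators between fine and coarse levels. The following is generic multigrid machinery under our own naming; the paper's adaptive restriction operator is its contribution and is not reproduced.

```python
import numpy as np

def prox_grad_step(x, grad_f, prox_g, step):
    # Forward step on the smooth part f, then the prox of the nonsmooth part g.
    return prox_g(x - step * grad_f(x), step)

def restrict(x):
    # Plain pairwise-averaging restriction onto the coarse level (even length).
    return 0.5 * (x[0::2] + x[1::2])

def prolong(xc):
    # Piecewise-constant prolongation, an approximate transpose of restrict.
    return np.repeat(xc, 2)
```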

Projection-free methods on product domains

Projection-free block-coordinate methods avoid high per-iteration computational cost while exploiting the particular problem structure of product domains. Frank-Wolfe-like approaches rank among the most popular methods of this type. However, as observed in the literature, there has been a gap between the classical Frank-Wolfe theory and the block-coordinate case. Moreover, most of …
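
For a product domain $C = C_1 \times \dots \times C_m$, a block-coordinate Frank-Wolfe iteration updates a single block via that block's linear minimization oracle. A minimal sketch with illustrative names (random block choice, classical step size; the convergence analysis is the paper's subject and is omitted):

```python
import numpy as np

def block_fw_step(x, blocks, grad, lmos, k, rng=np.random.default_rng()):
    # blocks[i] is an index array selecting block i of x; lmos[i] is the
    # linear minimization oracle of C_i.  Only one block moves per iteration.
    i = rng.integers(len(blocks))
    idx = blocks[i]
    s = lmos[i](grad(x)[idx])     # vertex of C_i minimizing the linear model
    gamma = 2.0 / (k + 2.0)       # classical open-loop step size
    x_new = x.copy()
    x_new[idx] = (1.0 - gamma) * x[idx] + gamma * s
    return x_new
```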

Cutting plane reusing methods for multiple dual optimizations

We consider solving a group of dual optimization problems that share a core structure: every primal problem in the group is obtained by varying the right-hand sides of the constraints in the original primal problem, while the other core parts of the original primal problem, such as the objective and the left-hand sides of the constraints, …
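
This structure can be made concrete for Lagrangian duals: if $q_b(\lambda) = \min_{x \in X} c^\top x + \lambda^\top (b - Ax)$, then every stored primal point $\hat{x}$ yields the affine majorant $q_b(\lambda) \le c^\top \hat{x} + \lambda^\top (b - A\hat{x})$ for every right-hand side $b$, so cuts can transfer across the whole group. A minimal sketch under these assumptions (the names are ours, not the paper's):

```python
import numpy as np

def cuts_for_rhs(stored_x, c, A, b):
    # Re-instantiate stored cuts for a new right-hand side b: each cut is a
    # (slope, intercept) pair of the affine majorant c@xh + lam@(b - A@xh).
    return [(b - A @ xh, c @ xh) for xh in stored_x]

# Example: primal points collected for one right-hand side remain valid cut
# generators for a perturbed right-hand side.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
c = np.array([1.0, 1.0])
stored_x = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
cuts = cuts_for_rhs(stored_x, c, A, np.array([2.0, 1.5]))
```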

Semi-Infinite Generalized Disjunctive and Mixed Integer Convex Programs with(out) Uncertainty

In this paper, we introduce semi-infinite generalized disjunctive programs, which are defined by logical propositions along with disjunctions of sets of logical equations and an infinite number of algebraic inequalities. We denote these programs by SIGDPs. For SIGDPs with linear and convex inequalities, we present new reformulations: semi-infinite mixed-binary/disjunctive linear programs and semi-infinite mixed-binary/disjunctive convex programs, …
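
To give a flavor of such reformulations (a textbook illustration, not the paper's semi-infinite construction): a single disjunction $[g_1(x) \le 0] \vee [g_2(x) \le 0]$ with convex $g_i$ admits the classical big-M mixed-binary reformulation

$$g_i(x) \le M_i\,(1 - y_i), \quad i = 1,2, \qquad y_1 + y_2 \ge 1, \quad y \in \{0,1\}^2,$$

where $M_i$ is any valid upper bound on $g_i$ over the region of interest; in the semi-infinite setting, such inequalities must hold for infinitely many index values.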

On an iteratively reweighted linesearch-based algorithm for nonconvex composite optimization

In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, obtained by combining the iteratively reweighted scheme with a finite number of forward–backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward–backward methods, since it can also be applied to minimize …
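
A minimal sketch of the iteratively reweighted idea combined with forward-backward steps, for the model $f(x) + \lambda \sum_i \log(\epsilon + |x_i|)$: the concave penalty is majorized by a weighted $\ell_1$ norm, which a few proximal (soft-thresholding) steps then decrease. The names and the fixed step size are ours; the paper's linesearch procedure is not reproduced.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (t may be a vector of weights).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def irl1_forward_backward(x, grad_f, lam, eps, step, outer=20, inner=5):
    for _ in range(outer):
        w = lam / (eps + np.abs(x))   # weights from the l1 majorizer
        for _ in range(inner):        # a few forward-backward steps
            x = soft_threshold(x - step * grad_f(x), step * w)
    return x
```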

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We are concerned with convex, smooth objectives that do not satisfy the globally Lipschitz gradient condition. Our new method needs only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By …
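
The paper's stepsize is not reproduced here; as a point of comparison, below is a common adaptive rule in the same spirit that estimates the local Lipschitz constant from consecutive gradients (a hedged sketch with our naming and no safeguards):

```python
import numpy as np

def gd_local_lipschitz(x0, grad, step0=1.0, iters=200):
    # Gradient descent with steps 1/L_k, where L_k is the local secant
    # estimate ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||.  Illustrative only:
    # practical variants safeguard how fast the step may grow.
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - step0 * g_prev
    for _ in range(iters):
        g = grad(x)
        s = np.linalg.norm(x - x_prev)
        L = np.linalg.norm(g - g_prev) / s if s > 0 else 0.0
        step = 1.0 / L if L > 0 else step0
        x_prev, g_prev, x = x, g, x - step * g
    return x
```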

Barzilai-Borwein-like rules in proximal gradient schemes for ℓ1-regularized problems

We propose a novel steplength selection rule in proximal gradient methods for minimizing the sum of a differentiable function and an ℓ1-norm penalty term. The proposed rule modifies one of the classical Barzilai-Borwein steplengths, extending analogous results obtained in the context of gradient projection methods for constrained optimization. We analyze the spectral properties of the …
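
For context, the first classical Barzilai-Borwein steplength inside a proximal gradient iteration for $f(x) + \lambda\|x\|_1$ looks as follows (a minimal sketch with our naming; the paper's modified rule and its spectral analysis are not shown):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bb1_steplength(x, x_prev, g, g_prev, fallback=1.0):
    # Classical BB1 rule: <s, s> / <s, y> with s = x - x_prev, y = g - g_prev.
    s, y = x - x_prev, g - g_prev
    sy = s @ y
    return (s @ s) / sy if sy > 0 else fallback

def prox_grad_bb_step(x, x_prev, g, g_prev, lam):
    # Forward gradient step with the BB steplength, then soft-thresholding.
    step = bb1_steplength(x, x_prev, g, g_prev)
    return soft_threshold(x - step * g, step * lam)
```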

A Semismooth Conjugate Gradients Method — Theoretical Analysis

In large-scale applications, deterministic and stochastic variants of Cauchy's steepest descent method are widely used for the minimization of objectives that are only piecewise smooth. In this paper we analyse a deterministic descent method based on a generalization of the rescaled conjugate gradient method proposed by Philip Wolfe in 1975 for convex objectives. Without …
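
A core primitive behind Wolfe's 1975 conjugate-subgradient idea is a least-norm direction drawn from accumulated (generalized) gradients; in the two-gradient case it has a closed form, sketched below under our own naming (the paper's rescaled method and its analysis are not reproduced):

```python
import numpy as np

def least_norm_in_hull(g1, g2):
    # Nearest point to the origin in conv{g1, g2}; its negative serves as a
    # descent direction in conjugate-subgradient-type methods.
    d = g1 - g2
    dd = d @ d
    t = np.clip(-(g2 @ d) / dd, 0.0, 1.0) if dd > 0 else 0.0
    return t * g1 + (1.0 - t) * g2
```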