On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm which appears to be the first bridge between the fields of conditional
gradient methods and abs-smooth optimization. Our problem setting is motivated by various
applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or 
ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose a
generalization of the traditional Frank-Wolfe gap and prove that first-order minimality is achieved 
when it vanishes. We derive a convergence rate for our algorithm which is {\em identical} to that of the
smooth case. Although our algorithm necessitates the solution of a subproblem which is more challenging 
than in the smooth case, we provide an efficient numerical method for its partial solution, and we identify 
several applications where our approach fully solves the subproblem. Convergence is demonstrated both 
theoretically and numerically, and the experiments suggest several conjectures.
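
For context, a brief sketch of the quantity being generalized may help; the notation below ($f$, $\mathcal{C}$, $x_t$, $g$) is illustrative and not fixed by the abstract itself. For a smooth objective $f$ over a compact convex feasible set $\mathcal{C}$, the traditional Frank-Wolfe gap at an iterate $x_t$ is
\[
  g(x_t) \;=\; \max_{v \in \mathcal{C}} \,\langle \nabla f(x_t),\, x_t - v \rangle \;\ge\; 0,
\]
and $g(x_t) = 0$ holds exactly when $x_t$ is a first-order stationary point of $f$ over $\mathcal{C}$. The generalized gap proposed in the paper plays the analogous certificate role for abs-smooth (nonsmooth) objectives; its precise definition is given in the article.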
