The problem of approximation by piecewise affine functions has been
studied for several decades, in both the least squares and the uniform
setting. If the locations of the switches from one affine piece to another
(knots, in the univariate case) are known, the problem is convex and several
approaches exist for solving it. If the locations of the switches are unknown,
the problem is difficult even in the case of univariate approximation.
In its classical formulation, the number of affine pieces is restricted,
since it is proportional to the dimension of the corresponding optimisation
problem. On the other hand, recent developments in the theory
of neural networks demonstrate that functions can be efficiently approximated
by overparametrised neural networks trained with a least squares-based loss
function. In this paper, we use the Chebyshev (uniform) optimisation criterion
and compare the classical (direct) approximation approach with a convex
optimisation-based approach in which the number of affine pieces is large,
but smaller than in the case of overparametrised networks. This can
be seen as a step towards understanding the optimisation background
behind overparametrised networks with a uniform loss function.
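To illustrate why the fixed-knot case is convex, the following sketch (not the paper's own method; the target function, knot locations, and grid are illustrative assumptions) casts univariate uniform approximation with known knots as a linear program: with the switch locations fixed, the model is linear in its coefficients, and minimising the maximum absolute error reduces to minimising a single bound variable subject to linear constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative example: approximate |x| on [-1, 1] in the uniform (Chebyshev)
# norm by a continuous piecewise affine function with FIXED knots. With the
# knot locations given, the problem is convex and reduces to a linear program.
x = np.linspace(-1.0, 1.0, 201)
f = np.abs(x)

knots = np.array([-0.5, 0.0, 0.5])  # assumed (fixed) switch locations
# Basis: affine part (1, x) plus one hinge max(0, x - k) per knot.
Phi = np.column_stack([np.ones_like(x), x] +
                      [np.maximum(0.0, x - k) for k in knots])

m = Phi.shape[1]
# Variables: m basis coefficients c, followed by the uniform error bound t.
# Minimise t subject to  -t <= f_j - (Phi c)_j <= t  at every sample point.
c_obj = np.zeros(m + 1)
c_obj[-1] = 1.0
A_ub = np.block([[Phi, -np.ones((len(x), 1))],
                 [-Phi, -np.ones((len(x), 1))]])
b_ub = np.concatenate([f, -f])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])
coeffs, max_err = res.x[:m], res.x[-1]
# With a knot at 0, |x| = -x + 2*max(0, x) is representable exactly,
# so the optimal uniform error is essentially zero.
print(f"uniform error: {max_err:.6f}")
```

If the knots were unknown, they would enter the hinge terms nonlinearly and the problem would lose this convex structure, which is precisely the difficulty the abstract refers to.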