Understanding Deep Neural Networks with Rectified Linear Units
In this paper we investigate the family of functions representable by deep neural networks (DNNs) with rectified linear units (ReLUs). We give the first polynomial-time (in the size of the data) algorithm to train a ReLU DNN with one hidden layer to {\em global optimality}. This follows from our complete characterization of the ReLU DNN …
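As a minimal illustration of the function class the abstract refers to (and not of the paper's training algorithm), the sketch below shows a one-hidden-layer ReLU network in Python/NumPy; the parameter names W1, b1, w2, b2 are hypothetical. Any such network computes a continuous piecewise linear function of its input, here instantiated so that it computes |x| = relu(x) + relu(-x).

    import numpy as np

    def relu(x):
        # Rectified linear unit applied elementwise
        return np.maximum(x, 0.0)

    def one_hidden_layer_relu(x, W1, b1, w2, b2):
        # f(x) = w2 . relu(W1 x + b1) + b2, a continuous piecewise linear function
        return w2 @ relu(W1 @ x + b1) + b2

    # Hypothetical parameters: two hidden units implementing |x| on the real line
    W1 = np.array([[1.0], [-1.0]])
    b1 = np.zeros(2)
    w2 = np.array([1.0, 1.0])
    b2 = 0.0

    for x in (-2.0, 0.5, 3.0):
        print(x, one_hidden_layer_relu(np.array([x]), W1, b1, w2, b2))  # prints |x|

This is only a sketch under the stated assumptions; it illustrates the kind of function a one-hidden-layer ReLU DNN represents, not how the paper's globally optimal training procedure works.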