
Harnessing neural networks: A random matrix approach

2017, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract

This article proposes an original approach to understanding the performance of large dimensional neural networks. In this preliminary study, we consider a single-hidden-layer feed-forward network with random input connections (also known as an extreme learning machine) that performs a simple regression task. By means of a new random matrix result, we prove that, as the size and cardinality of the input data and the number of neurons grow large, the network performance becomes asymptotically deterministic. This yields a better understanding of the effects of the hyper-parameters (activation function, number of neurons, etc.) in this simple setting, thereby paving the way to the harnessing of more involved structures.
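To make the studied architecture concrete, the following is a minimal sketch (not taken from the paper) of an extreme learning machine for regression: the input weights are random and fixed, and only the output layer is trained via ridge regression. The activation function, ridge penalty `gamma`, dimensions, and toy data below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: p-dimensional inputs, scalar targets (illustrative only).
p, n, n_neurons, gamma = 10, 500, 256, 1e-2
X = rng.standard_normal((p, n))              # training inputs (p x n)
y = np.sin(X.sum(axis=0))                    # training targets (n,)

# Random input layer, drawn once and never trained.
W = rng.standard_normal((n_neurons, p))

# Hidden representation sigma(W X), here with a tanh activation (an assumption).
Sigma = np.tanh(W @ X)                       # (n_neurons x n)

# Ridge-regression readout: beta = (Sigma Sigma^T / n + gamma I)^{-1} Sigma y / n.
beta = np.linalg.solve(
    Sigma @ Sigma.T / n + gamma * np.eye(n_neurons),
    Sigma @ y / n,
)

# Prediction on new data reuses the same fixed random weights W.
X_test = rng.standard_normal((p, 100))
y_pred = beta @ np.tanh(W @ X_test)
print("train MSE:", np.mean((beta @ Sigma - y) ** 2))
```

In this setting, the paper's regime of interest is when p, n, and the number of neurons all grow large together, where quantities such as the training and test mean-square error concentrate around deterministic limits.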