Randomized Machine Learning Approaches: Recent Developments and Challenges
Gallicchio, Claudio; Micheli, Alessio
2017-01-01
Abstract
Randomness has always been present, in one form or another, in Machine Learning (ML) models. In recent years, however, its role has changed: randomness is no longer a specific, accessory improvement to particular aspects of a model, but the main theoretical basis supporting some ML methods, e.g., the well-known random forests. In the Neural Network (NN) area, randomness has, since the field's origins, given rise to a rich set of models, which have recently been exploited especially for efficiency purposes. However, the bias induced by the use of NNs with random weights deserves further analysis, especially in light of recent advances in deep NNs, dynamical systems (Recurrent NNs), and NNs for learning in structured domains.
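To make the idea of "NNs with random weights" concrete, the following is a minimal sketch (not the paper's method) of a randomized NN in the extreme-learning-machine / random-features style: the hidden weights are drawn at random and kept fixed, and only the linear readout is trained. All names, sizes, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy samples (illustrative data).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# Hidden layer: weights and biases are drawn at random and never trained.
n_hidden = 100
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # random nonlinear features, shape (200, n_hidden)

# Only the linear readout is fit, via least squares -- the cheap part,
# which is where the efficiency gains of randomized NNs come from.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w_out
mse = np.mean((pred - y) ** 2)
```

Because only the readout is optimized, training reduces to a single linear least-squares solve; the price is that the random hidden layer induces a bias, which is exactly the kind of effect the abstract argues deserves further analysis.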