Euler State Networks: Non-dissipative Reservoir Computing
Gallicchio C.
2024-01-01
Abstract
Inspired by the numerical solution of ordinary differential equations, in this paper we propose a novel Reservoir Computing (RC) model, called the Euler State Network (EuSN). The presented approach makes use of forward Euler discretization and antisymmetric recurrent matrices to design reservoir dynamics that are both stable and non-dissipative by construction. Our mathematical analysis shows that the resulting model is biased towards a unitary effective spectral radius and zero local Lyapunov exponents, intrinsically operating near the edge of stability. Experiments on long-term memory tasks show the clear superiority of the proposed approach over standard RC models in problems requiring effective propagation of input information over multiple time steps. Furthermore, results on time-series classification benchmarks indicate that EuSN can match (or even exceed) the accuracy of trainable Recurrent Neural Networks, while retaining the training efficiency of the RC family, yielding up to ≈464-fold savings in computation time and ≈1750-fold savings in energy consumption. At the same time, experiments on time-series modeling tasks show that EuSN remains competitive with standard RC when the architecture is complemented by direct input-readout connections.
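To make the mechanism described in the abstract concrete, the following is a minimal sketch of an EuSN-style reservoir update, assuming a state transition of the form x(t) = x(t-1) + eps * tanh(W_a x(t-1) + W_in u(t) + b), with W_a built as an antisymmetric matrix (minus a small diffusion term) as suggested by the paper's description; the parameter names (eps, gamma), initialization ranges, and helper function are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_inputs = 100, 1
eps, gamma = 0.01, 0.001  # assumed Euler step size and diffusion coefficient

# Untrained reservoir parameters (illustrative uniform initialization)
W = rng.uniform(-1, 1, (n_units, n_units))
W_a = (W - W.T) - gamma * np.eye(n_units)      # antisymmetric recurrent matrix minus diffusion
W_in = rng.uniform(-1, 1, (n_units, n_inputs)) # input weights
b = rng.uniform(-1, 1, n_units)                # bias

def eusn_states(inputs):
    """Run the fixed (untrained) reservoir over an input sequence of shape (T, n_inputs)."""
    x = np.zeros(n_units)
    states = []
    for u in inputs:
        # forward Euler step of the non-dissipative reservoir dynamics
        x = x + eps * np.tanh(W_a @ x + W_in @ u + b)
        states.append(x)
    return np.stack(states)

# Example: collect reservoir states for a random input sequence; in an RC setup,
# only a linear readout on these states would be trained (e.g., by ridge regression).
states = eusn_states(rng.uniform(-1, 1, (50, n_inputs)))
print(states.shape)  # (50, 100)
```

In this sketch, the antisymmetric construction keeps the eigenvalues of the recurrent Jacobian close to the imaginary axis, which is the intuition behind the "stable and non-dissipative by construction" claim; the small gamma term is a stabilizing correction.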