Abstract
The Liquid State Machine (LSM) is a method of computing with temporal neurons that, unlike standard artificial neural networks, can be used, among other things, to classify intrinsically temporal data directly. It has also been put forward as a natural model of certain kinds of brain function. This paper presents two results: (1) We show that Liquid State Machines as normally defined cannot serve as a natural model of brain function, because they are highly vulnerable to failures in parts of the model. This result stands in contrast to work by Maass et al., which showed that these models are robust to noise in the input data. (2) We show that imposing certain kinds of topological constraints (such as the "small world assumption"), which have been argued to be biologically plausible, can restore robustness in this sense to LSMs.
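The sketch below is a minimal, hypothetical illustration of the kind of comparison the abstract describes; it is not the authors' implementation. It uses a simple rate-based tanh reservoir (rather than spiking neurons), a Watts-Strogatz "small world" graph versus a random graph from networkx, a delayed-recall readout task, and a 20% unit-failure rate, all of which are assumptions chosen for brevity.

```python
# Hypothetical sketch: compare how a randomly wired reservoir and a
# small-world reservoir degrade when some reservoir units fail.
# Assumptions (not from the paper): rate-based tanh units, N=200,
# Watts-Strogatz topology, 20% failures, delayed-recall readout task.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, T, DELAY = 200, 500, 5
w_in = rng.normal(scale=0.5, size=N)        # fixed input weights
u = rng.uniform(-1.0, 1.0, size=T)          # scalar input stream
target = np.roll(u, DELAY)                  # recall the input DELAY steps back

def weights(topology):
    """Build a reservoir weight matrix on a random or small-world graph."""
    if topology == "small_world":
        g = nx.watts_strogatz_graph(N, k=10, p=0.1, seed=1)
    else:
        g = nx.gnp_random_graph(N, p=10 / N, seed=1)
    W = np.zeros((N, N))
    for a, b in g.edges():
        W[a, b] = rng.normal(scale=1.0)
        W[b, a] = rng.normal(scale=1.0)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable
    return W

def run(W, mask):
    """Drive the reservoir with input u; mask silences 'failed' units."""
    x, states = np.zeros(N), []
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t]) * mask
        states.append(x.copy())
    return np.array(states)

def readout_error(W):
    """Train a linear readout on the intact reservoir, test after failures."""
    intact = run(W, np.ones(N))
    coef, *_ = np.linalg.lstsq(intact, target, rcond=None)
    mask = np.ones(N)
    mask[rng.choice(N, size=N // 5, replace=False)] = 0.0  # silence 20% of units
    damaged = run(W, mask)
    return np.mean((damaged @ coef - target) ** 2)

for topo in ("random", "small_world"):
    print(topo, readout_error(weights(topo)))
```

In this toy setting the readout is trained on the intact reservoir and then evaluated after a fraction of units is silenced, so the printed errors give a crude proxy for robustness to partial failure under each topology; the actual models, tasks, and robustness measures studied in the paper differ.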
Original language | English |
---|---|
Pages (from-to) | 1597-1606 |
Number of pages | 10 |
Journal | Expert Systems with Applications |
Volume | 39 |
Issue number | 2 |
DOIs | |
State | Published - 1 Feb 2012 |
Keywords
- Liquid State Machine
- Machine learning
- Reservoir computing
- Robustness
- Small world topology
All Science Journal Classification (ASJC) codes
- General Engineering
- Artificial Intelligence
- Computer Science Applications