I’ve been taking a break from Semantic Web technology for the last few weeks and reading books on evolutionary and cognitive psychology. One book that I recently re-read is Memory and the Computational Brain by Gallistel and King.
There are a very small number of books that made a major change in the way I think and that I always make sure to have in my library because I go back to them on a regular basis. Chomsky’s Syntactic Structures changed the way I think about language and philosophy. Kent Beck’s Extreme Programming Explained changed the way I develop software. Dawkins’ The Selfish Gene gave me my first true understanding of the theory of evolution by natural selection. Memory and the Computational Brain is one of those books. It changed the way I think about psychology when I first read it about ten years ago, and when I re-read it, it seemed even more profound than I remembered.
The essence of the book is that the proper model for human cognition is not a neural network alone but a Turing Machine. A neural network is, computationally, a (very complex) Finite State Automaton (FSA) [Hopcroft 2014]. It is inherently more limited than a Turing Machine because it has no read/write memory in which to store historical data. This is the reason Turing created the Turing Machine model for his paper on the Entscheidungsproblem [Turing 1936]: he realized he needed a formalism more powerful than an FSA, one that included memory.
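To make the distinction concrete, here is a toy Python sketch (my own illustration, not the authors’): a two-state FSA can track something like parity, because that fits in a finite set of states, but no FSA can recognize a pattern like aⁿbⁿ, which requires remembering an unbounded count. A machine with a read/write tape handles it easily.

```python
# Toy illustration (mine, not from the book). An FSA's only "memory" is
# its current state, so a 2-state machine can track parity...
def fsa_even_number_of_as(s: str) -> bool:
    state = "even"
    for ch in s:
        if ch == "a":
            state = "odd" if state == "even" else "even"
    return state == "even"

# ...but recognizing a^n b^n needs unbounded memory: a read/write tape.
def tape_machine_a_n_b_n(s: str) -> bool:
    if "ba" in s:                    # the a*b* *shape* is FSA-checkable
        return False
    tape = list(s)                   # the *count* n is not: use a tape
    while "a" in tape:
        tape[tape.index("a")] = "X"  # cross off the leftmost 'a'
        if "b" not in tape:
            return False             # an 'a' with no matching 'b'
        tape[tape.index("b")] = "X"  # ...and its matching 'b'
    return "b" not in tape           # accept iff every 'b' was matched

print(fsa_even_number_of_as("abab"))   # True  (two 'a's)
print(tape_machine_a_n_b_n("aaabbb"))  # True
print(tape_machine_a_n_b_n("aabbb"))   # False
```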
The specific kind of memory that Gallistel and King focus on is long-term memory: not “muscle memory,” which is described by Hebbian conditioning (changing the conductances and connections of a neural network based on learning) [Jessell 2021], but episodic memory, such as knowing the answer to “Who is the current president?” The authors make a strong case that there is no good neural-net model for this type of memory and that the structure of such networks makes them unsuited to representing historical data in a scalable manner. Of course, with computers this issue never arises, because Artificial Neural Nets (ANNs) simply use the computer’s memory (e.g., arrays, databases, spreadsheets). However, many connectionists ignore this issue when it comes to the brain. The authors provide a detailed analysis of the proposed neural-network solutions to episodic memory and show that such approaches could not scale to the memory requirements of scrub jays (which are known to cache food for the winter in tens of thousands of locations), let alone humans.
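For contrast with an addressable store, here is a minimal sketch (mine, not the authors’) of what Hebbian conditioning amounts to computationally: weights change in proportion to correlated pre- and post-synaptic activity, so whatever is “learned” lives in the transition structure of the network, not in any slot you could read a fact back out of.

```python
import numpy as np

def hebbian_step(w: np.ndarray, pre: np.ndarray, eta: float = 0.1) -> np.ndarray:
    """One step of the basic Hebbian rule: dw = eta * pre * post."""
    post = pre @ w               # postsynaptic activity (linear neuron)
    return w + eta * pre * post  # "cells that fire together wire together"

w = np.full(4, 0.05)             # small initial synaptic weights
pattern = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(20):
    w = hebbian_step(w, pattern)

print(w)  # weights on the co-active inputs have grown; the others haven't
# Note there is no address here holding a retrievable fact like
# "seeds cached at location 4,372" -- only altered connection strengths.
```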
When one gives this a bit of thought, it seems (at least to me) very intuitive. Of course, the output of one network can be the input for another, but ultimately the input and output of every ANN is some form of data, and that data is virtually never represented as a neural network. It seems intuitive that the brain must function in a similar way, i.e., have a separate mechanism for storing the inputs and outputs of neural nets that is analogous to addressable computer memory. The authors provide strong arguments for this view.
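In software this division of labor is so routine it is invisible. The sketch below (my own, with hypothetical function names) makes it explicit: the “networks” compute, but an ordinary addressable store carries their results across time.

```python
# Sketch (mine): two stand-in "networks" communicate through addressable
# memory, just as real ANN pipelines pass arrays through RAM or databases.
# The function names and the dict-as-memory are hypothetical illustrations.

memory: dict[str, list[float]] = {}          # the addressable store

def perception_net(stimulus: list[float]) -> list[float]:
    return [2.0 * x for x in stimulus]       # stand-in for a trained net

def decision_net(features: list[float]) -> str:
    return "approach" if sum(features) > 1.0 else "avoid"

memory["features@t1"] = perception_net([0.3, 0.4])   # write output once
# ...arbitrary time may pass; the value persists with no re-computation...
print(decision_net(memory["features@t1"]))           # read it back later
```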
They hypothesize that the mechanism for storing episodic memory is some form of molecular code, such as RNA or DNA. The advantage of such a code is that it is orders of magnitude more efficient than an FSA for storing data. It would also help to address one of the biggest puzzles about computation by neurons: it is orders of magnitude slower than computation by computers. For example, the interval between incoming “spikes” (signals from other neurons) and the action potential (firing) of a neuron is on the order of half a millisecond, and a neuron takes several milliseconds to return to its base state after firing. A modern computer can execute at least one floating-point instruction in 0.001 microseconds, i.e., a nanosecond (a microsecond, of course, is 0.001 milliseconds). Yet the brain of a child can outperform computers on many tasks [Gallistel 2010].
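Putting rough numbers on that gap (my own back-of-envelope arithmetic using the figures above):

```python
# Back-of-envelope comparison using the figures in the text.
spike_interval_s = 0.5e-3   # ~0.5 ms between an input spike and firing
flop_time_s = 1e-9          # ~1 ns (0.001 us) per floating-point op

ratio = spike_interval_s / flop_time_s
print(f"one neural step = time for ~{ratio:,.0f} machine instructions")
# -> ~500,000: the brain's primitive operations run 5-6 orders of
# magnitude slower, which makes its performance all the more striking.
```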
The authors also emphasize the importance of symbolic computation alongside neural-network computation. There has been fascinating work with these molecular media: researchers have been able to store information in them exactly as they would with bits in an electronic memory. For one example, see [Myers 2012].
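To give a feel for why a molecular medium is such a natural digital store, here is a toy round trip in Python. To be clear, this is only the textbook bits-to-bases mapping (two bits per nucleotide), not the rewritable recombinase scheme described in [Myers 2012].

```python
# Toy bits<->bases round trip (2 bits per nucleotide). This is a generic
# illustration, NOT the actual encoding used in the work cited above.
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {base: bits for bits, base in ENC.items()}

def bits_to_dna(bits: str) -> str:
    assert len(bits) % 2 == 0, "pad to an even number of bits"
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    return "".join(DEC[base] for base in seq)

msg = "0110100001101001"              # the ASCII bits for "hi"
strand = bits_to_dna(msg)
print(strand)                         # CGGACGGC
assert dna_to_bits(strand) == msg     # faithful round trip
```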
I believe that “Good Old-Fashioned AI” (e.g., the work of Newell and Simon) still has a role to play in Cognitive Science. I’m not downplaying the importance of machine learning (in computer science) or of neural nets (in psychology). I’m just saying that I think it is obvious that there has to be more to cognition than neural nets, just as the solutions to many of the most complex AI problems (e.g., IBM’s Watson [Ferrucci 2012], autonomous vehicles) often combine machine learning and symbolic AI.
This book lays out the case for why that must be so and why the solution likely includes representing symbolic data in a molecular code. Anyone interested in psychology, whether cognitive, evolutionary, neuropsychological, or behavioral, should read this book. Even if you completely disagree with the authors, I think you will find the questions they raise fascinating. Their elucidation of concepts from computer science, such as Shannon’s information theory and the theory of computation, is extremely clear, as is their explanation of why such concepts are essential to a theory of human cognition.
Addendum: I recently found a paper by Gallistel in which he describes his hypothesis of a molecular basis for memory in some detail and in light of more recent research: The Physical Basis of Memory. Also, someone just sent me another fascinating paper: it describes how RNA, together with processes similar to the copying and recombination of RNA when cells divide, can provide the computational power of a Turing Machine: An RNA Model for Universal Computation.
Bibliography
Ferrucci, D. A. (2012). “Introduction to ‘This is Watson’.” IBM Journal of Research and Development, 56(3.4), 1:1–1:15. doi:10.1147/JRD.2012.2184356.
Gallistel, C. R., & King, A. P. (2010). Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Blackwell Publishing. (See especially Chapter 10.)
Hopcroft, J. E., Motwani, R., & Ullman, J. D. (2014). Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.
Jessell, T. M., Siegelbaum, S. A., & Kandel, E. R. (2021). Principles of Neural Science (6th ed.). McGraw-Hill Education / Medical.
Myers, A. (2012). “Totally RAD: Bioengineers create rewritable digital data storage in DNA.” Stanford Engineering, May 21, 2012. Available at: https://engineering.stanford.edu/magazine/article/totally-rad-bioengineers-create-rewritable-digital-data-storage-dna (accessed September 21, 2022).
Turing, A. M. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem.” Reprinted in The Essential Turing (2004). doi:10.1093/oso/9780198250791.003.0005.