One of the first classes I audited at UC Berkeley, quite a while ago, was a Philosophy of Mind seminar led by John Searle. We read Thomas Nagel's "What Is It Like to Be a Bat?" and it got me thinking about computer science. Nagel made me realize that we computer scientists are missing out on an essential aspect of what it means to be a computer. We know that most computers get input from the outside world through keyboards, cameras, and microphones. We know that they represent that world via objects, databases, logic, and ultimately collections of bits. Clearly this is far different from our human ways of perceiving and representing the world. As Nagel says about bats, so we must say about computers:
there is no reason to suppose that it is subjectively like anything we can experience or imagine. This appears to create difficulties for the notion of what it is like to be a [computer]. We must consider whether any method will permit us to extrapolate to the inner life of the [computer] from our own case, and if not what alternative methods there may be for understanding the notion.
As is probably obvious, I don't really think we need to do anything more to understand what it's like to be a computer. My point is that Nagel's argument for why we need to wonder what it is like to be a bat seems as insubstantial as my juxtaposition for the computer.
I think what is going on here is an example of what Chomsky describes [Chomsky 1996, 2008] as trivial questions such as "do submarines swim?" In English submarines don't swim; in Japanese they do; but the question is not considered a conundrum for marine biologists. It is simply a matter of linguistic convention. So in English (at least so far) few people wonder "what it is like" to be a computer. But we do wonder what it is like to be a bat. It is common in literature for people to turn into bats and frogs. We have a common-sense idea that identity is not necessarily tied to a human brain. But common sense and intuition are not science; at best they are a starting point for science. So that is how we should evaluate Nagel's question: are there any actual scientific issues he is getting at?
One of his primary criticisms is that consciousness can't be studied by a "materialist" or "physicalist" approach. I agree that a strictly materialist approach to studying consciousness won't work, though not for the reasons that Nagel advocates. As Chomsky points out [Chomsky 2012], the mind-body distinction ceased to make sense when Newton destroyed the mechanistic worldview on which it was based. This is even more true in the modern world, where the fundamental building blocks of "matter" are not sub-microscopic particles but wave functions.
Or consider fields such as computer science or computational linguistics. The concepts we deal with are grammars, languages, transformations, logic, state machines, Turing machines, ontologies, interfaces, etc. These aren't material except in the mundane sense that they can describe things and processes in the real world; they aren't materialistic concepts about electrical currents on silicon. Indeed, most of these concepts can be implemented in highly diverse ways. A state machine can describe a software program or the call-response language of various mammals [Hauser 2003] (a toy sketch below illustrates the point). Several years ago I saw a fascinating paper presented by researchers at Stanford [Myers 2012] who showed that they could use RNA to store information exactly as one would store it on a computer. They demonstrated this by storing and retrieving the PDF of their own paper via RNA in their lab. These examples show that Nagel's view of materialism is outdated and not relevant to what most people who study computation and cognition are doing today. The modern sciences of cognition are "materialistic" only in the most trivial sense.
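To make the substrate-independence point concrete, here is a minimal sketch of a finite state machine in Python. The states, symbols, and transition table are invented purely for illustration; the point is that the machine is just an abstract transition structure, and nothing about it commits us to silicon, neurons, or any other particular material realization.

```python
# A toy finite state machine: abstractly it is just a set of states, an input
# alphabet, and a transition table -- nothing about silicon, neurons, or RNA.
# The state and symbol names below are invented purely for illustration.

TRANSITIONS = {
    ("listening", "call"): "responding",
    ("responding", "reply"): "listening",
    ("listening", "silence"): "listening",
}

def run(transitions, start, inputs):
    """Feed a sequence of input symbols through the transition table."""
    state = start
    for symbol in inputs:
        # Stay in the current state if the symbol is not recognized.
        state = transitions.get((state, symbol), state)
        print(f"  on {symbol!r} -> {state}")
    return state

# The same abstract machine could be read as a fragment of a software protocol
# (request/acknowledge) or as a schematic call-response exchange between animals.
if __name__ == "__main__":
    run(TRANSITIONS, "listening", ["call", "reply", "silence", "call"])
```

The table could equally well be realized in circuitry, in code, or in the behavior of an organism; the description itself is neutral about the medium, which is the sense in which such concepts are not "materialistic."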
Now let us consider Nagel's emphasis on "reduction". How can we even begin to think about reducing a scientific theory of mind to biological concepts when we don't yet have a mature scientific theory of mind? As Chomsky points out [Chomsky 2002], we can't even map the neural correlates of consciousness for animals, such as bees, whose behavior is several orders of magnitude less complex than that of humans. Why should we tell scientists working on the far harder problem of human cognition that if they can't perform such a reduction their work is not worth doing?
This brings us to Nagel's general viewpoint on science and philosophy. He is, in essence, a science denier: if science leads to a conclusion that is uncomfortable, he prefers to reject the science. For example, in his book Mind and Cosmos [Nagel 2012], on pages 26-27, referring to materialistic and evolutionary theories, he says: "but the explanations they propose are not reassuring enough". A few pages later, on page 29, he says: "Everything we believe, even the most far-flung cosmological theories, has to be based ultimately on common sense and on what is plainly undeniable".
The goal of science is not to reassure us or to validate our common-sense intuitions. Indeed, the history of science shows that some of the most important discoveries were resisted because they challenged the prevailing worldview and made us re-evaluate the place of humans in the universe. People still resist Darwin because they find it "not reassuring enough" to think that humans evolved from earlier primates. The theory of quantum entanglement, for example, is certainly not "based ultimately on common sense and on what is plainly undeniable".
Based on the history of science, I think it would be surprising if, when we ultimately do have a mature scientific theory of mind, it didn't make people uncomfortable by forcing us to rethink common-sense notions of consciousness such as free will.
Finally, I wish to close with a quote from a rather unrelated text. When I was taking Searle's seminar I was also auditing a quite different class on the philosophy of mathematics, for which we read Frege's Foundations of Arithmetic. I hope this doesn't seem overly harsh; I have great regard for Nagel, who is clearly a very influential philosopher. But as I read the introduction to Foundations, I couldn't help thinking of him in the following passage:
If Frege goes too far... he is certainly on the side of the angels when he espouses as a model for philosophy the defense of objective scientific truth in matters of conceptual clarification. He is surely right to oppose the supine subjectivism that seems to think we can say whatever we want merely by articulating unargued opinions in the course of creating a literary creative writing exercise. That is not philosophy for Frege... [Jacquette 2007]
Amen, brother.
Bibliography
Chomsky, Noam (1996) Language and Thought: Some Reflections on Venerable Themes: Excerpted from Powers and Prospects.
Chomsky, Noam (2002) On Nature and Language. p. 56.
Chomsky, Noam (2008) Chomsky and His Critics. p. 279.
Chomsky, Noam (2012) The machine, the ghost, and the limits of understanding: Newton's contribution to the study of Mind. Lecture at the University of Oslo.
Hauser, Marc and Mark Konishi (2003) The Design of Animal Communication.
Jacquette, Dale (2007). Introduction and Critical Commentary to Foundations of Arithmetic by Gottlob Frege.
Myers, Andrew (2012) Totally RAD: Bioengineers create rewritable digital data storage in DNA. Stanford press release. Note: this is not the research I saw presented which was over ten years ago and unfortunately I can't recall that specific paper but the concept here is the same.
Nagel, Thomas (2012) Mind and Cosmos.