The Problem of Consciousness

Philosophers and brain researchers have long been eluded by questions about the brain and the mind. For example, why is it that when we hear a song, we perceive a melody and not a series of individual tones? We don’t hear 1000 Hz, followed by 2700 Hz, then 1800 Hz, and so on; we hear a tune. Further, why is there a subjective quality to our perceptions? Why does one person experience a song one way, and another person a different way? When we hear a song, is the event nothing more than a series of chemicals in the brain reacting to sound frequencies, or is there something more, such as a unified experience that can be judged better or worse?

Here’s another way to frame the problem. John Searle devised an illustration that has come to be known as the Chinese Room. Here is Searle’s summary of it:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

In the Chinese Room, Searle seems to demonstrate that thinking and experiencing are more than mere mental processing. A computer can translate language; several online translators show that. But the computer doesn’t really understand, does not have consciousness, does not think of itself as an “I.” The person in the Chinese Room can match symbols and correctly follow the rules, but has no idea whether the symbols are Chinese or Martian, or whether he is answering questions about cooking or car repair. He answers the questions correctly, but he does not grasp their meaning. Our universal experience of thinking tells us that there is something more to answering questions than merely following programming rules. A computer can follow programming rules, but it is not thinking in the human sense of the term; even a supercomputer has no true self-consciousness, no sense of itself as an “I.”

Searle’s Chinese Room created a great deal of controversy, but with this simple illustration he appears to have effectively disproved the view of the brain as being only matter and energy. Ironically, Searle himself was only trying to disprove the view of the brain as a fancy computer while still maintaining a purely material universe, which is a code phrase for leaving God out of the picture.

Another example is the voice-recognition software on my phone. It is designed to help find restaurants, movies, and businesses near me, and it does this rather well. But if I ask, “What is the meaning of life?” it talks back and says, “I find it strange that you would ask that of an inanimate object,” or, “I can’t answer that now, but give me some time to write a very long play in which nothing happens.”

This machine’s answers remind me of several things that would be the case if humans were made only of matter and energy and had no soul. The problem of consciousness leads us to recognize that this machine does not truly understand me. The program does not understand the jokes, nor does it understand the meaning and significance of its own answers. It would be immoral of me to call a human an idiot for no reason, but if I call the machine an idiot, the machine has no context for morality; it merely responds as it has been programmed. If humans were merely complex machines, calling a human an idiot would generate no more hurt feelings than calling a supercomputer one.

Yet we readily recognize that the machine does not truly understand, does not have emotions, and cannot actually be insulted or be thankful. When the software says “I find it strange that you would ask that of an inanimate object,” we know that the machine does not actually find anything strange; it is merely following a set of rules programmed in by its creator. But humans can find things strange. The human mind is not merely more complex than my phone; it is fundamentally different. Humans have a sense of morality that is not based in programmed logic, and an understanding that is fundamentally different in kind. Humans are more than complex machines. Our mental faculties are more than the workings of our brains.

Those who believe that only natural processes exist try to explain the problem of consciousness as the result of purely chemical and biological action, fully accounted for by the laws of chemistry and physics without God or a human soul. Those of us who are theists try to show that it is the result of being made in God’s image. Admittedly, neither side has hard proof from pure observation. But simple illustrations like the Chinese Room seem to have effectively disproved the popular notion that humans are nothing but matter and energy, nothing but a fancy biological computer. Philosophers have known this for some time, having wrestled with how matter could become self-aware all by itself. It cannot.

The problem of consciousness does not prove that God exists, but it provides an extremely strong argument against atheism. It is much more reasonable to conclude that God created us in His image. I agree with what is taught in the Scriptures: we are fearfully and wonderfully made.


About humblesmith

Christian Apologist & Philosopher
This entry was posted in Apologetics, Philosophy.

3 Responses to The Problem of Consciousness

  1. David Yerle says:

    The Chinese room has been widely criticized by most neuroscientists and philosophers of mind, because it ignores the fact that a system of rules (implemented however you want it) does contain knowledge. A system of rules for “pretending to speak Chinese” is actually so complicated it would take a huge amount of processing power and data to implement. Guess which device has that processing and storage power: a brain.

  2. humblesmith says:

    I’m not necessarily defending John Searle, for he is ultimately trying to defend the mind as an emergent property, which I would deny; nevertheless, Searle has responded to this. He says he was not merely trying to show that it is possible for a computer to respond to questions, but that something else is needed. Mere processing power, no matter how great, can simulate a mind to the point of duplicating its effects, but it cannot duplicate meaning. Searle was not questioning whether a system of rules was needed; rather, he was questioning whether such a system of rules could end in meaning. If the computer carries out the rules for translating Chinese, does it truly “understand”? The point of the Chinese Room is that the man duplicates the translation accurately but does not understand Chinese. The same is true of my phone: I can say to the Siri software “You are an idiot” and it will respond with a joke, in context. But the software does not truly understand what it is saying in the way a human mind would.
    Searle states “If I do not understand Chinese solely on the basis of implementing a computer program for understanding Chinese, then neither does any other digital computer solely on that basis.” (Problem of Consciousness, p.11) He then gives the argument:

    1. Programs are entirely syntactical.
    2. Minds have semantics.
    3. Syntax is not the same as, nor by itself sufficient for, semantics.

    This is the crux of the issue. The naturalist/materialist can try to avoid the dilemma by saying that semantics does not exist, that meaning is an illusion, but most people would take issue with this, since it would mean that life is meaningless to the core, and all my thoughts and writings along with it. Searle says he was not trying to claim that “machines cannot think,” but that machines do not explain semantics, and that syntax is not the same as semantics.

    Inherently we all know that we understand meaning and that our mind is more than a complex computer program.

  3. Debilis says:

    This is an excellent response to materialist thinking.

    Of course, it seems to be the pattern that, if something can’t be explained by materialism, the materialist simply denies its existence. Alex Rosenberg actually goes so far as to deny that consciousness exists. I’d say that anyone taking this denial seriously has more to do with the zeitgeist than with reasonable argument.

    Really, it’s hard to believe that there could ever be an argument in favor of materialism as strong as the reductio ad absurdum that Rosenberg presents as scientific fact.
