Many questions about the brain and mind have eluded philosophers and brain researchers. For example, why is it that when we hear a song, we perceive a melody and not a series of individual tones? We don’t hear 1,000 Hz, followed by 2,700 Hz, then 1,800 Hz, and so on; we hear a tune. Further, why is there a subjective experience in our perceptions? Why does one person experience a song one way and another person a different way? When we hear a song, is the event nothing more than a series of chemicals in the brain reacting to sound frequencies, or is there something more, such as a unified experience that can be judged better or worse?
Here is another way to frame the problem. John Searle devised an illustration that has come to be known as the Chinese Room. Here is Searle’s own summary:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
In the Chinese Room, Searle seems to demonstrate that thinking and experiencing are more than mere symbol processing. A computer can translate language; several online translators show that. But the computer does not really understand, does not have consciousness, and does not think of itself as an “I.” The person in the Chinese Room can match symbols and correctly follow the rules, yet have no idea whether the symbols are Chinese or Martian, or whether he is answering questions about cooking or car repair. He answers the questions correctly, but he does not understand their meaning. Our universal experience of thinking tells us that there is something more to answering questions than merely following programming rules. A computer can follow programming rules, but it is not thinking in the human sense of the term; it does not truly have self-consciousness and does not think of itself as an “I.”
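The kind of rule-following Searle describes can be sketched as a simple lookup table. The sketch below is purely illustrative and hypothetical (the symbol strings and replies are placeholders of my own, not from Searle): the program produces correct-looking answers by matching strings against a rule book, while storing no meaning whatsoever.

```python
# A minimal sketch of the Chinese Room as a lookup table. The "rule book"
# maps input symbol strings to output symbol strings. The entries below are
# hypothetical placeholders chosen for illustration.
RULE_BOOK = {
    "你好吗": "我很好",        # roughly: "How are you?" -> "I am fine"
    "你会说中文吗": "会",      # roughly: "Do you speak Chinese?" -> "Yes"
}

def chinese_room(input_symbols: str) -> str:
    """Mechanically follow the rule book; no understanding is involved."""
    # If the input matches a rule, emit the prescribed output symbols;
    # otherwise emit a default string ("I don't know").
    return RULE_BOOK.get(input_symbols, "不知道")

print(chinese_room("你好吗"))  # emits 我很好 without grasping either phrase
```

The point of the sketch is that nothing in the program represents meaning: the correct answer and the default fallback are produced by the same blind string matching.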
Searle’s Chinese Room created a great deal of controversy, but with this simple illustration he appears to have effectively undermined the view of the brain as only matter and energy. Interestingly, Searle himself was only trying to disprove the view of the brain as a fancy computer while still maintaining a purely material universe, which is code language for leaving God out of the picture.
Another example is the voice-recognition software on my phone. It is designed to help me find restaurants, movies, and businesses nearby, and it does this rather well. But if I ask, “What is the meaning of life?” it talks back and says, “I find it strange that you would ask that of an inanimate object,” or “I can’t answer that now, but give me some time to write a very long play in which nothing happens.”
This machine’s answers illustrate several things that would follow if humans were made only of matter and energy and had no soul. The problem of consciousness leads us to recognize that this machine does not truly understand me. The program does not understand the jokes, nor does it understand the meaning and significance of its own answers. It would be immoral of me to call a human an idiot for no reason, but if I call the machine an idiot, the machine has no context for morality; it merely responds as it has been programmed. If humans were merely complex machines, calling a human an idiot would generate no more hurt feelings than calling a supercomputer one.
Yet we readily recognize that the machine does not truly understand, does not have emotions, and cannot actually be insulted or thankful. When the software says, “I find it strange that you would ask that of an inanimate object,” we know that the machine does not actually find anything strange; it is merely following a set of rules programmed in by its creator. But humans can find things strange. The human mind is not merely more complex than my phone; it is fundamentally different. Humans have a sense of morality that is not based on programmed logic, and an understanding that is fundamentally different in kind. Humans are more than complex machines, and our mental faculties are more than the workings of our brains.
Those who believe that only natural processes exist try to explain the problem of consciousness as the result of purely chemical and biological action, fully accounted for by the laws of chemistry and energy without God or a human soul. Those of us who are theists try to show that it is the result of being made in God’s image. Admittedly, neither side has hard proof from pure observation. But simple illustrations like the Chinese Room seem to have effectively disproved the popular notion that humans are nothing but matter and energy, nothing but a fancy biological computer. Philosophers have known this for some time, having wrestled with the question of how matter could become self-aware all by itself. It cannot.
The problem of consciousness does not prove that God exists, but it provides an extremely strong argument against atheism. It is much more reasonable to conclude that God created us in His image. I agree with what is taught in the Scriptures: we are fearfully and wonderfully made.