The Chinese Room Argument Against Artificial Intelligence

The Chinese Room is a refutation of the theory that computers can think and that mental processes are just interactions between software and hardware. ‘Strong A.I.’ (Minds, Brains and Science, Searle, 28), as John Searle terms this view, offers a contemporary explanation of human mental processes. It is prevalent among computer scientists such as Herbert Simon and Allen Newell, and even the Turing Test arguably exists to provide an attainable benchmark for true artificial intelligence. However, the argument presented in the Chinese Room example raises the bar for strong A.I. to a level of implausibility. Indeed, it reveals a ceiling that digital computers and their algorithmic processes will never break through: computers, defined as the interaction of programs and hardware, are shown to be insufficient for the process of thinking.

As a branch of functionalism, computer functionalism takes our current knowledge of how a computer works and fits it into the functionalist theory. Functionalism is a thoroughly reductionist theory: it states that a mental state is defined purely by its causal relations. This is best presented by Ramsey sentences, which use quantifiers and variables to state that there exists a perception ‘p’ that causes ‘x’, and that ‘x’ together with ‘q’ causes ‘a’. A mental state is, therefore, any such x. For example, my eyes see that the room is dark (p), and this causes me to believe that the room is dark (x). My belief, together with the fact that I want a bright room (q), then causes me to turn on the lights (a). My belief that the room is dark could be replaced by any belief, such as the belief that a bright room is better than a dark room, as long as it satisfies these conditions. Functionalism therefore does not rely simply on observable dispositions, as behaviorism does, nor does it try to map specific brain states to mental states, as the type-type identity theory does. The brain and its mental states simply act as mediators between input and output. More specifically, under computer functionalism the brain is the hardware and the mind is the program, and mental states are mere computational processes.
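To make the causal-role picture concrete, here is a minimal sketch in Python (my own illustration with invented names, not a formalism from Searle or any functionalist) in which a ‘belief’ is nothing more than an internal state caused by a perception and causing, together with a desire, an action:

    def mind(perception, desire):
        # p: the perception that the room is dark
        # x: the belief caused by p -- a state defined only by its causal role
        belief = "room is dark" if perception == "room looks dark" else "room is bright"
        # x together with q (the desire for a bright room) causes a: turning on the lights
        if belief == "room is dark" and desire == "want a bright room":
            return "turn on the lights"
        return "do nothing"

    print(mind("room looks dark", "want a bright room"))  # -> turn on the lights

On the functionalist reading, anything that realizes this input-output structure counts as having the belief, whether it is made of neurons or silicon; that is exactly the claim the Chinese Room targets.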

Instinct would reject the functionalist approach outright for the absence of consciousness, or ‘qualia’, in its theory. Indeed, it is that instinct that still makes Cartesian dualism such a popular idea among the uninitiated. The Chinese Room parable, however, completely circumvents the seemingly vague notion of ‘qualia’ and instead uses the syntactical structure upon which the computer is built to refute the concept of strong A.I. The setup of the Chinese Room includes a person and a list of instructions for the person to follow whenever they receive Chinese symbols from the outside. These instructions tell the person how to respond by manipulating the Chinese symbols purely in terms of their syntax. Suppose the inputs take the form of questions; the person inside the Chinese Room, by following the instructions, passes out symbols in the form of answers comparable to those of a native Chinese speaker. The question is: does this constitute an understanding of Chinese on the part of the person in the room? Functionalists say it does, but analysis of the Chinese Room proves otherwise.
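The room can be caricatured as a lookup program: incoming strings of symbols are matched against a rulebook and the prescribed strings are passed back out, with no step at which the meanings of the symbols enter the procedure. The rulebook entries below are invented purely for illustration:

    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",           # "if you receive these shapes, pass back these shapes"
        "今天天气怎么样？": "今天天气很好。",
    }

    def chinese_room(input_symbols):
        # the operator only matches shapes against the rulebook;
        # at no point does the meaning of any symbol enter the procedure
        return RULEBOOK.get(input_symbols, "对不起，我不明白。")

    print(chinese_room("你好吗？"))  # looks like a fluent reply to an outside observer

To an observer outside the room the exchange looks like fluent conversation; inside, only shape-matching has taken place.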

In the Chinese Room, the symbols being passed out are merely syntactical. The program manipulates symbols only, or, as Searle states more succinctly in Minds, Brains and Science, “They have no semantic content; they are not about anything. They have to be specified purely in terms of their formal or syntactical structure (31).” Another example of the gap between syntax and understanding lies in the phonological branch of linguistics. If one takes all the rules governing word composition, it is entirely possible to make a random word that is phonologically, and thus syntactically, correct. A word such as ‘frall’ is a possible English word, yet it lacks ‘content’ and does not mean anything. Even an actual word such as ‘fall’ has no meaning until semantics is applied. Likewise, a computer so defined can and does follow its specified program to generate the appropriate syntactical response, but it will never be able to give its syntax ‘content’ and will never understand what it is processing. That it seems to output relevant data is only because people perceive it as relevant. If the person outside the Chinese Room does not understand Chinese, then the data coming from the room is as meaningless as the word ‘frall’. In fact, everything that comes out of the Chinese Room is meaningless except to the person who can give the output the ‘content’ it lacks. This is different from when people communicate with each other, where semantics is represented by syntax. Syntax is as different from semantics as behavior is from understanding.
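The phonological point can itself be mechanized: a program can assemble strings that obey (simplified) English sound-composition rules and yet denote nothing. The onset, vowel, and coda lists below are crude assumptions, not a real phonology of English:

    import random

    ONSETS = ["fr", "bl", "st", "gr", "pl"]   # simplified legal word-initial clusters
    VOWELS = ["a", "e", "i", "o"]
    CODAS = ["ll", "nt", "sk", "mp"]          # simplified legal word-final clusters

    def pseudo_word():
        # every piece obeys a (crude) composition rule, so the output is
        # phonologically plausible English -- like 'frall' -- yet means nothing
        return random.choice(ONSETS) + random.choice(VOWELS) + random.choice(CODAS)

    print(pseudo_word())

Every output is well formed in the sense the rules define, and not one of them means anything.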

Searle states, “Those features [consciousness, thoughts, feelings, emotions], by definition, the computer is unable to duplicate however powerful may be its ability to simulate (Minds, Brains and Science, Searle, 37).” A computer may be able to simulate understanding, but it is incapable of actual understanding. Mind, as defined by Searle, means “…the sequence of thoughts, feelings and experiences… (Minds, Brains and Science, Searle, 10).” Since the process of understanding is synonymous with thought, it can be concluded that a computer cannot have a mind. Behavior can also be mimicked, but the understanding and causation behind behavior cannot be duplicated. Behavior in itself is an unreliable and faulty indicator of understanding. If one pretends to understand Chinese, does that behavior indicate understanding? Conversely, if the person in the Chinese Room did understand Chinese and yet chose to return gibberish, how would that behavior factor in? That someone’s behavior displays competence in applying syntactically based rules does not necessarily mean they understand. Once again, content is the key. Behavior is to syntax as mind is to semantics.

Understanding comes from knowledge of semantics. Granted, understanding is expressed through behavior, but the two are not equal. Thus the behavior given by the person in the Chinese Room is insufficient to indicate understanding. Likewise, a computer as defined above can never gain the ability to think.
