John Searle

John Rogers Searle (; born 31 July 1932) is an American philosopher. He is currently Willis S. and Marion Slusser Professor Emeritus of the Philosophy of Mind and Language and Professor of the Graduate School at the University of California, Berkeley. Widely noted for his contributions to the philosophy of language, philosophy of mind, and social philosophy, he began teaching at UC Berkeley in 1959.

As an undergraduate at the University of Wisconsin, Searle was secretary of "Students against Joseph McCarthy". He received all his university degrees, BA, MA, and DPhil, from the University of Oxford, where he held his first faculty positions. Later, at UC Berkeley, he became the first tenured professor to join the 1964–1965 Free Speech Movement. In the late 1980s, Searle challenged the restrictions of Berkeley's 1980 rent stabilization ordinance. Following what came to be known as the California Supreme Court's "Searle Decision" of 1990, Berkeley changed its rent control policy, leading to large rent increases between 1991 and 1994.

In 2000 Searle received the Jean Nicod Prize;[2] in 2004, the National Humanities Medal;[3] and in 2006, the Mind & Brain Prize. Searle's early work on speech acts, influenced by J. L. Austin and Ludwig Wittgenstein, helped establish his reputation. His notable concepts include the "Chinese room" argument against "strong" artificial intelligence. In March 2017 Searle was accused of sexual harassment.

Biography

Searle's father, G. W. Searle, an electrical engineer, was employed by AT&T Corporation, while his mother, Hester Beck Searle, was a physician.[citation needed]

Searle began his college education at the University of Wisconsin-Madison and in his junior year became a Rhodes Scholar at the University of Oxford, where he obtained all his university degrees, BA, MA, and DPhil.[citation needed]

His first two faculty positions were at Oxford as Research Lecturer, and Lecturer and Tutor at Christ Church.[citation needed]

Politics

While an undergraduate at the University of Wisconsin, Searle was the secretary of "Students against Joseph McCarthy".[4] McCarthy was then the junior senator from Wisconsin. In 1959 Searle began teaching at Berkeley, and he was the first tenured professor to join the 1964–65 Free Speech Movement.[5] In 1969, while serving as chairman of the Academic Freedom Committee of the Academic Senate of the University of California,[6] he supported the university in its dispute with students over the People's Park. In The Campus War: A Sympathetic Look at the University in Agony (1971),[7] Searle investigates the causes behind the campus protests of the era. In it he declares that: "I have been attacked by both the House Un-American Activities Committee and ... several radical polemicists ... Stylistically, the attacks are interestingly similar. Both rely heavily on insinuation and innuendo, and both display a hatred – one might almost say terror – of close analysis and dissection of argument." He asserts that "My wife was threatened that I (and other members of the administration) would be assassinated or violently attacked."[4]

In the late 1980s, Searle, along with other landlords, petitioned Berkeley's rental board to raise the limits on how much he could charge tenants under the city's 1980 rent stabilization ordinance.[8] The rental board refused to consider Searle's petition and Searle filed suit, charging a violation of due process. In 1990, in what came to be known as the "Searle Decision", the California Supreme Court upheld Searle's argument in part and Berkeley changed its rent control policy, leading to large rent increases between 1991 and 1994. Searle was reported to see the issue as one of fundamental rights, being quoted as saying "The treatment of landlords in Berkeley is comparable to the treatment of blacks in the South...our rights have been massively violated and we are here to correct that injustice."[9] The court described the debate as a "morass of political invective, ad hominem attack, and policy argument".[10]

Shortly after the September 11 attacks, Searle wrote an article arguing that the attacks were a particular event in a long-term struggle against forces that are intractably opposed to the United States, and signaled support for a more aggressive neoconservative, interventionist foreign policy. He called for the realization that the United States is in a more-or-less permanent state of war with these forces. Moreover, a probable course of action would be to deny terrorists the use of foreign territory from which to stage their attacks. Finally, he alluded to the long-term nature of the conflict and blamed the attacks on the lack of American resolve to deal forcefully with America's enemies over the past several decades.[11]

Sexual harassment allegations

In March 2017, Searle was the subject of sexual assault allegations. The Los Angeles Times reported: "A new lawsuit alleges that university officials failed to properly respond to complaints that John Searle, an 84-year-old renowned philosophy professor, sexually assaulted his 24-year-old research associate last July and cut her pay when she rejected his advances."[12][13] Berkeley's alleged protection of Searle was seen in some quarters as a recurrence of earlier problems in dealing with similar accusations against other staff.[14][15]

The suit, filed in a California court on March 21, 2017, seeks damages from both Searle and the Regents of the University of California as his employers. It also claims that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits".[16] It was reported that "[i]n early March, Searle’s students learned he would no longer be teaching his undergraduate 'Philosophy of Mind' course. Beyond citing 'personal reasons,' university officials provided no explanation for Searle’s departure, according to a department source who asked to remain anonymous."[16]

After the lawsuit was made public, several previous allegations of sexual harassment by Searle were also revealed.[17]

Awards and recognitions

He has five honorary doctorate degrees from four different countries and is an honorary visiting professor at Tsing Hua University and East China Normal University. Searle is an atheist.[18]

In 2000 Searle received the Jean Nicod Prize;[2] in 2004, the National Humanities Medal;[3] and in 2006, the Mind & Brain Prize.

Philosophy

Speech acts

Searle's early work, which did a great deal to establish his reputation, was on speech acts. He attempted to synthesize ideas from many colleagues – including J. L. Austin (the "illocutionary act", from How To Do Things with Words), Ludwig Wittgenstein and G. C. J. Midgley (the distinction between regulative and constitutive rules) – with his own thesis that such acts are constituted by the rules of language. He also drew on the work of Paul Grice (the analysis of meaning as an attempt at being understood), Hare and Stenius (the distinction, concerning meaning, between illocutionary force and propositional content), P. F. Strawson, John Rawls and William Alston, who maintained that sentence meaning consists in sets of regulative rules requiring the speaker to perform the illocutionary act indicated by the sentence and that such acts involve the utterance of a sentence which (a) indicates that one performs the act; (b) means what one says; and (c) addresses an audience in the vicinity.

In his 1969 book Speech Acts, Searle sets out to combine all these elements to give his account of illocutionary acts. There he provides an analysis of what he considers the prototypical illocutionary act of promising and offers sets of semantical rules intended to represent the linguistic meaning of devices indicating further illocutionary act types. Among the concepts presented in the book is the distinction between the "illocutionary force" and the "propositional content" of an utterance. Searle does not precisely define the former as such, but rather introduces several possible illocutionary forces by example. According to Searle, the sentences...

  1. Sam smokes habitually.
  2. Does Sam smoke habitually?
  3. Sam, smoke habitually!
  4. Would that Sam smoked habitually!

...each indicate the same propositional content (Sam smoking habitually) but differ in the illocutionary force indicated (respectively, a statement, a question, a command and an expression of desire).[19]
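
The structure Searle describes here can be pictured as a simple data type pairing one propositional content with an illocutionary force. The sketch below is only an illustration of that pairing; the class and label names are mine, not Searle's notation:

```python
# Illustrative sketch: one propositional content carried by four different
# illocutionary forces, mirroring Searle's four example sentences.
from dataclasses import dataclass
from enum import Enum, auto

class Force(Enum):
    ASSERTION = auto()  # "Sam smokes habitually."
    QUESTION = auto()   # "Does Sam smoke habitually?"
    COMMAND = auto()    # "Sam, smoke habitually!"
    WISH = auto()       # "Would that Sam smoked habitually!"

@dataclass(frozen=True)
class SpeechAct:
    force: Force
    proposition: str    # the shared propositional content

acts = [SpeechAct(f, "Sam smokes habitually") for f in Force]

# All four utterances share one propositional content but differ in force.
assert len({a.proposition for a in acts}) == 1
assert len({a.force for a in acts}) == 4
```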

According to a later account, which Searle presents in Intentionality (1983) and which differs in important ways from the one suggested in Speech Acts, illocutionary acts are characterised by their having "conditions of satisfaction" (an idea adopted from Strawson's 1971 paper "Meaning and Truth") and a "direction of fit" (an idea adopted from Elizabeth Anscombe). For example, the statement "John bought two candy bars" is satisfied if and only if it is true, i.e. John did buy two candy bars. By contrast, the command "John, buy two candy bars!" is satisfied if and only if John carries out the action of purchasing two candy bars. Searle refers to the first as having the "word-to-world" direction of fit, since the words are supposed to change to accurately represent the world, and the second as having the "world-to-word" direction of fit, since the world is supposed to change to match the words. (There is also the double direction of fit, in which the relationship goes both ways, and the null or zero direction of fit, in which it goes neither way because the propositional content is presupposed, as in "I'm sorry I ate John's candy bars.")
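
A minimal way to picture conditions of satisfaction and direction of fit, on the reading given above, is to attach a satisfaction test and a fit label to each act. The representation below is an illustrative sketch, not Searle's own formalism:

```python
# Illustrative sketch: the statement and the command share a condition of
# satisfaction but carry opposite directions of fit.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict

class Fit(Enum):
    WORD_TO_WORLD = auto()  # statements: the words must match the world
    WORLD_TO_WORD = auto()  # commands, promises: the world must come to match the words

@dataclass
class IllocutionaryAct:
    content: str
    fit: Fit
    satisfied_by: Callable[[Dict[str, int]], bool]  # condition of satisfaction

world = {"candy_bars_bought_by_john": 2}

statement = IllocutionaryAct("John bought two candy bars", Fit.WORD_TO_WORLD,
                             lambda w: w["candy_bars_bought_by_john"] == 2)
command = IllocutionaryAct("John, buy two candy bars!", Fit.WORLD_TO_WORD,
                           lambda w: w["candy_bars_bought_by_john"] == 2)

# Same test, different direction of fit: a false statement should be revised,
# an unfulfilled command should be acted on.
print(statement.satisfied_by(world), command.satisfied_by(world))
```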

In Foundations of Illocutionary Logic[20] (1985, with Daniel Vanderveken), Searle prominently uses the notion of the "illocutionary point".[21]

Searle's speech-act theory has been challenged by several thinkers in a variety of ways. Collections of articles referring to Searle's account are found in Burkhardt 1990[22] and Lepore / van Gulick 1991.[23]

Searle–Derrida debate

See also: Limited Inc

In the early 1970s, Searle had a brief exchange with Jacques Derrida regarding speech-act theory. The exchange was characterized by a degree of mutual hostility between the philosophers, each of whom accused the other of having misunderstood his basic points.[24][citation needed] Searle was particularly hostile to Derrida's deconstructionist framework and much later refused to let his response to Derrida be printed along with Derrida's papers in the 1988 collection Limited Inc. Searle did not consider Derrida's approach to be legitimate philosophy or even intelligible writing and argued that he did not want to legitimize the deconstructionist point of view by dedicating any attention to it. Consequently, some critics[25] have considered the exchange to be a series of elaborate misunderstandings rather than a debate, while others[26] have seen either Derrida or Searle gaining the upper hand. The level of hostility can be seen from Searle's statement that "It would be a mistake to regard Derrida's discussion of Austin as a confrontation between two prominent philosophical traditions", to which Derrida replied that that sentence was "the only sentence of the 'reply' to which I can subscribe".[27] Commentators have frequently interpreted the exchange as a prominent example of a confrontation between analytical and continental philosophy.

The debate began in 1972, when, in his paper "Signature Event Context", Derrida analyzed J. L. Austin's theory of the illocutionary act. While sympathetic to Austin's departure from a purely denotational account of language to one that includes "force", Derrida was sceptical of the framework of normativity employed by Austin. He argued that Austin had missed the fact that any speech event is framed by a "structure of absence" (the words that are left unsaid due to contextual constraints) and by "iterability" (the constraints on what can be said, given by what has been said in the past). Derrida argued that the focus on intentionality in speech-act theory was misguided because intentionality is restricted to that which is already established as a possible intention. He also took issue with the way Austin had excluded the study of fiction, non-serious or "parasitic" speech, wondering whether this exclusion was because Austin had considered these speech genres governed by different structures of meaning, or simply due to a lack of interest.

In his brief reply to Derrida, "Reiterating the Differences: A Reply to Derrida", Searle argued that Derrida's critique was unwarranted because it assumed that Austin's theory attempted to give a full account of language and meaning when its aim was much narrower. Searle considered the omission of parasitic discourse forms to be justified by the narrow scope of Austin's inquiry.[28][29] Searle agreed with Derrida's proposal that intentionality presupposes iterability, but did not apply the same concept of intentionality used by Derrida, being unable or unwilling to engage with the continental conceptual apparatus.[26] This, in turn, caused Derrida to criticize Searle for not being sufficiently familiar with phenomenological perspectives on intentionality.[30] Searle also argued that Derrida's disagreement with Austin turned on his having misunderstood Austin's (and Peirce's) type–token distinction and his failure to understand Austin's concept of failure in relation to performativity. Some critics[30] have suggested that Searle, by being so grounded in the analytical tradition, was unable to engage with Derrida's continental phenomenological tradition and was at fault for the unsuccessful nature of the exchange.

Derrida, in his response to Searle ("a b c ..." in Limited Inc), ridiculed Searle's positions. Arguing that a clear sender of Searle's message could not be established, he suggested that Searle had formed with Austin a société à responsabilité limitée (a "limited liability company") due to the ways in which the ambiguities of authorship within Searle's reply circumvented the very speech act of his reply. Searle did not respond. Later in 1988, Derrida tried to review his position and his critiques of Austin and Searle, reiterating that he found the constant appeal to "normality" in the analytical tradition to be problematic.[26][31][32][33][34][35][36][37]

In the debate, Derrida praises Austin's work, but argues that he is wrong to banish what Austin calls "infelicities" from the "normal" operation of language. One "infelicity," for instance, occurs when it cannot be known whether a given speech act is "sincere" or "merely citational" (and therefore possibly ironic, etc.). Derrida argues that every iteration is necessarily "citational", due to the graphematic nature of speech and writing, and that language could not work at all without the ever-present and ineradicable possibility of such alternate readings. Derrida takes Searle to task for his attempt to get around this issue by grounding final authority in the speaker's inaccessible "intention". Derrida argues that intention cannot possibly govern how an iteration signifies, once it becomes hearable or readable. All speech acts borrow a language whose significance is determined by historical-linguistic context, and by the alternate possibilities that this context makes possible. This significance, Derrida argues, cannot be altered or governed by the whims of intention.

In 1995, Searle gave a brief reply to Derrida in The Construction of Social Reality. "Derrida, as far as I can tell, does not have an argument. He simply declares that there is nothing outside of texts (Il n'y a pas de 'hors-texte')." Then, in Limited Inc., Derrida "apparently takes it all back", claiming that he meant only "the banality that everything exists in some context or other!" Derrida and others like him present "an array of weak or even nonexistent arguments for a conclusion that seems preposterous".[38] In Of Grammatology (1967), Derrida claims that a text must not be interpreted by reference to anything "outside of language", which for him means "outside of writing in general". He adds: "There is nothing outside of the text [there is no outside-text; il n'y a pas de hors-texte]" (brackets in the translation).[39] This is a metaphor: un hors-texte is a bookbinding term, referring to a 'plate' bound among pages of text.[40] Searle cites Derrida's supplementary metaphor rather than his initial contention. However, whether Searle's objection is good against that contention is the point in debate.

Intentionality and the background

Searle defines intentionality as the power of minds to be about, to represent (see Correspondence theory of truth), or to stand for, things, properties and states of affairs in the world.[41] The nature of intentionality is an important part of discussions of Searle's philosophy of mind. Searle emphasizes that the word ‘intentionality’ (the part of the mind directed to/from/about objects and relations in the world independent of mind) should not be confused with the word ‘intensionality’ (the logical property of some sentences that do not pass the test of 'extensionality').[42] In Intentionality: An Essay in the Philosophy of Mind (1983), Searle applies certain elements of his account(s) of "illocutionary acts" to the investigation of intentionality. Searle also introduces a technical term, the Background,[43] which, according to him, has been the source of much philosophical discussion ("though I have been arguing for this thesis for almost twenty years," Searle writes,[44] "many people whose opinions I respect still disagree with me about it"). He calls the Background the set of abilities, capacities, tendencies, and dispositions that humans have and that are not themselves intentional states. Thus, when someone asks us to "cut the cake" we know to use a knife and when someone asks us to "cut the grass" we know to use a lawnmower (and not vice versa), even though the actual request did not include this detail. Searle sometimes supplements his reference to the Background with the concept of the Network, one's network of other beliefs, desires, and other intentional states necessary for any particular intentional state to make sense. Searle argues that the concept of the Background is similar to the concepts provided by several other thinkers, including Wittgenstein's private language argument ("the work of the later Wittgenstein is in large part about the Background"[45]) and Pierre Bourdieu's habitus.

To give an example, two chess players might be engaged in a bitter struggle at the board, but they share all sorts of Background presuppositions: that they will take turns to move, that no one else will intervene, that they are both playing to the same rules, that the fire alarm won't go off, that the board won't suddenly disintegrate, that their opponent won't magically turn into a grapefruit, and so on indefinitely. As most of these possibilities won't have occurred to either player,[46] Searle thinks the Background must be unconscious, though elements of it can be called to consciousness (if the fire alarm does go off, say).

In his debate with Derrida, Searle argued against Derrida's view that a statement can be disjoined from the original intentionality of its author, for example when no longer connected to the original author, while still being able to produce meaning. Searle maintained that even if one were to see a written statement with no knowledge of authorship it would still be impossible to escape the question of intentionality, because "a meaningful sentence is just a standing possibility of the (intentional) speech act". For Searle, ascribing intentionality to a statement was a basic requirement for attributing it any meaning at all.[47][48]

Consciousness

Building upon his views about intentionality, Searle presents a view concerning consciousness in his book The Rediscovery of the Mind (1992). He argues that, starting with behaviorism (an early but influential scientific view, succeeded by many later accounts that Searle also dismisses), much of modern philosophy has tried to deny the existence of consciousness, with little success. In Intentionality, he parodies several alternative theories of consciousness by replacing their accounts of intentionality with comparable accounts of the hand:

No one would think of saying, for example, "Having a hand is just being disposed to certain sorts of behavior such as grasping" (manual behaviorism), or "Hands can be defined entirely in terms of their causes and effects" (manual functionalism), or "For a system to have a hand is just for it to be in a certain computer state with the right sorts of inputs and outputs" (manual Turing machine functionalism), or "Saying that a system has hands is just adopting a certain stance toward it" (the manual stance). (p. 263)

Searle argues that philosophy has been trapped by a false dichotomy: that, on the one hand, the world consists of nothing but objective particles in fields of force, while, on the other hand, consciousness is clearly a subjective first-person experience.

Searle says simply that both are true: consciousness is a real subjective experience, caused by the physical processes of the brain. (A view which he suggests might be called biological naturalism.)

Ontological subjectivity

Searle has argued[49] that critics like Daniel Dennett, who (he claims) insist that discussing subjectivity is unscientific because science presupposes objectivity, are making a category error. Perhaps the goal of science is to establish and validate statements which are epistemically objective (i.e., whose truth can be discovered and evaluated by any interested party), but are not necessarily ontologically objective.

Searle calls any value judgment epistemically subjective. Thus, "McKinley is prettier than Everest" is "epistemically subjective", whereas "McKinley is higher than Everest" is "epistemically objective." In other words, the latter statement is evaluable (in fact, falsifiable) by an understood ('background') criterion for mountain height, like 'the summit is so many meters above sea level'. No such criteria exist for prettiness.

Beyond this distinction, Searle thinks there are certain phenomena (including all conscious experiences) that are ontologically subjective, i.e. can only exist as subjective experience. For example, although it might be subjective or objective in the epistemic sense, a doctor's note that a patient suffers from back pain is an ontologically objective claim: it counts as a medical diagnosis only because the existence of back pain is "an objective fact of medical science".[50] The pain itself, however, is ontologically subjective: it is only experienced by the person having it.

Searle goes on to affirm that "where consciousness is concerned, the existence of the appearance is the reality".[51] His view that the epistemic and ontological senses of objective/subjective are cleanly separable is crucial to his self-proclaimed biological naturalism.

Artificial intelligence

See also: Chinese room and philosophy of artificial intelligence

A consequence of biological naturalism is that if we want to create a conscious being, we will have to duplicate whatever physical processes the brain goes through to cause consciousness. Searle thereby means to contradict what he calls "Strong AI", defined by the assumption that as soon as a certain kind of software is running on a computer, a conscious being is thereby created.[52]

In 1980, Searle presented the "Chinese room" argument, which purports to prove the falsity of strong AI.[53] Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the instructions in the book, transcribing characters as instructed onto the scratch paper, and slide the resulting sheet out the second slit. To people on the outside, it appears the room speaks Chinese—they slide Chinese statements in one slit and get valid responses in return—yet you do not understand a word of Chinese. This suggests, according to Searle, that no computer can ever understand Chinese or English, because, as the thought experiment suggests, being able to produce appropriate responses to Chinese symbols does not entail 'understanding' Chinese: all that the person in the thought experiment, and hence a computer, is able to do is execute certain syntactic manipulations.[54][55]
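
One way to see the purely syntactic character of what the room's occupant does is to model the rule book as a lookup from input strings to output strings. The toy table below is an illustrative stand-in, not an actual Chinese question-and-answer program:

```python
# Toy sketch: the "rule book" maps shapes to shapes. Nothing in the procedure
# represents what the symbols mean, which is Searle's point about syntax.
RULE_BOOK = {
    "你好吗": "我很好",     # the operator never needs to know these strings
    "你会说中文吗": "会",    # mean "How are you?" / "I'm fine", etc.
}

def room_operator(symbols_in: str) -> str:
    """Follow the instructions: match the incoming shapes, copy out the listed reply."""
    return RULE_BOOK.get(symbols_in, "")  # purely syntactic string matching

print(room_operator("你好吗"))  # a fluent-looking reply, produced without understanding
```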

Stevan Harnad argues that Searle's "Strong AI" is really just another name for functionalism and computationalism, and that these positions are the real targets of his critique.[56] Functionalists argue that consciousness can be defined as a set of informational processes inside the brain. It follows that anything that carries out the same informational processes as a human is also conscious. Thus, if we wrote a computer program that was conscious, we could run that computer program on, say, a system of ping-pong balls and beer cups and the system would be equally conscious, because it was running the same information processes.

Searle argues that this is impossible, since consciousness is a physical property, like digestion or fire. No matter how good a simulation of digestion you build on the computer, it will not digest anything; no matter how well you simulate fire, nothing will get burnt. By contrast, informational processes are observer-relative: observers pick out certain patterns in the world and consider them information processes, but information processes are not things-in-the-world themselves. Since they do not exist at a physical level, Searle argues, they cannot have causal efficacy and thus cannot cause consciousness. There is no physical law, Searle insists, that can see the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system all implementing the same program.[57]

Social reality

Searle extended his inquiries into observer-relative phenomena by trying to understand social reality. Searle begins by arguing that collective intentionality (e.g. "we're going for a walk") is a distinct form of intentionality, not simply reducible to individual intentionality (e.g. "I'm going for a walk with him and I think he thinks he's going for a walk with me and he thinks I think I'm going for a walk with him and ...").

In The Construction of Social Reality (1995), Searle addresses the mystery of how social constructs like "baseball" or "money" can exist in a world consisting only of physical particles in fields of force. Adapting an idea by Elizabeth Anscombe in "On Brute Facts," Searle distinguishes between brute facts, like the height of a mountain, and institutional facts, like the score of a baseball game. Aiming at an explanation of social phenomena in terms of Anscombe's notion, he argues that society can be explained in terms of institutional facts, and institutional facts arise out of collective intentionality through constitutive rules with the logical form "X counts as Y in C". Thus, for instance, filling out a ballot counts as a vote in a polling place, getting so many votes counts as a victory in an election, getting a victory counts as being elected president in the presidential race, etc.
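
The "X counts as Y in context C" schema can be pictured as a small rule table in which the output of one rule can serve as the input of the next. The sketch below is only an illustration of that layering; the rule entries and helper name are mine:

```python
# Illustrative sketch of constitutive rules of the form "X counts as Y in context C".
from typing import Optional

CONSTITUTIVE_RULES = [
    # (X, context C, Y)
    ("marked ballot", "polling place", "vote"),
    ("majority of votes", "election", "victory"),
    ("victory", "presidential race", "being elected president"),
]

def counts_as(x: str, context: str) -> Optional[str]:
    """Return the institutional status Y that X counts as in context C, if any."""
    for rule_x, rule_c, rule_y in CONSTITUTIVE_RULES:
        if rule_x == x and rule_c == context:
            return rule_y
    return None

# Institutional facts stack: each Y can serve as the X of a further rule.
print(counts_as("marked ballot", "polling place"))   # -> vote
print(counts_as("victory", "presidential race"))     # -> being elected president
```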

Many sociologists, however, do not see Searle's contributions to social theory as very significant. Neil Gross, for example, argues that Searle's views on society are more or less a reconstitution of the sociologist Émile Durkheim's theories of social facts, social institutions, collective representations, and the like. Searle's ideas are thus open to the same criticisms as Durkheim's.[58] Searle responded that Durkheim's work was worse than he had originally believed and, admitting he had not read much of Durkheim's work, said that, "Because Durkheim’s account seemed so impoverished I did not read any further in his work."[59] Steven Lukes, however, responded to Searle's response to Gross and argued point by point against the allegations that Searle makes against Durkheim, essentially upholding Gross' argument that Searle's work bears great resemblance to Durkheim's. Lukes attributes Searle's miscomprehension of Durkheim's work to the fact that Searle never read Durkheim.[60]

Searle–Lawson debate

In recent years, Searle's main interlocutor on issues of social ontology has been Tony Lawson. Although their accounts of social reality are similar, there are important differences. Lawson places emphasis on the notion of social totality whereas Searle prefers to refer to institutional facts. Furthermore, Searle believes that emergence implies causal reduction whereas Lawson argues that social totalities cannot be completely explained by the causal powers of their components. Searle also places language at the foundation of the construction of social reality while Lawson believes that community formation necessarily precedes the development of language and therefore there must be the possibility for non-linguistic social structure formation.[61][62][63] The debate is ongoing and takes place additionally through regular meetings of the Centre for Social Ontology at the University of California, Berkeley and the Cambridge Social Ontology Group at the University of Cambridge.[64]

Rationality

In Rationality in Action (2001), Searle argues that standard notions of rationality are badly flawed. According to what he calls the Classical Model, rationality is seen as something like a train track: you get on at one point with your beliefs and desires and the rules of rationality compel you all the way to a conclusion. Searle doubts this picture of rationality holds generally.

Searle briefly critiques one particular set of these rules: those of mathematical decision theory. He points out that its axioms require that anyone who valued a quarter and their life would, at some odds, bet their life for a quarter. Searle insists he would never take such a bet and believes that this stance is perfectly rational.
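
The worry can be made concrete with a back-of-the-envelope expected-utility calculation. The numbers below are arbitrary and only illustrate that, once the quarter and one's life are both assigned finite utilities, some sufficiently long odds make the bet come out "rational" on the standard axioms:

```python
# Illustrative numbers only: finite utilities for winning a quarter and for losing one's life.
UTILITY_QUARTER = 1.0
UTILITY_LOSE_LIFE = -1_000_000_000.0

def expected_utility(p_lose: float) -> float:
    """Expected utility of the bet when the chance of losing one's life is p_lose."""
    return (1 - p_lose) * UTILITY_QUARTER + p_lose * UTILITY_LOSE_LIFE

print(expected_utility(1e-6))   # still negative: the odds are not long enough
print(expected_utility(1e-12))  # positive: at these odds the theory says take the bet
```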

Most of his attack is directed against the common conception of rationality, which he believes is badly flawed. First, he argues that reasons don't cause you to do anything, because having sufficient reason wills (but doesn't force) you to do that thing. So in any decision situation we experience a gap between our reasons and our actions. For example, when we decide to vote, we do not simply determine that we care most about economic policy and that we prefer candidate Jones's economic policy. We also have to make an effort to cast our vote. Similarly, every time a guilty smoker lights a cigarette they are aware of succumbing to their craving, not merely of acting automatically as they do when they exhale. It is this gap that makes us think we have freedom of the will. Searle thinks whether we really have free will or not is an open question, but considers its absence highly unappealing because it makes the feeling of freedom of will an epiphenomenon, which is highly unlikely from the evolutionary point of view given its biological cost. He also says: "All rational activity presupposes free will".[65]

Second, Searle believes we can rationally do things that don't result from our own desires. It is widely believed that one cannot derive an "ought" from an "is", i.e. that facts about how the world is can never tell you what you should do ('Hume's Law'). By contrast, in so far as a fact is understood as relating to an institution (marriage, promises, commitments, etc.), which is to be understood as a system of constitutive rules, then what one should do can be understood as following from the institutional fact of what one has done; institutional fact, then, can be understood as opposed to the "brute facts" related to Hume's Law. For example, Searle believes the fact that you promised to do something means you should do it, because by making the promise you are participating in the constitutive rules that arrange the system of promise making itself, and therefore understand a "shouldness" as implicit in the mere factual action of promising. Furthermore, he believes that this provides a desire-independent reason for an action—if you order a drink at a bar, you should pay for it even if you have no desire to. This argument, which he first made in his paper "How to Derive 'Ought' from 'Is'" (1964),[66] remains highly controversial, but even three decades later Searle continued to defend his view that "...the traditional metaphysical distinction between fact and value cannot be captured by the linguistic distinction between 'evaluative' and 'descriptive' because all such speech act notions are already normative."[67]

Third, Searle argues that much of rational deliberation involves adjusting our (often inconsistent) patterns of desires to decide between outcomes, not the other way around. While in the Classical Model, one would start from a desire to go to Paris greater than that of saving money and calculate the cheapest way to get there, in reality people balance the niceness of Paris against the costs of travel to decide which desire (visiting Paris or saving money) they value more. Hence, he believes rationality is not a system of rules, but more of an adverb. We see certain behavior as rational, no matter what its source, and our system of rules derives from finding patterns in what we see as rational.

Bibliography

Primary

  • Speech Acts: An Essay in the Philosophy of Language (1969), Cambridge University Press, ISBN 978-0521096263[2]
  • The Campus War: A Sympathetic Look at the University in Agony (political commentary; 1971)
  • Expression and Meaning: Studies in the Theory of Speech Acts (essay collection; 1979)
  • Intentionality: An Essay in the Philosophy of Mind (1983)
  • Minds, Brains and Science: The 1984 Reith Lectures (lecture collection; 1984)
  • Foundations of Illocutionary Logic (John Searle & Daniel Vanderveken 1985)
  • The Rediscovery of the Mind (1992)
  • The Construction of Social Reality (1995)
  • The Mystery of Consciousness (review collection; 1997)
  • Mind, Language and Society: Philosophy in the Real World (summary of earlier work; 1998)
  • Rationality in Action (2001)
  • Consciousness and Language (essay collection; 2002)
  • Freedom and Neurobiology (lecture collection; 2004)
  • Mind: A Brief Introduction (summary of work in philosophy of mind; 2004)
  • Philosophy in a New Century: Selected Essays (2008)
  • Making the Social World: The Structure of Human Civilization (2010)
  • “What Your Computer Can’t Know” (review of Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality, Oxford University Press, 2014; and Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), pp. 52–55.
  • Seeing Things As They Are: A Theory of Perception (2015)

Secondary

  • John Searle and His Critics (Ernest Lepore and Robert Van Gulick, eds.; 1991)
  • John Searle (Barry Smith, ed.; 2003)
  • John Searle and the Construction of Social Reality (Joshua Rust; 2006)
  • Intentional Acts and Institutional Facts (Savas Tsohatzidis, ed.; 2007)
  • John Searle (Joshua Rust; 2009)


References

  1. ^"Introduction: John Searle in Czech Context"(PDF). Sav.sk. 2012. Retrieved April 21, 2017. 
  2. ^ ab"Archived copy". Archived from the original on 2015-09-23. Retrieved 2015-06-11. 
  3. ^ ab"President Bush Awards 2004 National Humanities Medals". NEH.gov. Retrieved April 21, 2017. 
  4. ^ abhttp://www.ditext.com/searle/campus/1.html
  5. ^"Socrates and Berkeley Scholars Web Hosting Services Have Been Retired - Web Platform Services". socrates.berkeley.edu. 
  6. ^http://www.ditext.com/searle/campus/4.html
  7. ^"The Campus War". Retrieved 2012-03-24. 
  8. ^See Searle v. City of Berkeley Rent Stabilization Bd. (1988) 197 Cal.App.3d 1251, 1253 [243 Cal.Rptr. 449]
  9. ^California, Berkeley Daily Planet, Berkeley (December 14, 2004). "Letters to the Editor. Category: Features from The Berkeley Daily Planet". BerkeleyDailyPlanet.com. Retrieved April 21, 2017. 
  10. ^Gerald Korngold, Whatever Happened to Landlord-Tenant Law?, 77 Neb. L. Rev. (1998). Available at: http://digitalcommons.unl.edu/nlr/vol77/iss4/5
  11. ^http://ist-socrates.berkeley.edu/~jsearle/pdf/terrorism.pdf
  12. ^Watanabe, Tessa (March 23, 2017). "Lawsuit alleges that a UC Berkeley professor sexually assaulted his researcher and cut her pay when she rejected him". Los Angeles Times. Retrieved March 28, 2017. 
  13. ^Fraley, Malaika (March 23, 2017). "Berkeley: Renowned philosopher John Searle accused of sexual assault and harassment at UC Berkeley". East Bay Times. Retrieved March 28, 2017. 
  14. ^Flaherty, Colleen (March 23, 2017). "Sexual misconduct and the Faculty Code: In wake of scandals, U of California strengthens faculty policies against sexual harassment and assault". Inside Higher Ed. Retrieved March 28, 2017. 
  15. ^Baker, Katie J. M. (April 7, 2017). "UC Berkeley Was Warned About Its Star Professor Years Before Sexual Harassment Lawsuit". BuzzFeedNews. Retrieved April 8, 2017. 
  16. ^ abBaker, Katie J.M. (March 24, 2017). "A Former Student Says UC Berkeley's Star Philosophy Professor Groped Her And Watched Porn At Work". BuzzFeedNews. Retrieved March 28, 2017.  Contains facsimile of the suit.
  17. ^Tate, Emily (April 10, 2017). "Earlier Complaints on Professor Accused of Harassment". Inside Higher Ed. 
  18. ^Reviewing an episode of the Channel 4 series Voices: "On the one hand, Sir John Eccles, a quiet-spoken theist with the most devastating way of answering questions with a single "yes", on the other, Professor Searle, a flamboyant atheist using words I've never heard of or likely to again "now we know that renal secretions synthesize a substance called angiotensin and that angiotensin gets into the hypothalamus and causes a series of neuron firings". " Peter Dear, 'Today's television and radio programmes', The Times, February 22, 1984; pg. 31; Issue 61764; col A.
  19. ^John R. Searle (1969). Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press. ISBN 9780521096263. 
  20. ^John R. Searle, Daniel Vanderveken (1985). Foundations of Illocutionary Logic. Cambridge University Press. ISBN 0-521-26324-7. 
  21. ^Although Searle does not mention earlier uses of the concept, it originates from Alexander Sesonske's article "Performatives".
  22. ^Burkhardt, Armin (ed.), Speech Acts, Meaning and Intentions: Critical Approaches to the Philosophy of John R. Searle. Berlin / New York 1990.
  23. ^Lepore, Ernest / van Gulick, Robert (eds): John Searle and his Critics. Oxford: Basil Blackwell 1991.
  24. ^Derrida, Jacques. Limited, Inc. Northwestern University Press, 1988. p. 29: "...I have read some of his [Searle's] work (more, in any case, than he seems to have read of mine)"
  25. ^Maclean, Ian. 2004. "un dialogue de sourds? Some implications of the Austin–Searle–Derrida debate", in Jacques Derrida: critical thought. Ian Maclachlan (ed.) Ashgate Publishing, 2004
  26. ^ abc"Another Look at the Derrida-Searle Debate". Mark Alfino. Philosophy & Rhetoric, Vol. 24, No. 2 (1991), pp. 143–152 [1]
  27. ^Simon Glendinning. 2001. Arguing with Derrida. Wiley-Blackwell. p. 18
  28. ^Gregor Campbell. 1993. "John R. Searle" in Irene Rima Makaryk (ed). Encyclopedia of contemporary literary theory: approaches, scholars, terms. University of Toronto Press, 1993
  29. ^John Searle, "Reiterating the Différences: A Reply to Derrida", Glyph 2 (Baltimore MD: Johns Hopkins University Press, 1977).
  30. ^ abMarian Hobson. 1998. Jacques Derrida: opening lines. Psychology Press. pp. 95–97
  31. ^Jacques Derrida, "Afterwords" in Limited, Inc. (Northwestern University Press, 1988), p. 133
  32. ^Farrell, F. B. (1988). "Iterability and meaning: the Searle–Derrida debate". Metaphilosophy. 19: 53–64. doi:10.1111/j.1467-9973.1988.tb00701. 
  33. ^"With the Compliments of the Author: Reflections on Austin and Derrida". Stanley E. Fish. Critical Inquiry, Vol. 8, No. 4 (Summer 1982), pp. 693-721.
  34. ^"Derrida, Searle, Contexts, Games, Riddles". Edmond Wright. New Literary History, Vol. 13, No. 3 ("Theory: Parodies, Puzzles, Paradigms"), Spring 1982, pp. 463–477.
  35. ^"Convention and Meaning: Derrida and Austin". Jonathan Culler. New Literary History, Vol. 13, No. 1 ("On Convention: I"), Autumn 1981, pp. 15–30.
  36. ^Kenaan, Hagi. "Language, philosophy and the risk of failure: rereading the debate between Searle and Derrida". Continental Philosophy Review. 35 (2): 117–133. doi:10.1023/A:1016583115826. 
  37. ^Raffel, Stanley. "Understanding Each Other: The Case of the Derrida-Searle Debate"(PDF). Human Studies. 34 (3): 277–292. doi:10.1007/s10746-011-9189-6. 
  38. ^Searle, John (1995). The Construction of Social Reality. London: Allen Lane The Penguin P. pp. 159–60. 
  39. ^Derrida, Jacques (1976). Of Grammatology. Translated by Gayatri Chakravorty Spivak. Baltimore: Johns Hopkins U.P. p. 158. 
  40. ^Collins Robert French-English English-French Dictionary (2 ed.). London/Paris: Collins/Robert. 1987. 
  41. ^Searle, Intentionality (1983)
  42. ^Searle "Making the Social World: The Structure of Human Civilization" (2010) p. 48-62
  43. ^Searle, Intentionality (1983); The Rediscovery of the Mind (1992) ch. 8
  44. ^"Literary Theory and Its Discontents", New Literary History, 640
  45. ^Searle, The Rediscovery of the Mind (1992), p.177
  46. ^Searle, The Rediscovery of the Mind (1992), p.185
  47. ^John Searle, "Reiterating the Différences: A Reply to Derrida'"', Glyph 2 (Baltimore MD: Johns Hopkins University Press, 1977 p. 202
  48. ^Gerald Graff. 1988. Summary of Reiterating the differences. in Derrida, JAcques. Limited Inc. p. 26.
  49. ^Searle, J R: The Mystery of Consciousness (1997) p. 95-131
  50. ^Searle, J R: The Mystery of Consciousness (1997) p.122
  51. ^Searle, J R: The Mystery of Consciousness (1997) p.112
  52. ^

1. Overview

Work in Artificial Intelligence (AI) has produced computer programs that can beat the world chess champion and defeat the best human players on the television quiz show Jeopardy. AI has also produced programs with which one can converse in natural language, including Apple's Siri. Our experience shows that playing chess or Jeopardy, and carrying on a conversation, are activities that require understanding and intelligence. Does computer prowess at challenging games and conversation then show that computers can understand and be intelligent? Will further development result in digital computers that fully match or even exceed human intelligence? Alan Turing (1950), one of the pioneer theoreticians of computing, believed the answer to these questions was “yes”. Turing proposed what is now known as “The Turing Test”: if a computer can pass for human in online chat, we should grant that it is intelligent. By the late 1970s some AI researchers claimed that computers already understood at least some natural language. In 1980 U.C. Berkeley philosopher John Searle introduced a short and widely-discussed argument intended to show conclusively that it is impossible for digital computers to understand language or think.

Searle argues that a good way to test a theory of mind, say a theory that holds that understanding can be created by doing such and such, is to imagine what it would be like to do what the theory says would create understanding. Searle (1999) summarized the Chinese Room argument concisely:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle goes on to say, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”

Thirty years later Searle 2010 describes the conclusion in terms of consciousness and intentionality:

I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17).

Searle's shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However, the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle's accounts of meaning and intentionality. Those who don't accept Searle's linking account might hold that running a program can create understanding without necessarily creating consciousness, and a robot might have creature consciousness without necessarily understanding natural language.

Thus Searle develops the broader implications of his argument. It aims to refute the functionalist approach to understanding minds, the approach that holds that mental states are defined by their causal roles, not by the stuff (neurons, transistors) that plays those roles. The argument counts especially against that form of functionalism known as the Computational Theory of Mind that treats minds as information processing systems. As a result of its scope, as well as Searle's clear and forceful writing style, the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test. By 1991 computer scientist Pat Hayes had defined Cognitive Science as the ongoing research project of refuting Searle's argument. Cognitive psychologist Steven Pinker (1997) pointed out that by the mid-1990s well over 100 articles had been published on Searle's thought experiment—and that discussion of it was so pervasive on the Internet that Pinker found it a compelling reason to remove his name from all Internet discussion lists.

This interest has not subsided, and the range of connections with the argument has broadened. A search on Google Scholar for “Searle Chinese Room” limited to the period from 2010 through early 2014 produced over 750 results, including papers making connections between the argument and topics ranging from embodied cognition to theater to talk psychotherapy to postmodern views of truth and “our post-human future” – as well as discussions of group or collective minds and discussions of the role of intuitions in philosophy. This wide range of discussion and implications is a tribute to the argument's simple clarity and centrality.

2. Historical Background

2.1 Leibniz’ Mill

Searle's argument has three important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology. Like Searle's argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences (“perception”).

17. Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. [Robert Latta translation]

Notice that Leibniz's strategy here is to contrast the overt behavior of the machine, which might appear to be the product of conscious thought, with the way the machine operates internally. He points out that these internal mechanical operations are just parts moving from point to point, hence there is nothing that is conscious or that can explain thinking, feeling or perceiving. For Leibniz physical states are not sufficient for, nor constitutive of, mental states.

2.2 Turing's Paper Machine

A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in “Intelligent Machinery” (1948). Turing writes there that he wrote a program for a “paper machine” to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language (e.g., English), and followed by a human. The human operator of the paper chess-playing machine need not (otherwise) know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess—the input and output strings, such as “N–QB7” need mean nothing to the operator of the paper machine.

Turing was optimistic that computers themselves would soon be able to exhibit apparently intelligent behavior, answering questions posed in English and carrying on conversations. Turing (1950) proposed what is now known as the Turing Test: if a computer could pass for human in on-line chat, it should be counted as intelligent. By the late 1970s, as computers became faster and less expensive, some in the burgeoning AI community claimed that their programs could understand English sentences, using a database of background information. The work of one of these, Yale researcher Roger Schank (Schank & Abelson 1977) came to the attention of John Searle (Searle's U.C. Berkeley colleague Hubert Dreyfus was an earlier critic of the claims made by AI researchers). Schank developed a technique called “conceptual representation” that used “scripts” to represent conceptual relations (a form of Conceptual Role Semantics). Searle's argument was originally presented as a response to the claim that AI programs such as Schank's literally understand the sentences that they respond to.

2.3 The Chinese Nation

A third more immediate antecedent to the Chinese Room argument emerged in early discussion of functionalist theories of minds and cognition. Functionalists hold that mental states are defined by the causal role they play in a system (just as a door stop is defined by what it does, not by what it is made out of). Critics of functionalism were quick to turn its proclaimed virtue of multiple realizability against it. In contrast with type-type identity theory, functionalism allowed beings with different physiology to have the same types of mental states as humans—pains, for example. But it was pointed out that if aliens could realize the functional properties that constituted mental states, then, presumably so could systems even less like human brains. The computational form of functionalism is particularly vulnerable to this maneuver, since a wide variety of systems with simple components are computationally equivalent (see e.g., Maudlin 1989 for a computer built from buckets of water). Critics asked if it was really plausible that these inorganic systems could have mental states or feel pain.

Daniel Dennett (1978) reports that in 1974 Lawrence Davis gave a colloquium at MIT in which he presented one such unorthodox implementation. Dennett summarizes Davis' thought experiment as follows:

Let a functionalist theory of pain (whatever its details) be instantiated by a system the subassemblies of which are not such things as C-fibers and reticular systems but telephone lines and offices staffed by people. Perhaps it is a giant robot controlled by an army of human beings that inhabit it. When the theory's functionally characterized conditions for pain are now met we must say, if the theory is true, that the robot is in pain. That is, real pain, as real as our own, would exist in virtue of the perhaps disinterested and businesslike activities of these bureaucratic teams, executing their proper functions.

In “Troubles with Functionalism”, also published in 1978, Ned Block envisions the entire population of China implementing the functions of neurons in the brain. This scenario has subsequently been called “The Chinese Nation” or “The Chinese Gym”. We can suppose that every Chinese citizen would be given a call-list of phone numbers, and at a preset time on implementation day, designated “input” citizens would initiate the process by calling those on their call-list. When any citizen's phone rang, he or she would then phone those on his or her list, who would in turn contact yet others. No phone message need be exchanged; all that is required is the pattern of calling. The call-lists would be constructed in such a way that the patterns of calls implemented the same patterns of activation that occur between neurons in someone's brain when that person is in a mental state—pain, for example. The phone calls play the same functional role as neurons causing one another to fire. Block was primarily interested in qualia, and in particular, whether it is plausible to hold that the population of China might collectively be in pain, while no individual member of the population experienced any pain, but the thought experiment applies to any mental states and operations, including understanding language.
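
The functional core of Block's scenario is just a calling pattern that copies an activation pattern. The toy wiring below is an illustrative sketch (the citizen names and connections are arbitrary), not a serious model of a brain:

```python
# Toy sketch of the Chinese Nation: each citizen plays the role of a neuron,
# and a phone call plays the role of one neuron causing another to fire.
from collections import deque
from typing import Dict, List

CALL_LISTS: Dict[str, List[str]] = {
    "input_1": ["c2", "c3"],   # designated "input" citizen
    "c2": ["c4"],
    "c3": ["c4"],
    "c4": [],                  # "output" citizen
}

def run_implementation(start: str) -> List[str]:
    """Propagate calls breadth-first; the resulting calling pattern is all there is."""
    fired: List[str] = []
    queue = deque([start])
    while queue:
        citizen = queue.popleft()
        if citizen in fired:
            continue
        fired.append(citizen)
        queue.extend(CALL_LISTS.get(citizen, []))
    return fired

print(run_implementation("input_1"))  # ['input_1', 'c2', 'c3', 'c4']
```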

Thus Block's precursor thought experiment, as with those of Davis and Dennett, is a system of many humans rather than one. The focus is on consciousness, but to the extent that Searle's argument also involves consciousness, the thought experiment is closely related to Searle's.

3. The Chinese Room Argument

In 1980 John Searle published “Minds, Brains and Programs” in the journal The Behavioral and Brain Sciences. In this article, Searle sets out the argument, and then replies to the half-dozen main objections that had been raised during his earlier presentations at various university campuses (see next section). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle's replies to his critics.

In the decades following its publication, the Chinese Room argument was the subject of very many discussions. By 1984, Searle presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, the popular periodical Scientific American took the debate to a general scientific audience. Searle included the Chinese Room Argument in his contribution, “Is the Brain's Mind a Computer Program?”, and Searle's piece was followed by a responding article, “Could a Machine Think?”, written by philosophers Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

The heart of the argument is an imagined human simulation of a computer, similar to Turing's Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese symbols, whereas a computer “follows” a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese.

This narrow argument, based closely on the Chinese Room scenario, is specifically directed at a position Searle calls “Strong AI”. Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose behavior they mimic. According to Strong AI, these computers really play chess intelligently, make clever moves, or understand language. By contrast, “weak AI” is the much more modest claim that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers actually understand or are intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that no machine can think—Searle says that brains are machines, and brains think. The argument is directed at the view that formal computations on symbols can produce thought.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows (a schematic rendering of its logical form is sketched just after the list). Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

  1. If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
  2. I could run a program for Chinese without thereby coming to understand Chinese.
  3. Therefore Strong AI is false.
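
Schematically, the narrow argument is a modus tollens. A rough rendering in logical notation (treating the program for Chinese as fixed, ignoring the modal “could” in premise 2, and writing S for the thesis of Strong AI, R(x, P) for “x runs program P”, U(x) for “x understands Chinese”, and m for the room operator) might look like this:

    \[
      \frac{\; S \to \forall x\,\big(R(x,P) \to U(x)\big) \qquad R(m,P) \land \neg U(m) \;}{\neg S}
    \]

Most of the replies canvassed below grant something like the right-hand premise about the operator but deny that the operator is the relevant bearer of understanding.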

The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot endow the system with language understanding. Searle's wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation). That and related issues are discussed in the section The Larger Philosophical Issues.

4. Replies to the Chinese Room Argument

Criticisms of the narrow Chinese Room argument against Strong AI have often followed three main lines, which can be distinguished by how much they concede:

(1) Some critics concede that the man in the room doesn't understand Chinese, but hold that nevertheless running the program may create something that understands Chinese. These critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created. There might be understanding by a larger, or different, entity. This is the strategy of The Systems Reply and the Virtual Mind Reply. These replies hold that the output of the room reflects understanding of Chinese, but the understanding is not that of the room's operator. Thus Searle's claim that he doesn't understand Chinese while running the room is conceded, but his claim that there is no understanding, and that computationalism is false, is denied.

(2) Other critics concede Searle's claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. But these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“The Robot Reply”), or it might be a system that simulated the detailed operation of an entire brain, neuron by neuron (“the Brain Simulator Reply”).

(3) Finally, some critics do not concede even the narrow point against AI. These critics hold that the man in the original Chinese Room scenario might understand Chinese, despite Searle's denials, or that the scenario is impossible. For example, critics have argued that our intuitions in such cases are unreliable. Other critics have held that it all depends on what one means by “understand”—points discussed in the section on The Intuition Reply. Others (e.g. Sprevak 2007) object to the assumption that any system (e.g. Searle in the room) can run any computer program. And finally some have argued that if it is not reasonable to attribute understanding on the basis of the behavior exhibited by the Chinese Room, then it would not be reasonable to attribute understanding to humans on the basis of similar behavioral evidence (Searle calls this last the “Other Minds Reply”). The objection is that we should be willing to attribute understanding in the Chinese Room on the basis of the overt behavior, just as we do with other humans (and some animals), and as we would do with extra-terrestrial Aliens (or burning bushes or angels) that spoke our language.

In addition to these responses specifically to the Chinese Room scenario and the narrow argument to be discussed here, some critics also independently argue against Searle's larger claim, and hold that one can get semantics (that is, meaning) from syntactic symbol manipulation, including the sort that takes place inside a digital computer, a question discussed in the section below on Syntax and Semantics.

4.1 The Systems Reply

In the original BBS article, Searle identified and discussed several responses to the argument that he had come across in giving the argument in talks at various places. As a result, these early responses have received the most attention in subsequent discussion. What Searle 1980 calls “perhaps the most common reply” is the Systems Reply.

The Systems Reply, which Searle says was originally associated with Yale, concedes that the man in the room does not understand Chinese. But, the reply continues, the man is but a part, a central processing unit (CPU), in a larger system. The larger system includes the huge database, the memory (scratchpads) containing intermediate states, and the instructions—the complete system that is required for answering the Chinese questions. So the Systems Reply is that while the man running the program does not understand Chinese, the system as a whole does.

Ned Block was one of the first to press the Systems Reply, along with many others including Jack Copeland, Daniel Dennett, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey. Rey (1986) says the person in the room is just the CPU of the system. Kurzweil (2002) says that the human being is just an implementer and of no significance (presumably meaning that the properties of the implementer are not necessarily those of the system). Kurzweil hews to the spirit of the Turing Test and holds that if the system displays the apparent capacity to understand Chinese “it would have to, indeed, understand Chinese”—Searle is contradicting himself in saying in effect, “the machine speaks Chinese but doesn't understand Chinese”.

Margaret Boden (1988) raises considerations about levels of description. “Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains” (244). “In short, Searle's description of the robot's pseudo-brain (that is, of Searle-in-the-robot) as understanding English involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence”. Boden (1988) points out that the room operator is a conscious agent, while the CPU in a computer is not—the Chinese Room scenario asks us to take the perspective of the implementer, and not surprisingly fails to see the larger picture.

Searle's response to the Systems Reply is simple: in principle, the man can internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax. (See below the section on Syntax and Semantics).

In his 2002 paper “The Chinese Room from a Logical Point of View”, Jack Copeland considers Searle's response to the Systems Reply and argues that a homunculus inside Searle's head might understand even though the room operator himself does not, just as modules in minds solve tensor equations that enable us to catch cricket balls. Copeland then turns to consider the Chinese Gym, and again appears to endorse the Systems Reply: “…the individual players [do not] understand Chinese. But there is no entailment from this to the claim that the simulation as a whole does not come to understand Chinese. The fallacy involved in moving from part to whole is even more glaring here than in the original version of the Chinese Room Argument”. Copeland denies that connectionism implies that a room of people can simulate the brain.

John Haugeland writes (2002) that Searle's response to the Systems Reply is flawed: “…what he now asks is what it would be like if he, in his own mind, were consciously to implement the underlying formal structures and operations that the theory says are sufficient to implement another mind”. According to Haugeland, his failure to understand Chinese is irrelevant: he is just the implementer. The larger system implemented would understand—there is a level-of-description fallacy.

Shaffer 2009 examines modal aspects of the logic of the CRA and argues that familiar versions of the System Reply are question-begging. But, Shaffer claims, a modalized version of the System Reply succeeds because there are possible worlds in which understanding is an emergent property of complex syntax manipulation. Nute 2011 is a reply to Shaffer.

Stevan Harnad has defended Searle's argument against Systems Reply critics in two papers. In his 1989 paper, Harnad writes “Searle formulates the problem as follows: Is the mind a computer program? Or, more specifically, if a computer program simulates or imitates activities of ours that seem to require understanding (such as communicating in language), can the program itself be said to understand in so doing?” (Note the specific claim: the issue is taken to be whether the program itself understands.) Harnad concludes: “On the face of it, [the CR argument] looks valid. It certainly works against the most common rejoinder, the ‘Systems Reply’….” Harnad appears to follow Searle in linking understanding and states of consciousness: Harnad 2012 (Other Internet Resources) argues that Searle shows that the core problem of conscious “feeling” requires sensory connections to the real world.

4.1.1 The Virtual Mind Reply

The Virtual Mind reply concedes, as does the System Reply, that the operator of the Chinese Room does not understand Chinese merely by running the paper machine. However, the Virtual Mind reply holds that what is important is whether understanding is created, not whether the Room operator is the agent that understands. Unlike the Systems Reply, the Virtual Mind reply (VMR) holds that a running system may create new, virtual, entities that are distinct from the system as a whole, as well as from sub-systems such as the CPU or operator. In particular, a running system might create a distinct agent that understands Chinese. This virtual agent would be distinct from both the room operator and the entire system. The psychological traits, including linguistic abilities, of any mind created by artificial intelligence will depend entirely upon the program and the Chinese database, and will not be identical with the psychological traits and abilities of a CPU or the operator of a paper machine, such as Searle in the Chinese Room scenario. According to the VMR the mistake in the Chinese Room Argument is to take the claim of Strong AI to be “the computer understands Chinese” or “the System understands Chinese”. The claim at issue for AI should simply be whether “the running computer creates understanding of Chinese”.

Familiar models of virtual agents are characters in computer or video games, and personal digital assistants, such as Apple's Siri and Microsoft's Cortana. These characters have various abilities and personalities, and the characters are not identical with the system hardware or program that creates them. A single running system might control two distinct agents, or physical robots, simultaneously, one of which converses only in Chinese and one of which can converse only in English, and which otherwise manifest very different personalities, memories, and cognitive abilities. Thus the VM reply asks us to distinguish between minds and their realizing systems.
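
The claim that one running system can realize two distinct agents is easy to picture architecturally. Here is a minimal sketch in Python, with invented agent profiles, of a single program instantiating two conversational agents whose traits are traits of neither the hardware nor the program text.

    # One running program, two virtual agents with disjoint traits.
    # The profiles below are invented for illustration.
    class VirtualAgent:
        def __init__(self, name, language, memories):
            self.name = name
            self.language = language
            self.memories = list(memories)

        def answer(self, question):
            # Each agent answers from its own state, not from the CPU's state.
            return f"[{self.name}, {self.language}] I recall: {self.memories[-1]}"

    def run_system():
        ling = VirtualAgent("Ling", "Chinese", ["grew up in Beijing"])
        eric = VirtualAgent("Eric", "English", ["grew up in Leeds"])
        return ling, eric

    ling, eric = run_system()
    print(ling.answer("Where did you grow up?"))
    print(eric.answer("Where did you grow up?"))

Neither agent's memories or language are properties of the machine that runs the program, which is the distinction the VM reply asks us to draw.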

Minsky (1980) and Sloman and Croucher (1980) suggested a Virtual Mind reply when the Chinese Room argument first appeared. In his widely-read 1989 paper “Computation and Consciousness”, Tim Maudlin considers minimal physical systems that might implement a computational system running a program. His discussion revolves around his imaginary Olympia machine, a system of buckets that transfers water, implementing a Turing machine. Maudlin's main target is the computationalists' claim that such a machine could have phenomenal consciousness. However in the course of his discussion, Maudlin considers the Chinese Room argument. Maudlin (citing Minsky, and Sloman and Croucher) points out a Virtual Mind reply that the agent that understands could be distinct from the physical system (414). Thus “Searle has done nothing to discount the possibility of simultaneously existing disjoint mentalities” (414–5).

Perlis (1992), Chalmers (1996) and Block (2002) have apparently endorsed versions of a Virtual Mind reply as well, as has Richard Hanley in The Metaphysics of Star Trek (1997). Penrose (2002) is a critic of this strategy, and Stevan Harnad scornfully dismisses such heroic resorts to metaphysics. Harnad defended Searle's position in a “Virtual Symposium on Virtual Minds” (1992) against Patrick Hayes and Don Perlis. Perlis pressed a virtual minds argument derived, he says, from Maudlin. Chalmers (1996) notes that the room operator is just a causal facilitator, a “demon”, so that his states of consciousness are irrelevant to the properties of the system as a whole. Like Maudlin, Chalmers raises issues of personal identity—we might regard the Chinese Room as “two mental systems realized within the same physical space. The organization that gives rise to the Chinese experiences is quite distinct from the organization that gives rise to the demon's [= room operator's] experiences”(326).

Cole (1991, 1994) develops the reply and argues as follows: Searle's argument requires that the agent of understanding be the computer itself or, in the Chinese Room parallel, the person in the room. However Searle's failure to understand Chinese in the room does not show that there is no understanding being created. If we flesh out the conversation in the original CR scenario to include questions in Chinese such as “How tall are you?”, “Where do you live?”, “What did you have for breakfast?”, “What is your attitude toward Mao?”, and so forth, it immediately becomes clear that the answers in Chinese are not Searle's answers. Searle is not the author of the answers, and his beliefs and desires, memories and personality traits are not reflected in the answers and, apart from his industriousness, are causally inert in producing the answers to the Chinese questions. Hence if there is understanding of Chinese created by running the program, the mind understanding the Chinese would not be the computer, nor, in the Chinese Room, would the person understanding Chinese be the room operator. The person understanding the Chinese would be a distinct person from the room operator, with beliefs and desires bestowed by the program and its database. Hence Searle's failure to understand Chinese while operating the room does not show that understanding is not being created.

Cole (1991) offers an additional argument that the mind doing the understanding is neither the mind of the room operator nor the system consisting of the operator and the program: running a suitably structured computer program might produce answers to questions submitted in Chinese and also answers to questions submitted in Korean. Yet the Chinese answers might apparently display completely different knowledge and memories, beliefs and desires than the answers to the Korean questions—along with a denial that the Chinese answerer knows any Korean, and vice versa. Thus the behavioral evidence would be that there were two non-identical minds (one understanding Chinese only, and one understanding Korean only). Since these might have mutually exclusive psychological properties, they cannot be identical, and ipso facto, cannot be identical with the mind of the implementer in the room. Analogously, a video game might include a character with one set of cognitive abilities (smart, understands Chinese) as well as another character with an incompatible set (stupid, English monoglot). These inconsistent cognitive traits cannot be traits of the XBOX system that realizes them. The implication seems to be that minds generally are more abstract than the systems that realize them (see Mind and Body in the Larger Philosophical Issues section).

In short, the Virtual Mind argument is that since the only evidence Searle provides that there is no understanding of Chinese is that he himself would not understand Chinese in the room, the Chinese Room Argument cannot refute a differently formulated but equally strong AI claim: that running a programmed digital computer may create understanding. Maudlin (1989) says that Searle has not adequately responded to this criticism.

Others however have replied to the VMR, including Stevan Harnad and mathematical physicist Roger Penrose. Penrose is generally sympathetic to the points Searle raises with the Chinese Room argument, and has argued against the Virtual Mind reply. Penrose does not believe that computational processes can account for consciousness, both on Chinese Room grounds and because of limitations on formal systems revealed by Kurt Gödel's incompleteness proof. (Penrose has two books on mind and consciousness; Chalmers and others have responded to Penrose's appeals to Gödel.) In his 2002 article “Consciousness, Computation, and the Chinese Room” that specifically addresses the Chinese Room argument, Penrose argues that the Chinese Gym variation—with a room expanded to the size of India, with Indians doing the processing—shows it is very implausible to hold there is “some kind of disembodied ‘understanding’ associated with the person's carrying out of that algorithm, and whose presence does not impinge in any way upon his own consciousness” (230–1). Penrose concludes the Chinese Room argument refutes Strong AI. Christian Kaernbach (2005) reports that he subjected the virtual mind theory to an empirical test, with negative results.

4.2 The Robot Reply

The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger—Searle's example of something the room operator would not know. It seems reasonable to hold that we know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot—a computer with a body—could do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language. Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey are among those who have endorsed versions of this reply at one time or another. The Robot Reply in effect appeals to “wide content” or “externalist semantics”. This can agree with Searle that syntax and internal connections are insufficient for semantics, while holding that suitable causal connections with the world can provide content to the internal symbols.

Searle does not think this reply to the Chinese Room argument is any stronger than the Systems Reply. All the sensors do is provide additional input to the computer—and it will be just syntactic input. We can see this by making a parallel change to the Chinese Room scenario. Suppose the man in the Chinese Room receives, in addition to the Chinese characters slipped under the door, a stream of binary digits that appear, say, on a ticker tape in a corner of the room. The instruction books are augmented to use the numerals from the tape as input, along with the Chinese characters. Unbeknownst to the man in the room, the symbols on the tape are the digitized output of a video camera (and possibly other sensors). Searle argues that additional syntactic inputs will do nothing to allow the man to associate meanings with the Chinese characters. It is just more work for the man in the room.
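
Searle's point can be put concretely: the digitized camera feed arrives as just another symbol stream, handled by the same purely formal matching. Here is a minimal sketch in the spirit of the toy rulebook above; the pairing of characters with digit strings is an invented illustration.

    # The "ticker tape": camera output arrives as digits; the operator's rules
    # now key on (characters, digits) pairs, but remain purely formal.
    augmented_rulebook = {
        ("这是什么？", "101100"): "这是一个汉堡。",   # illustrative entry
    }

    def chinese_room_with_sensors(characters, tape_digits):
        # Same shape-matching as before; the extra input is just more syntax.
        return augmented_rulebook.get((characters, tape_digits), "不知道。")

    print(chinese_room_with_sensors("这是什么？", "101100"))

Nothing in the lookup tells the operator that the digits came from a camera pointed at a hamburger; whether the external causal connection itself supplies content, as externalist critics urge, is the point in dispute.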

Jerry Fodor, Hilary Putnam, and David Lewis were principal architects of the computational theory of mind that Searle's wider argument attacks. In his original 1980 reply to Searle, Fodor allows Searle is certainly right that “instantiating the same program as the brain does is not, in and of itself, sufficient for having those propositional attitudes characteristic of the organism that has the brain.” But Fodor holds that Searle is wrong about the robot reply. A computer might have propositional attitudes if it has the right causal connections to the world—but those are not ones mediated by a man sitting in the head of the robot. We don't know what the right causal connections are. Searle commits the fallacy of inferring from “the little man is not the right causal connection” that no causal linkage would succeed. There is considerable empirical evidence that mental processes involve “manipulation of symbols”; Searle gives us no alternative explanation (this is sometimes called Fodor's “Only Game in Town” argument for computational approaches). Since this time, Fodor has written extensively on what the connections must be between a brain state and the world for the state to have intentional (representational) properties, while most recently emphasizing that computationalism has limits because the computations are intrinsically local and so cannot account for abductive reasoning.

In a later piece, “Yin and Yang in the Chinese Room” (in Rosenthal 1991 pp.524–525), Fodor substantially revises his 1980 view. He distances himself from his earlier version of the robot reply, and holds instead that “instantiation” should be defined in such a way that the symbol must be the proximate cause of the effect—no intervening guys in a room. So Searle in the room is not an instantiation of a Turing Machine, and “Searle's setup does not instantiate the machine that the brain instantiates.” He concludes: “…Searle's setup is irrelevant to the claim that strong equivalence to a Chinese speaker's brain is ipso facto sufficient for speaking Chinese.” Searle says of Fodor's move, “Of all the zillions of criticisms of the Chinese Room argument, Fodor's is perhaps the most desperate. He claims that precisely because the man in the Chinese room sets out to implement the steps in the computer program, he is not implementing the steps in the computer program. He offers no argument for this extraordinary claim.” (in Rosenthal 1991, p. 525)

In a 1986 paper, Georges Rey advocated a combination of the system and robot reply, after noting that the original Turing Test is insufficient as a test of intelligence and understanding, and that the isolated system Searle describes in the room is certainly not functionally equivalent to a real Chinese speaker sensing and acting in the world. In a 2002 second look, “Searle's Misunderstandings of Functionalism and Strong AI”, Rey again defends functionalism against Searle, and in the particular form Rey calls the “computational-representational theory of thought—CRTT”. CRTT is not committed to attributing thought to just any system that passes the Turing Test (like the Chinese Room). Nor is it committed to a conversation manual model of understanding natural language. Rather, CRTT is concerned with intentionality, natural and artificial (the representations in the system are semantically evaluable—they are true or false, hence have aboutness). Searle saddles functionalism with the “blackbox” character of behaviorism, but functionalism cares how things are done. Rey sketches “a modest mind”—a CRTT system that has perception, can make deductive and inductive inferences, makes decisions on the basis of goals and representations of how the world is, and can process natural language by converting to and from its native representations. To explain the behavior of such a system we would need to use the same attributions needed to explain the behavior of a normal Chinese speaker.

Tim Crane discusses the Chinese Room argument in his 1991 book, The Mechanical Mind. He cites the Churchlands' luminous room analogy, but then goes on to argue that in the course of operating the room, Searle would learn the meaning of the Chinese: “…if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean.”(127). (Rapaport 2006 presses an analogy between Helen Keller and the Chinese Room.) Crane appears to end with a version of the Robot Reply: “Searle's argument itself begs the question by (in effect) just denying the central thesis of AI—that thinking is formal symbol manipulation. But Searle's assumption, none the less, seems to me correct … the proper response to Searle's argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But if you let the outside world have some impact on the room, meaning or ‘semantics' might begin to get a foothold. But of course, this concedes that thinking cannot be simply symbol manipulation.” (129)

Margaret Boden 1988 also argues that Searle mistakenly supposes programs are pure syntax. But programs bring about the activity of certain machines: “The inherent procedural consequences of any computer program give it a toehold in semantics, where the semantics in question is not denotational, but causal.” (250) Thus a robot might have causal powers that enable it to refer to a hamburger.

Stevan Harnad also finds important our sensory and motor capabilities: “Who is to say that the Turing Test, whether conducted in Chinese or in any other language, could be successfully passed without operations that draw on our sensory, motor, and other higher cognitive capacities as well? Where does the capacity to comprehend Chinese begin and the rest of our mental competence leave off?” Harnad believes that symbolic functions must be grounded in “robotic” functions that connect a system with the world. And he thinks this counts against symbolic accounts of mentality, such as Jerry Fodor's, and, one suspects, the approach of Roger Schank that was Searle's original target. Harnad 2012 (Other Internet Resources) argues that the CRA shows that even with a robot with symbols grounded in the external world, there is still something missing: feeling, such as the feeling of understanding.

4.3 The Brain Simulator Reply

Consider a computer that operates in quite a different manner than the usual AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator reply asks us to suppose instead the program simulates the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese—every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below.

In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker's brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. (Note however that the basis for this claim is no longer simply that Searle himself wouldn't understand Chinese – it seems clear that now he is just facilitating the causal operation of the system and so we rely on our Leibnizian intuition that water-works don't understand (see also Maudlin 1989).) Searle concludes that a simulation of brain activity is not the real thing.

However, following Pylyshyn 1980, Cole and Foelber 1984, Chalmers 1996, we might wonder about hybrid systems. Pylyshyn writes:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.

These cyborgization thought experiments can be linked to the Chinese Room. Suppose Otto has a neural disease that causes one of the neurons in his brain to fail, but surgeons install a tiny remotely controlled artificial neuron, a synron, alongside his disabled neuron. The control of Otto's neuron is by John Searle in the Chinese Room, unbeknownst to both of them. Tiny wires connect the artificial neuron to the synapses on the cell-body of his disabled neuron. When his artificial neuron is stimulated by neurons that synapse on his disabled neuron, a light goes on in the Chinese Room. Searle then manipulates some valves and switches in accord with a program. That, via the radio link, causes Otto's artificial neuron to release neuro-transmitters from its tiny artificial vesicles. If Searle's programmed activity causes Otto's artificial neuron to behave just as his disabled natural neuron once did, the behavior of the rest of Otto's nervous system will be unchanged. Alas, Otto's disease progresses; more neurons are replaced by synrons controlled by Searle. Ex hypothesi the rest of the world will not notice the difference; will Otto?
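
The scenario turns on replacing a unit while holding its input-output function fixed. Here is a minimal sketch in Python, using an invented threshold model of a unit; nothing here is meant to be neurally realistic.

    # Two implementations of the same input-output function: sum the inputs, fire at threshold.
    def natural_neuron(inputs, threshold=2):
        return sum(inputs) >= threshold

    def synron(inputs, threshold=2):
        # Remotely controlled in the story; functionally identical here by stipulation.
        return sum(inputs) >= threshold

    for pattern in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
        assert natural_neuron(pattern) == synron(pattern)
    print("The replacement preserves the unit's input-output function on these test patterns.")

Functional equivalence at the unit level is what the thought experiment stipulates; whether Otto's experience changes as the replacements accumulate is the question it leaves open.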

Under the rubric “The Combination Reply”, Searle also considers a system with the features of all three of the preceding: a robot with a digital brain-simulating computer in its cranium, such that the system as a whole behaves indistinguishably from a human. Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind such a combination of brain simulation, Robot, and Systems Reply. Some (e.g. Rey 1986) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would be reasonable to attribute understanding to such an android system—but only as long as you don't know how it works. As soon as you know the truth—it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning—you would cease to attribute intentionality to it.

(One assumes this would be true even if it were one's spouse, with whom one had built a life-long relationship, that was revealed to hide a silicon secret. Science fiction stories, including episodes of Rod Serling's television series The Twilight Zone, have been based on such possibilities; Steven Pinker (1997) mentions one episode in which the android's secret was known from the start, but the protagonist developed a romantic relationship with the android.)

On its tenth anniversary the Chinese Room argument was featured in the general science periodical Scientific American. Leading the opposition to Searle's lead article in that issue were philosophers Paul and Patricia Churchland. The Churchlands agree with Searle that the Chinese Room does not understand Chinese, but hold that the argument itself exploits our ignorance of cognitive and semantic phenomena. They raise a parallel case of “The Luminous Room” where someone waves a magnet and argues that the absence of resulting visible light shows that Maxwell's electromagnetic theory is false. The Churchlands advocate a view of the brain as a connectionist system, a vector transformer, not a system manipulating symbols according to structure-sensitive rules. The system in the Chinese Room uses the wrong computational strategies. Thus they agree with Searle against traditional AI, but they presumably would endorse what Searle calls “the Brain Simulator Reply”, arguing that, as with the Luminous Room, our intuitions fail us when considering such a complex system, and it is a fallacy to move from part to whole: “… no neuron in my brain understands English, although my whole brain does.”

In his 1991 book Microcognition, Andy Clark holds that Searle is right that a computer running Schank's program does not know anything about restaurants, “at least if by ‘know’ we mean anything like ‘understand’”. But Searle thinks that this would apply to any computational model, while Clark, like the Churchlands, holds that Searle is wrong about connectionist models. Clark's interest is thus in the brain-simulator reply. The brain thinks in virtue of its physical properties. What physical properties of the brain are important? Clark answers that what is important about brains are “variable and flexible substructures” which conventional AI systems lack. But that doesn't mean computationalism or functionalism is false. It depends on what level you take the functional units to be. Clark defends “microfunctionalism”—one should look to a fine-grained functional description, e.g. neural net level. Clark cites William Lycan approvingly contra Block's absent qualia objection—yes, there can be absent qualia, if the functional units are made large. But that does not constitute a refutation of functionalism generally. So Clark's views are not unlike the Churchlands', conceding that Searle is right about Schank and symbolic-level processing systems, but holding that he is mistaken about connectionist systems.

Similarly Ray Kurzweil (2002) argues that Searle's argument could be turned around to show that human brains cannot understand—the brain succeeds by manipulating neurotransmitter concentrations and other mechanisms that are in themselves meaningless. In criticism of Searle's response to the Brain Simulator Reply, Kurzweil says: “So if we scale up Searle's Chinese Room to be the rather massive ‘room’ it needs to be, who's to say that the entire system of a hundred trillion people simulating a Chinese Brain that knows Chinese isn't conscious? Certainly, it would be correct to say that such a system knows Chinese. And we can't say that it is not conscious anymore than we can say that about any other process. We can't know the subjective experience of another entity….”

4.4 The Other Minds Reply

Related to the preceding is The Other Minds Reply: “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

Searle's (1980) reply to this is very short:

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting extra-terrestrial alien understands, which is the same as the evidence that a robot understands, the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, is specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett, in his discussions of what he calls the Intentional Stance).

Searle raises the question of just what we are attributing in attributing understanding to other minds, saying that it is more than complex behavioral dispositions. For Searle the additional seems to be certain states of consciousness, as is seen in his 2010 summary of the CRA conclusions. Terry Horgan (2013) endorses this claim: “the real moral of Searle's Chinese room thought experiment is that genuine original intentionality requires the presence of internal states with intrinsic phenomenal character that is inherently intentional…” But this tying of understanding to phenomenal consciousness raises a host of issues.

We attribute limited understanding of language to toddlers, dogs, and other animals, but it is not clear that we are ipso facto attributing unseen states of subjective consciousness – what do we know of the hidden states of exotic creatures? Ludwig Wittgenstein (the Private Language Argument) and his followers pressed similar points. Altered qualia possibilities, analogous to the inverted spectrum, arise: suppose I ask “what's the sum of 5 and 7” and you respond “the sum of 5 and 7 is 12”, but as you heard my question you had the conscious experience of hearing and understanding “what is the sum of 10 and 14”, though you were in the computational states appropriate for producing the correct sum and so said “12”. Are there certain conscious states that are “correct” for certain functional states? The underlying problem of epiphenomenality is one familiar from inverted spectrum problems – it is difficult to see what subjective consciousness adds if it is not itself functionally important.

In the 30 years since the CRA there has been philosophical interest in zombies – creatures that look like and behave just as normal humans, including linguistic behavior, yet have no subjective consciousness. A difficulty for claiming that subjective states of consciousness are crucial for understanding meaning will arise in these cases of absent qualia: we can't tell the difference between zombies and non-zombies, and so on Searle's account we can't tell the difference between those that really understand English and those that don't. But then there appears to be a distinction without a difference. In any case, Searle's short reply to the Other Minds Reply may be too short.

Descartes argued famously that speech was sufficient for attributing minds and consciousness to others, and argued infamously that it was necessary. Turing was in effect endorsing Descartes' sufficiency condition, at least for intelligence, while substituting written for oral linguistic behavior. Since most of us use dialog as a sufficient condition for attributing understanding, while Searle holds that speech is a sufficient condition for humans (even in states of sleep-walking or stroke?) but not for anything that does not share our biology, an account would appear to be required of what additionally is being attributed, and what can justify the additional attribution. Further, if being con-specific is key on Searle's account, a natural question arises as to what circumstances would justify us in attributing understanding (or consciousness) to extra-terrestrial aliens who do not share our biology. Offending ETs by withholding attributions of understanding until after a post-mortem may be risky.

Hans Moravec, director of the Robotics laboratory at Carnegie Mellon University, and author of Robot: Mere Machine to Transcendent Mind, argues that Searle's position merely reflects intuitions from traditional philosophy of mind that are out of step with the new cognitive science. Moravec endorses a version of the Other Minds reply. It makes sense to attribute intentionality to machines for the same reasons it makes sense to attribute it to humans; his “interpretative position” is similar to the views of Daniel Dennett. Moravec goes on to note that one of the things we attribute to others is the ability to make attributions of intentionality, and then we make such attributions to ourselves. It is such self-representation that is at the heart of consciousness. These capacities appear to be implementation independent, and hence possible for aliens and suitably programmed computers.

Presumably the reason that Searle thinks we can disregard the evidence in the case of robots and computers is that we know that their processing is syntactic, and this fact trumps all other considerations. Indeed, Searle believes this is the larger point that the Chinese Room merely illustrates. This larger point is addressed in the Syntax and Semantics section below.

4.5 The Intuition Reply

Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding. For example, Ned Block (1980) in his original BBS commentary says “Searle's argument depends for its force on intuitions that certain entities do not think.” But, Block argues, (1) intuitions sometimes can and should be trumped and (2) perhaps we need to bring our concept of understanding in line with a reality in which certain computer robots belong to the same natural kind as humans. Similarly Margaret Boden (1988) points out that we can't trust our untutored intuitions about how mind depends on matter; developments in science may change our intuitions. Indeed, elimination of bias in our intuitions was what motivated Turing (1950) to propose the Turing Test, a test that was blind to the physical character of the system replying to questions. Some of Searle's critics in effect argue that he has merely pushed the reliance on intuition back, into the room.

Critics argue that our intuitions regarding both intelligence and understanding may be unreliable, and perhaps incompatible even with current science. With regard to understanding, Steven Pinker, in How the Mind Works (1997), holds that “… Searle is merely exploring facts about the English word understand…. People are reluctant to use the word unless certain stereotypical conditions apply…” But, Pinker claims, nothing scientifically speaking is at stake. Pinker objects to Searle's appeal to the “causal powers of the brain” by noting that the apparent locus of the causal powers is the “patterns of interconnectivity that carry out the right information processing”. Pinker ends his discussion by citing a science fiction story in which Aliens, anatomically quite unlike humans, cannot believe that humans think when they discover that our heads are filled with meat. The Aliens' intuitions are unreliable—presumably ours may be so as well.

Other critics are also concerned with how our understanding of understanding bears on the Chinese Room argument. In their paper “A Chinese Room that Understands” AI researchers Simon and Eisenstadt (2002) argue that whereas Searle refutes “logical strong AI”, the thesis that a program that passes the Turing Test will necessarily understand, Searle's argument does not impugn “Empirical Strong AI”—the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding. They hold however that it is impossible to settle these questions “without employing a definition of the term ‘understand’ that can provide a test for judging whether the hypothesis is true or false”. They cite W.V.O. Quine's Word and Object as showing that there is always empirical uncertainty in attributing understanding to humans. The Chinese Room is a Clever Hans trick (Clever Hans was a horse who appeared to clomp out the answers to simple arithmetic questions, but it was discovered that Hans could detect unconscious cues from his trainer). Similarly, the man in the room doesn't understand Chinese, and could be exposed by watching him closely. (Simon and Eisenstadt do not explain just how this would be done, or how it would affect the argument.) Citing the work of Rudolf Carnap, Simon and Eisenstadt argue that to understand is not just to exhibit certain behavior, but to use “intensions” that determine extensions, and that one can see in actual programs that they do use appropriate intensions. They discuss three actual AI programs, and defend various attributions of mentality to them, including understanding, and conclude that computers understand; they learn “intensions by associating words and other linguistic structure with their denotations, as detected through sensory stimuli”. And since we can see exactly how the machines work, “it is, in fact, easier to establish that a machine exhibits understanding than to establish that a human exhibits understanding….” Thus, they conclude, the evidence for empirical strong AI is overwhelming.

Similarly, Daniel Dennett in his original 1980 response to Searle's argument called it “an intuition pump”, a term he came up with in discussing the CRA with Hofstadter. Dennett's considered view (2013) is that the CRA is “clearly a fallacious and misleading argument ….” (p. 320). Paul Thagard (2013) proposes that for every thought experiment in philosophy there is an equal and opposite thought experiment. Thagard holds that intuitions are unreliable, and the CRA is an example (and that in fact the CRA has now been refuted by the technology of autonomous robotic cars). Dennett has elaborated on concerns about our intuitions regarding intelligence. Dennett 1987 (“Fast Thinking”) expressed concerns about the slow speed at which the Chinese Room would operate, and he has been joined by several other commentators, including Tim Maudlin, David Chalmers, and Steven Pinker. The operator of the Chinese Room may eventually produce appropriate answers to Chinese questions. But slow thinkers are stupid, not intelligent—and in the wild, they may well end up dead. Dennett argues that “speed … is ‘of the essence’ for intelligence. If you can't figure out the relevant portions of the changing environment fast enough to fend for yourself, you are not practically intelligent, however complex you are” (326). Thus Dennett relativizes intelligence to processing speed relative to current environment. Tim Maudlin (1989) disagrees. Maudlin considers the time-scale problem pointed to by other writers, and concludes, contra Dennett, that the extreme slowness of a computational system does not violate any necessary conditions on thinking or consciousness.

Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: “The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)” (94–95). Howard Gardner, a supporter of Searle's conclusions regarding the room, makes a similar point about understanding. Gardner addresses the Chinese Room argument in his book The Mind's New Science (1985, 171–177). Gardner considers all the standard replies to the Chinese Room argument and concludes that Searle is correct about the room: “…the word understand has been unduly stretched in the case of the Chinese room ….” (175).

Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett).

5. The Larger Philosophical Issues

5.1 Syntax and Semantics

Searle believes the Chinese Room argument supports a larger point, which explains the failure of the Chinese Room to produce understanding. Searle argued that programs implemented by computers are just syntactical. Computer operations are “formal” in that they respond only to the physical form of the strings of symbols, not to the meaning of the symbols. Minds on the other hand have states with meaning, mental contents. We associate meanings with the words or signs in language. We respond to signs because of their meaning, not just their physical appearance. In short, we understand. But, and according to Searle this is the key point, “Syntax is not by itself sufficient for, nor constitutive of, semantics.” So although computers may be able to manipulate syntax to produce appropriate responses to natural language input, they do not understand the sentences they receive or output, for they cannot associate meanings with the words.

Searle (1984) presents a three-premise argument that because syntax is not sufficient for semantics, programs cannot produce minds.

  1. Programs are purely formal (syntactic).
  2. Human minds have mental contents (semantics).
  3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
  4. Therefore, programs by themselves are not constitutive of nor sufficient for minds.

The Chinese Room thought experiment itself is the support for the third premise. The claim that syntactic manipulation is not sufficient for meaning or thought is a significant issue, with wider implications than AI, or attributions of understanding. Prominent theories of mind hold that human cognition generally is computational. In one form, it is held that thought involves operations on symbols in virtue of their physical properties. On an alternative connectionist account, the computations are on “subsymbolic” states. If Searle is right, not only Strong AI but also these main approaches to understanding human cognition are misguided.

As we have seen, Searle holds that the Chinese Room scenario shows that one cannot get semantics from syntax alone. In formal logic systems, a kind of artificial language, rules are given for syntax, and this procedure appears to be quite independent of semantics. The logician specifies the basic symbol set and some rules for manipulating strings to produce new ones. These rules are purely formal or syntactic—they are applied to strings of symbols solely in virtue of their syntax or form. A semantics, if any, for the symbol system must be provided separately. And if one wishes to show that interesting additional relationships hold between the syntactic operations and semantics, such as that the symbol manipulations preserve truth, one must provide sometimes complex meta-proofs to show this. So on the face of it, semantics is quite independent of syntax for artificial languages, and one cannot get semantics from syntax alone. “Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them” (Searle 1989, 45).
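
A toy formal system makes the independence vivid: the rewrite rules below operate on strings solely by their shape, and any semantics (say, reading the strings as claims about numbers) would have to be supplied separately. The rules are invented for illustration, in the spirit of familiar string-rewriting systems.

    # A toy formal system: an axiom plus purely syntactic rewrite rules.
    # Rule 1: a string ending in "I" may have "U" appended.
    # Rule 2: "M" followed by x may be rewritten as "M" followed by xx.
    # Nothing here assigns the strings any meaning.
    def rule_append_u(s):
        return s + "U" if s.endswith("I") else None

    def rule_double(s):
        return "M" + s[1:] * 2 if s.startswith("M") else None

    def derive(axiom, rule_sequence):
        """Apply rules in order, purely by string shape; fail if a rule does not apply."""
        s = axiom
        for rule in rule_sequence:
            s = rule(s)
            if s is None:
                raise ValueError("rule not applicable")
        return s

    print(derive("MI", [rule_double, rule_append_u]))   # prints MIIU

Showing that derivations in such a system track anything about meaning or truth would require a separately supplied interpretation and a meta-proof, which is just the independence of syntax from semantics that the passage describes.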

Searle's identification of meaning with interpretation in this passage is important. Searle's point is clearly true of the causally inert formal systems of logicians. When we move from formal systems to computational systems, the situation is more complex. As many of Searle's critics (e.g. Cole 1984, Dennett 1987, Boden 1988, and Chalmers 1996) have noted, a computer running a program is not the same as “syntax alone”. A computer is an enormously complex electronic causal system. State changes in the system are physical. One can interpret the physical states, e.g. voltages, as syntactic 1's and 0's, but the intrinsic reality is electronic and the syntax is “derived”, a product of interpretation. The states are syntactically specified by programmers, but they are fundamentally states of a complex causal system embedded in the real world. This is quite different from the abstract formal systems that logicians study. Dennett notes that no “computer program by itself” (Searle's language)—e.g. a program lying on a shelf—can cause anything, even simple addition, let alone mental states. The program must be running. Chalmers (1996) offers a parody in which it is reasoned that recipes are syntactic, syntax is not sufficient for crumbliness, cakes are crumbly, so implementation of a recipe is not sufficient for making a cake. Dennett (1987) sums up the issue: “Searle's view, then, comes to this: take a material object (any material object) that does not have the power of causing mental phenomena; you cannot turn it into an object that does have the power of producing mental phenomena simply by programming it—reorganizing the conditional dependencies of transitions between its states.” Dennett's view is the opposite: programming “is precisely what could give something a mind”. But Dennett claims that in fact it is “empirically unlikely that the right sorts of programs can be run on anything but organic, human brains” (325–6).

A further related complication is that it is not clear that computers perform syntactic operations in quite the same sense that a human does—it is not clear that a computer understands syntax or syntactic operations. A computer does not know that it is manipulating 1's and 0's. A computer does not recognize that its binary data strings have a certain form, and thus that certain syntactic rules may be applied to them, unlike the man inside the Chinese Room. Inside a computer, there is nothing that literally reads input data, or that “knows” what symbols are. Instead, there are millions of transistors that change states. A sequence of voltages causes operations to be performed. We humans may choose to interpret these voltages as binary numerals and the voltage changes as syntactic operations, but a computer does not interpret its operations as syntactic or any other way. So perhaps a computer does not need to make the move from syntax to semantics that Searle objects to; it needs to move from complex causal connections to semantics. Furthermore, perhaps any causal system is describable as performing syntactic operations—if we interpret a light square as logical “0” and a dark square as logical “1”, then a kitchen toaster may be described as a device that rewrites logical “0”s as logical “1”s.
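
The toaster example can be made explicit: the “syntax” lives in a mapping the observer chooses, not in the device. Here is a minimal sketch with an invented interpretation table.

    # A physical process (toasting) and an observer-imposed interpretation of its states.
    def toast(slice_state):
        # The physics: a light slice comes out dark.
        return "dark" if slice_state == "light" else slice_state

    interpretation = {"light": "0", "dark": "1"}   # chosen by the observer, not by the toaster

    def as_computation(slice_state):
        # Under this mapping, toasting "rewrites 0 as 1"; under another mapping it would not.
        return interpretation[toast(slice_state)]

    print(as_computation("light"))   # prints 1

Whether a genuine computer differs from the toaster because its state-transitions support the right counterfactuals is precisely the issue Chalmers, Block, and Haugeland press in the following paragraphs.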

In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is “observer-relative”, not an intrinsic feature of reality: “…you can assign a computational interpretation to anything” (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. “Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not.” (Searle 2002b, p.17, originally published 1993).

Critics note that walls are not computers; unlike a wall, a computer goes through state-transitions that are counterfactually described by a program (Chalmers 1996, Block 2002, Haugeland 2002). In his 2002 paper, Block addresses the question of whether a wall is a computer (in reply to Searle's charge that anything that maps onto a formal system is a formal system, whereas minds are quite different). Block denies that whether or not something is a computer depends entirely on our interpretation. Block notes that Searle ignores the counterfactuals that must be true of an implementing system. Haugeland (2002) makes the similar point that an implementation will be a causal process that reliably carries out the operations—and they must be the right causal powers. Block concludes that Searle's arguments fail, but he concedes that they “do succeed in sharpening our understanding of the nature of intentionality and its relation to computation and representation” (78).
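The role of counterfactuals here can be illustrated with a toy sketch; the state names and transition table below are invented, not drawn from Block, Chalmers, or Haugeland. A genuine implementation carries a transition rule that settles what the system would have done on inputs it never received, whereas a mapping from a wall's molecule movements onto one actual run of a program covers only the states that happened to occur.

```python
# Toy sketch only: a transition table defined for every legal (state, input)
# pair, so it settles counterfactuals about inputs that never arrive.

TRANSITIONS = {
    ("idle", 0): "idle",
    ("idle", 1): "counting",
    ("counting", 0): "idle",
    ("counting", 1): "done",
}

def step(state, bit):
    """A genuine implementation answers this even for unobserved inputs."""
    return TRANSITIONS[(state, bit)]

# The actual history exercised only one path through the machine...
actual_run = [("idle", 1), ("counting", 1)]

# ...but the implementation still fixes what would have happened otherwise,
# which is exactly what a post hoc mapping onto a wall's molecules leaves open.
print(step("idle", 0))      # "idle"
print(step("counting", 0))  # "idle"
```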

Rey (2002) also addresses Searle's arguments that syntax and symbols are observer-relative properties, not physical. Searle infers this from the fact that they are not defined in physics; it does not follow that they are observer-relative. Rey argues that Searle also misunderstands what it is to realize a program. Rey endorses Chalmers' reply to Putnam: a realization is not just a structural mapping, but involves causation, supporting counterfactuals. “This point is missed so often, it bears repeating: the syntactically specifiable objects over which computations are defined can and standardly do possess a semantics; it's just that the semantics is not involved in the specification.” States of a person have their semantics in virtue of computational organization and their causal relations to the world. Rey concludes: Searle “simply does not consider the substantial resources of functionalism and Strong AI.” (222) A plausibly detailed story would defuse negative conclusions drawn from the superficial sketch of the system in the Chinese Room.

John Haugeland (2002) argues that there is a sense in which a processor must intrinsically understand the commands in the programs it runs: it executes them in accord with the specifications. “The only way that we can make sense of a computer as executing a program is by understanding its processor as responding to the program prescriptions as meaningful” (385). Thus operation symbols have meaning to a system. Haugeland goes on to draw a distinction between narrow and wide systems. He argues that data can have semantics in the wide system that includes representations of external objects produced by transducers. In passing, Haugeland makes the unusual claim, argued for elsewhere, that genuine intelligence and semantics presuppose “the capacity for a kind of commitment in how one lives” which is non-propositional—that is, love (cp. Steven Spielberg's 2001 film Artificial Intelligence: AI).

To Searle's claim that syntax is observer-relative, that the molecules in a wall might be interpreted as implementing the Wordstar program (an early word processing program) because “there is some pattern in the molecule movements which is isomorphic with the formal structure of Wordstar” (Searle 1990b, p. 27), Haugeland counters that “the very idea of a complex syntactical token … presupposes specified processes of writing and reading….” The tokens must be systematically producible and retrievable. So no random isomorphism or pattern somewhere (e.g. on some wall) is going to count, and hence syntax is not observer-relative.

With regard to the question of whether one can get semantics from syntax, William Rapaport has for many years argued for “syntactic semantics”, a view in which understanding is a special form of syntactic structure in which symbols (such as Chinese words) are linked to concepts, themselves represented syntactically. Others believe we are not there yet. AI futurist Ray Kurzweil (author of The Age of Spiritual Machines) holds in a 2002 follow-up book that it is a red herring to focus on traditional symbol-manipulating computers. Kurzweil agrees with Searle that existing computers do not understand language—as evidenced by the fact that they can't engage in convincing dialog. But that failure does not bear on the capacity of future computers based on different technology. Kurzweil claims that Searle fails to understand that future machines will use “chaotic emergent methods that are massively parallel”. This claim appears to be similar to that of connectionists, such as Andy Clark, and the position taken by the Churchlands in their 1990 Scientific American article.

Apart from Haugeland's claim that processors understand program instructions, Searle's critics can agree that computers no more understand syntax than they understand semantics, although, like all causal engines, a computer has syntactic descriptions. And while it is often useful to programmers to treat the machine as if it performed syntactic operations, it is not always so: sometimes the characters programmers use are just switches that make the machine do something, for example, make a given pixel on the computer display turn red, or make a car transmission shift gears. Thus it is not clear that Searle is correct when he says a digital computer is just “a device which manipulates symbols”. Computers are complex causal engines, and syntactic descriptions are useful in order to structure the causal interconnections in the machine. AI programmers face many tough problems, but one can hold that they do not have to get semantics from syntax. If they are to get semantics, they must get it from causality.
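The “symbols as switches” point can be put in a few lines of code. The example is purely illustrative (the framebuffer dictionary and handle_key function are invented here): the character “r” is not interpreted by the machine as meaning anything; it simply triggers a causal change in the display state.

```python
# Purely illustrative: the character "r" is not read as meaning "red" by the
# machine; it is just the switch that causes a pixel's state to change.

RED = (255, 0, 0)
framebuffer = {}  # (x, y) -> colour triple, standing in for display hardware

def handle_key(char, x=0, y=0):
    """Map a keystroke straight onto a causal effect; no interpretation involved."""
    if char == "r":
        framebuffer[(x, y)] = RED

handle_key("r", 10, 20)
print(framebuffer[(10, 20)])  # (255, 0, 0)
```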

Two main approaches have developed that explain meaning in terms of causal connections. The internalist approaches, such as Schank's and Rapaport's conceptual representation approaches, and also Conceptual Role Semantics, hold that a state of a physical system gets its semantics from causal connections to other states of the same system. Thus a state of a computer might represent “kiwi” because it is connected to “bird” and “flightless” nodes, and perhaps also to images of prototypical kiwis. The state that represents the property of being “flightless” might get its content from a Negation-operator modifying a representation of “capable of airborne self-propulsion”, and so forth, to form a vast connected conceptual network, a kind of mental dictionary.
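A minimal sketch of the internalist picture might look like the following; the node names and link types are invented for illustration. On this account, the state that represents “kiwi” has its content in virtue of its connections to other internal states, not in virtue of any contact with kiwis in the world.

```python
# Invented sketch of a conceptual network: a state means "kiwi" in virtue of
# its links to other internal states (a kind of mental dictionary).

network = {
    "kiwi": {"is_a": ["bird"], "has": ["flightless"], "images": ["prototypical_kiwi"]},
    "flightless": {"negation_of": ["capable of airborne self-propulsion"]},
    "bird": {"is_a": ["animal"]},
}

def conceptual_role(concept, depth=2):
    """Collect the links that, on this account, constitute the concept's content."""
    if depth == 0 or concept not in network:
        return {}
    return {relation: [(target, conceptual_role(target, depth - 1)) for target in targets]
            for relation, targets in network[concept].items()}

print(conceptual_role("kiwi"))
```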

Externalist approaches developed by Dennis Stampe, Fred Dretske, Hilary Putnam, Jerry Fodor, Ruth Millikan, and others, hold that states of a physical system get their content through causal connections to the external reality they represent. Thus, roughly, a system with a KIWI concept is one that has a state it uses to represent the presence of kiwis in the external environment. This kiwi-representing state can be any state that is appropriately causally connected to the presence of kiwis. Depending on the system, the kiwi-representing state could be a state of a brain, or of an electrical device such as a computer, or even of a hydraulic system. The internal representing state can then in turn play a causal role in determining the behavior of the system. For example, Rey (1986) endorses an indicator semantics along the lines of the work of Dennis Stampe (1977) and Fodor's Psychosemantics. Semantics that emphasize causal connection with the world fit well with the Robot Reply. A computer in a robot body might have just the causal connections that could allow its inner syntactic states to have the semantic property of representing states of things in its environment.
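By contrast, a toy version of the externalist (indicator) picture might look like this; the detector and behaviour routine are invented stand-ins. Here the internal flag counts as a kiwi-representing state because it is reliably caused by kiwis in the robot's environment and goes on to guide the robot's behaviour, which is just the sort of causal connection the Robot Reply appeals to.

```python
# Invented sketch of an indicator semantics: the internal flag represents KIWI
# because it is reliably caused by kiwis in the environment and guides behaviour.

def kiwi_detector(scene):
    """Stand-in for a sensor and classifier causally driven by the environment."""
    return "kiwi" in scene

def robot_step(scene):
    kiwi_present = kiwi_detector(scene)   # candidate kiwi-representing state
    return "approach" if kiwi_present else "keep searching"

print(robot_step({"grass", "kiwi"}))   # approach
print(robot_step({"grass", "rock"}))   # keep searching
```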

Thus there are at least two families of theories (and marriages of the two, as in Block 1986) about how semantics might depend upon causal connections. Both of these attempt to provide accounts that are substance neutral: states of suitably organized causal systems can have content, no matter what the systems are made of. On these theories a computer could have states that have meaning. It is not necessary that the computer be aware of its own states and know that they have meaning, nor that any outsider appreciate the meaning of the states. On either of these accounts meaning depends upon the (possibly complex) causal connections, and digital computers are systems designed to have states that have just such complex causal dependencies. It should be noted that Searle does not subscribe to these theories of semantics. Instead, Searle's discussions of linguistic meaning have often centered on the notion of intentionality.

5.2 Intentionality

Intentionality is the property of being about something, having content. In the 19th century, psychologist Franz Brentano re-introduced this term from Medieval philosophy and held that intentionality was the “mark of the mental”. Beliefs and desires are intentional states: they have propositional content (one believes that p, one desires that p, where sentences substitute for “p”). Searle's views regarding intentionality are complex; of relevance here is that he makes a distinction between the original or intrinsic intentionality of genuine mental states, and the derived intentionality of language. A written or spoken sentence only has derivative intentionality insofar as it is interpreted by someone. It appears that on Searle's view, original intentionality can at least potentially be conscious. Searle then argues that the distinction between original and derived intentionality applies to computers. We can interpret the states of a computer as having content, but the states themselves do not have original intentionality. Many philosophers endorse this intentionality dualism, including Fodor (2009), despite his many differences with Searle.

In a section of her 1988 book, Computer Models of the Mind, Margaret Boden notes that intentionality is not well understood, which is a reason not to put too much weight on arguments that turn on it. Furthermore, insofar as we understand the brain, we focus on informational functions, not unspecified causal powers of the brain: “…from the psychological point of view, it is not the biochemistry as such which matters but the information-bearing functions grounded in it.” (241) Critics responding to Searle have argued that he displays substance chauvinism, in holding that brains understand but systems made of silicon with comparable information processing capabilities cannot, even in principle. Papers on both sides of the issue appeared, such as J. Maloney's 1987 paper “The Right Stuff”, defending Searle, and R. Sharvy's 1985 critique, “It Ain't the Meat, it's the Motion”. AI proponents such as Kurzweil (1999, see also Richards 2002) have continued to hold that AI systems can potentially have such mental properties as understanding, intelligence, consciousness and intentionality, and will exceed human abilities in these areas.

Other critics of Searle's position take intentionality more seriously than Boden does, but deny his dualistic distinction between original and derived intentionality. Dennett (1987, e.g.) argues that all intentionality is derived. Attributions of intentionality—to animals, other people, even ourselves—are instrumental and allow us to predict behavior, but they are not descriptions of intrinsic properties. As we have seen, Dennett is concerned about the slow speed of things in the Chinese Room, but he argues that once a system is working up to speed, it has all that is needed for intelligence and derived intentionality—and derived intentionality is the only kind that there is, according to Dennett. A machine can be an intentional system because intentional explanations work in predicting the machine's behavior. Dennett also suggests that Searle conflates intentionality with awareness of intentionality. In his syntax-semantic arguments, “Searle has apparently confused a claim about the underivability of semantics from syntax with a claim about the underivability of the consciousness of semantics from syntax” (336). We might also worry that Searle conflates meaning and interpretation, and that Searle’s original or underived intentionality is just second-order intentionality, a representation of what an intentional object means. Dretske and others have seen intentionality as information-based. One state of the world, including a state in a computer, may carry information about other states in the world, and this informational aboutness is a mind-independent feature of states. Hence it is a mistake to hold that conscious attributions of meaning are the source of intentionality.

Others have noted that Searle's discussion has shown a shift from issues of intentionality and understanding to issues of consciousness. Searle links intentionality to awareness of intentionality, in that intentional states are at least potentially conscious. In his 1996 book, The Conscious Mind, David Chalmers notes that although Searle originally directs his argument against machine intentionality, it is clear from later writings that the real issue is consciousness, which Searle holds is a necessary condition of intentionality. It is consciousness that is lacking in digital computers. Chalmers uses thought experiments to argue that it is implausible that one system has some basic mental property (such as having qualia) that another system lacks, if it is possible to imagine transforming one system into the other, either gradually (such as by replacing neurons one at a time with digital circuits), or all at once, switching back and forth between flesh and silicon.

A second strategy regarding the attribution of intentionality is taken by externalist critics who in effect argue that intentionality is an intrinsic feature of states of physical systems that are causally connected with the world in the right way, independently of interpretation (see the preceding Syntax and Semantics section). Fodor's semantic externalism is influenced by Fred Dretske, but they come to different conclusions with regard to the semantics of states of computers. Over a period of years, Dretske developed an historical account of meaning or mental content that would preclude attributing beliefs and understanding to most machines. But Dretske (1985) agrees with Searle that adding machines don't literally add; we do the adding, using the machines. Dretske emphasizes the crucial role of natural selection and learning in producing states that have genuine content. Human-built systems will be, at best, like Swampmen (beings that result from a lightning strike in a swamp and by chance happen to be a molecule-by-molecule copy of some human being, say, you)—they appear to have intentionality or mental states, but do not, because such states require the right history. AI states will generally be counterfeits of real mental states; like counterfeit money, they may appear perfectly identical but lack the right pedigree. But Dretske's account of belief appears to make it distinct from conscious awareness of the belief or intentional state (if that is taken to require a higher-order thought), and so would allow attribution of intentionality to systems that can learn.

Howard Gardiner endorses Zenon Pylyshyn's criticisms of Searle's view of the relation of brain and intentionality, as supposing that intentionality is somehow a stuff “secreted by the brain”, and Pylyshyn's own counter-thought experiment in which one's neurons are replaced one by one with integrated circuit workalikes (see also Chalmers 1996 for exploration of neuron-replacement scenarios). Gardiner holds that Searle owes us a more precise account of intentionality than he has given so far, and until then it is an open question whether AI can produce it, or whether it is beyond its scope. Gardiner concludes with the possibility that the dispute between Searle and his critics is not scientific, but (quasi?) religious.

5.3 Mind and Body

Several critics have noted that there are metaphysical issues at stake in the original argument. The Systems Reply draws attention to the metaphysical problem of the relation of mind to body. It does this in holding that understanding is a property of the system as a whole, not the physical implementer. The Virtual Mind Reply holds that minds or persons—the entities that understand and are conscious—are more abstract than any physical system, and that there could be a many-to-one relation between minds and physical systems. Thus larger issues about personal identity and the relation of mind and body are in play in the debate between Searle and some of his critics.

Searle's view is that the problem of the relation of mind and body “has a rather simple solution. Here it is: Conscious states are caused by lower level neurobiological processes in the brain and are themselves higher level features of the brain.” (Searle 2002b, p. 9) In his early discussion of the CR, Searle spoke of the causal powers of the brain. Thus his view appears to be that brain states cause consciousness and understanding, and “consciousness is just a feature of the brain” (ibid).

Consciousness and understanding are features of persons, so it appears that Searle accepts a metaphysics in which I, my conscious self, am identical with my brain—a form of mind-brain identity theory. This very concrete metaphysics is reflected in Searle's original presentation of the CR argument, in which Strong AI was described by him as the claim that “the appropriately programmed computer really is a mind” (Searle 1980). This is an identity claim, and has odd consequences. If A and B are identical, any property of A is a property of B. Computers are physical objects. Some computers weigh 6 lbs and have stereo speakers. So the claim that Searle called Strong AI would entail that some minds weigh 6 lbs and have stereo speakers. However, it seems clear that while humans may weigh 150 pounds, human minds do not weigh 150 pounds. This suggests that neither bodies nor machines can literally be minds. It appears that minds are more abstract than that, and that at least one version of the claim that Searle calls Strong AI, the version that says that computers are minds, is metaphysically untenable on the face of it, apart from any thought-experiments.
