Monday, June 13, 2011

Mailbag Monday: The Chinese Room


Mailbag Monday: A weekly segment that covers readers' questions and concerns about all things Philosophy, Bro, and Philosophy Bro that don't quite fit anywhere else. Send your questions to philosophybro@gmail.com with 'Mailbag Monday' in the subject line.

--



Steven writes, 



What's good Philosophy Bro?
What are your thoughts on the Chinese room experiment by John Searle?  Seems like a problem that only PB could really get to the root of.
Steven, if it weren't summer I'd think you're trying to get me to write an essay for you, but today's your lucky day. The Chinese Room thought experiment is one of the most important thought experiments in philosophy of mind, especially in the project of duplicating intelligence. It's interesting as hell, too, which definitely helps your case.

Before I get into the experiment itself, here's the problem Searle is addressing - what does it mean for a machine to understand something? Is it possible we will ever produce a machine that can think on its own, that can learn and truly understand English? Offhand, it seems possible; after all, if we build a robot that speaks English as well as we do, who understands the nuances of the language, whom we just can't tell is a robot, then what exactly is the difference between his claim to know English and your best friend's claim to know English? This is the basis of the famous 'Turing Test' for AI - when we reach the point where we can no longer tell a human from a computer just by talking to them, then we have truly achieved Artificial Intelligence.
That makes a lot of sense. It's closely related to the problem of other minds - if we can't even be sure what other people's thinking looks like, how the fuck can we say anything about what it means for a robot to think? Entire worlds of science fiction are built around robots who mirror humanity in all the important ways - they act hurt when they should act hurt, they act sad when they should act sad, and they can tell hilarious jokes. And as we get better at simulating systems, some bros have claimed that it won't be long before computers have minds in exactly the same sense as we do. Maybe we can replace Dane Cook with a robot that's exactly like him, except funny. Wouldn't that be nice?

John Searle is not amused with these shenanigans. So, for the uninitiated, the Chinese Room thought experiment runs something like this:
A bro who doesn't know Chinese is in a room with a bunch of files, some pieces of paper, some pencils, and two slots in the wall: "In" and "Out". The files are tables pairing "In" Chinese symbols with "Out" Chinese symbols. When a Chinese symbol comes in the "In" slot, the bro looks up the symbol in a table, sketches out the corresponding "Out" symbol on a piece of paper, and feeds it through the "Out" slot.
That's it. Some people who do know Chinese are asking this bro some questions, and he's answering them correctly, even though he has no fucking clue what he's reading or what he's writing back. Now, can we say that this bro knows Chinese? Searle says, "Hell no." And it's a pretty convincing argument - this chump is just drawing pictures and passing them out. Someone could have just asked him why he's such an asshole all the time, and he'd have replied, appropriately, "I'm sure I don't know what you mean. Sorry to have offended you." And he's not sorry at all, because he has no idea he offended anyone. He's just, you know, a guy in a room doin' his job.


And that's all machines are really doing - consulting tables of instructions, and outputting what they're told to output. That's all a machine (in the proper sense of a Turing Machine) ever does. Sure, maybe it looks like the robot is actually thinking, or even having emotions - just like if someone fed "your wife died today" into the Chinese room, they'd receive a truly heart-wrenching elegy in return - but all that's really happening is the bro inside is copying over some symbols. Once he's fed that elegy back through the slot, our intrepid hero will kick back and return to his crossword puzzle without a hint of emotion.
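If it helps to see how brain-dead the room's job really is, here's a toy sketch of the whole setup as a lookup table. To be clear, this is just an illustration I'm adding, not anything out of Searle's paper, and the handful of symbol pairs in it are made up:

```python
# Toy sketch of the Chinese Room as pure symbol lookup (illustration only).
# The "rule book" below is a tiny made-up table; a real room would need a
# staggeringly huge one, but the principle is the same: match, copy, pass out.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    # "How are you?" -> "I'm fine, thanks."
    "你为什么总是这么混蛋？": "我不明白你的意思。抱歉冒犯了你。",
    # "Why are you such an asshole all the time?" ->
    # "I don't know what you mean. Sorry to have offended you."
}

def chinese_room(symbol_in: str) -> str:
    """Take a symbol from the 'In' slot, look it up, push the match out the 'Out' slot."""
    return RULE_BOOK.get(symbol_in, "？")  # nothing in the table? shrug a '？' back out

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # a perfectly fluent answer, and nothing anywhere understood a word
```

Every "answer" the room gives is one dictionary lookup; there's nowhere in that code for understanding to hide, which is exactly the intuition Searle is pumping about the bro.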


Some bros suggest that somehow either the man, or some larger system that includes the man - the whole room, tables and all - does constitute a mind that understands Chinese after all, but Searle just says, "So let the bro memorize the tables and run the whole thing in his head. Now the system is just him, and you have to say that even though he doesn't know Chinese, some part of him does. Which is just fucking gibberish. Truly embarrassing." 


But of course, even if everything about the experiment functions as described, it's not an open and shut case. Go figure. The 'standard' interpretation, which is to say, Searle's interpretation, is that there is more to the mind than just syntax; we can't duplicate it through complicated formalisms. But what if, instead of trying to say computers have minds, we turn it around and say that minds aren't as complicated as we thought? What if the experiment is a perfect demonstration of something else? This is the route that Daniel Dennett takes; rather than saying that the experiment is somehow flawed in its design, which is a tall fucking order thanks to its elegant simplicity, he points out a crucial assumption of Searle's: that the mind itself has understanding in a way the Chinese room doesn't.


What if what we call 'consciousness' is an illusion, a red herring? You know what it feels like to think something, right? To feel an emotion? If you found out you won the lottery, your heart rate would accelerate, you'd sweat a lot, and you'd have the urge to jump for joy. Maybe you'd cry. You'd be forgiven - it's a lot of fucking money. Except all of those things are traceable to physical processes, as is every output you produce to any input. It feels like something special to know what someone says in a way that the man in the room doesn't seem to know - when someone tells you that you won the lottery, you don't just sketch a corresponding symbol that says, "FUCK YES," you genuinely feel that, right? Well, maybe. Or maybe that 'emotion' is just a byproduct of the way the brain processes things. This turns the thought experiment back on its head - what if the instructions are just so detailed that we only feel emotions because they're part of the instructions? Are you sure what we want to call 'consciousness' isn't just more detailed steps?


Of course, that argument has some assumptions too - for example, that our thoughts are indeed traceable to deterministic, mechanistic processes. And, while Dennett postulates a working model of the mind that makes consciousness an illusion, it's not clear that his model is the right one - the experience we call consciousness is pretty fucking convincing.
-- 
As usual, there's a lot more to this question than I can cover here. Isn't this fun?


You can read Searle's draft of the original paper online: Minds, Brains, and Programs [PDF]


Dennett discusses the Chinese Room and other thought experiments in philosophy of mind in his excellent Consciousness Explained.


The Wikipedia page on the Chinese Room has a good explanation of the room and goes into much more detail about various objections or reframings of the experiment. It also details all the ways John Searle holds the fucking line in the face of some complicated objections. You should check it out. 

16 comments:

  1. Perfect. I took Minds and Machines as an elective philosophy course in college; you've done a great job explaining one of the most important concepts covered in that class. High five, broski.

    ReplyDelete
  2. I don't know that the 'memorising the tables' response really works. I mean, you're not really talking about the guy plus his pencils and paper. You're talking about the system, which is informational in nature. The person might not understand Chinese, but that doesn't mean the system doesn't. It seems like he's missing the point by stripping away the basic physical trappings without eliminating the information.

    Now, I'm not sure that the objection is a good one to begin with - I wouldn't consider someone able to speak a language unless they're able to use it creatively to generate novel utterances, which clearly the Chinese room cannot. But if you think the Chinese room does speak Chinese, then Searle's answer shouldn't be persuasive to you. Indeed, it seems kind of weird to me that a thinker as sophisticated as Searle would give such a reply.

    ReplyDelete
  3. Great fucking job bro. Coincidence that someone asked you about the Chinese room in the same week that my summer Philosophy of Mind class had to read about Turing and Searle's thought experiment.

    ReplyDelete
  4. Hold on a fucking second there. All bros are taught the multiplication table in school. Now, does a bro understand multiplication after he learns the table? Fuck no, but we still agree that he knows multiplication.
    There are a whole lot of fucking things we use that we have no deeper understanding of, and so what? After all, replacing the multiplication table with an algorithm for multiplication makes no difference; it's just replacing one bunch of information with another bunch of information. Sure, using the new information to get the same output requires more processing power, but in the end, it's just a way of saving space, rather than a completely different way of doing things.

    In the same way, the bro with the memorized tables for Chinese, for all intents and purposes, "knows" Chinese. If you replace the tables in his memory by, say, learning fucking Chinese, then the bro now "understands" Chinese, but all you really did was replace one bunch of data with another bunch of data. Trading storage for processing power does not a conscious mind make.

    ReplyDelete
  5. Yeah, what about what the Anonymous bro said? Who's to say our minds don't function as tables just like the robot's?

    ReplyDelete
  6. David Chalmers has a pretty interesting take. He argues that since it's impossible to scientifically determine whether something is conscious or not, the way to include consciousness in a worldview is to define it into the model. He then argues that anything that can process information has consciousness to some degree, including thermostats.

    ReplyDelete
  7. That was excellent. A lovely thought experiment, too.

    ReplyDelete
  8. "...maybe that 'emotion' is just a byproduct of the way the brain processes things. This turns the thought experiment back on its head - what if the instructions are just so detailed that we only feel emotions because they're part of the instructions?"

    Does this view admit qualia?

    ReplyDelete
  9. @ Anonymous there, once again, it's about the ability to generate novel responses. Memorising a multiplication table is fine for questions on the table, but when you need to know 12.3*261.84, a multiplication table is unlikely to help much. Far more useful is knowing the mechanics. In the same way, if the Chinese Room suddenly has to deal with a new technology that was not invented when the original tables were written up, it's going to have trouble talking intelligently, compared to a person who actually knows Chinese.

    A chimp using sign language is more capable of generating novel utterances than the Chinese room.

    ReplyDelete
  10. His answer to everything is "Syntax cannot create semantics," but c'mon bro, what are the reasons for this? Look at the latest technology in natural language processing. Seems like if a computer can associate some symbols with other symbols (defining words, or at least associating them with others) and use them appropriately, then it can have understanding. The technology sucks right now, but theoretically it seems plausible. It requires a lot more complicated symbol pushing than a magic book with a series of if-statements, but he seems to have provided no reason to jump from "a Chinese-room-type algorithm doesn't create semantics" to "syntax doesn't create semantics."

    ReplyDelete
  11. My only problem is the simplicity of the experiment. The simplicity is part of what I like about it, but it assumes the now and not the later. If artificial intelligence gets to the point of learning, which is at its core simple input-output, who's to say that isn't thinking? If they do everything that we as people do, especially at the base nature of the act itself, when it comes to the mind, how is that not a pretty damned good human being, even if it's a replica?

    ReplyDelete
  12. Learned about this in Computer Power and Human Reason this past semester. Great class and good summary. A tough one to either prove or rebut.

    ReplyDelete
  13. Chinese Room? Cheaty Room, more like. It's a rhetorical trick unworthy of a bro. He begs the question by having a bro (who TOTALLY IS SENTIENT AND CONSCIOUS) doing a mechanical function.

    However, the ultimate conclusion (that free will is basically an illusion) is pretty accurate; it's just not the conclusion he thought it would produce going in.

    We are all Chinese Rooms.

    ReplyDelete
  14. Love this site!
    I'm glad I found it and I can't wait to share it with others.

    ReplyDelete
  15. something that always bugged me about this thought experiment is contingent statements. it's fine to have a table for definitions and stuff, but what if the Chinese bros asked the box "what was the last thing I said to you?" pretty easy question for us 'conscious' folks, but now the table of symbols needs to be infinitely large to fit all the iterations of that one question. In short, you need such a fucked-up reality for this box to exist that any conclusion you draw from it bears nothing meaningful for us bros here, where infinite tables and infinite memories are silly ideas.

    ReplyDelete
  16. Dude, that reminds me of my thoughts while going over psychodynamic development in my psych class. A baby is born with natural instincts; for example, place something in its palm and the baby instinctively grabs and holds with surprising strength (a safety mechanism pointing to our "ape days," perhaps?), and when their lips come in contact with something they pucker, salivate, and start sucking. Input output much?
    Maybe through social development as we grow older our list of "instinctive responses" grows bigger and much more complex. That would be why there are so many different responses and expectations throughout various cultures.
    But even if that were true, that wouldn't necessarily mean that that's all there is to it. We can have both complex instinctive responses (i.e., parasympathetic, involuntary reflexes, and so forth) and also an awareness that separates us from machines. This would explain the shocking actions we take that turn out to surprise even us, such as giving one's life for another. No instinct I've heard of would lead directly to self-destruction.

    ReplyDelete