
Mailbag Monday: The Chinese Room

Mailbag Monday: A weekly segment that covers readers’ questions and concerns about all things Philosophy, Bro, and Philosophy Bro that don’t quite fit anywhere else. Send your questions to philosophybro@gmail.com with ‘Mailbag Monday’ in the subject line.


Steven writes,

What’s good Philosophy Bro?
What are your thoughts on the Chinese room experiment by John Searle? Seems like a problem that only PB could really get to the root of.

Steven, if it weren’t summer I’d think you were trying to get me to write an essay for you, but today’s your lucky day. The Chinese Room thought experiment is one of the most important thought experiments in philosophy of mind, especially in the project of duplicating intelligence. It’s interesting as hell, too, which definitely helps your case.


Before I get into the experiment itself, here’s the problem Searle is addressing - what does it mean for a machine to understand something? Is it possible we’ll ever produce a machine that can think on its own, that can learn and truly understand English? Offhand, it seems possible; after all, if we build a robot that speaks English as well as we do, who understands the nuances of language, who we just can’t tell is a robot, then what exactly is the difference between his claim to know English and your best friend’s claim to know English? This is the basis of the famous 'Turing Test’ for AI - when we reach the point where we can no longer tell a human from a computer just by talking to them, then we have truly achieved Artificial Intelligence.
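
If you like seeing things as code, the structure of the test is dead simple. Here’s a rough Python sketch - the two responder functions are made-up stand-ins, obviously, not anybody’s real chatbot:

    import random

    def human_reply(prompt):
        # Stand-in for a hidden human typing back (hypothetical).
        return "Ha, good question. Give me a second."

    def machine_reply(prompt):
        # Stand-in for the machine under test (also hypothetical).
        return "Ha, good question. Give me a second."

    def turing_test(questions):
        """The judge chats with two hidden parties, A and B, then guesses
        which one is the machine. 'Passing' means the judge can't reliably tell."""
        slots = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:  # shuffle who sits in which seat
            slots = {"A": machine_reply, "B": human_reply}
        for q in questions:
            print("Judge:", q)
            print("  A:", slots["A"](q))
            print("  B:", slots["B"](q))
        guess = input("Judge, which one is the machine? (A/B) ").strip().upper()
        return slots.get(guess) is machine_reply

    # Run this over many judges and many rounds; if the guesses come out
    # right only about half the time, the machine is indistinguishable
    # from the human - it passes the test.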

That makes a lot of sense. It’s closely related to the problem of other minds - if we can’t even be sure what other people thinking looks like, how the fuck can we say anything about what it means for a robot to think? Entire worlds of science fiction are built around robots who mirror humanity in all the important ways - they act hurt when they should act hurt, they act sad when they should act sad, and they can tell hilarious jokes. And as we get better at simulating systems, some bros have claimed that it won’t be long before computers have minds in exactly the same sense as we do. Maybe we can replace Dane Cook with a robot that’s exactly like him, except funny. Wouldn’t that be nice?

John Searle is not amused with these shenanigans. So, for the uninitiated, the Chinese Room thought experiment runs something like this:

A bro who doesn’t know Chinese is in a room with a bunch of files, some pieces of paper, some pencils, and two slots in the wall: “In” and “Out”. The files are “In” and “Out” tables full of Chinese symbols. When a Chinese symbol comes in through the “In” slot, the bro looks it up in a table, sketches out the corresponding “Out” symbol on a piece of paper, and feeds it through the “Out” slot.


That’s it. Some people who do know Chinese are asking this bro some questions, and he’s answering them correctly, even though he has no fucking clue what he’s reading or replying. Now, can we say that this bro knows Chinese? Searle says, “Hell no.” And it’s a pretty convincing argument - this chump is just drawing pictures and passing them out. Someone could have asked him why he’s such an asshole all the time, and he’d have replied, appropriately, “I’m sure I don’t know what you mean. Sorry to have offended you.” And he’s not sorry at all, because he has no idea he offended anyone. He’s just, you know, a guy in a room doin’ his job.

And that’s all machines are really doing - consulting tables of instructions and outputting what they’re told to output. That’s all a machine (in the proper sense of a Turing Machine) ever does. Sure, maybe it looks like the robot is actually thinking, or even having emotions - just like if someone fed “your wife died today” into the Chinese room, he would receive a truly heart-wrenching elegy in return - but all that’s really happening is the bro is copying over some symbols. Once he’s fed that elegy back out through the slot, our intrepid hero will kick back and return to his crossword puzzle without a hint of emotion.
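
In fact, you can fit the whole room in a few lines of code. Here’s a toy Python version - the table entries are placeholders I made up, since the real tables would have to be unimaginably huge:

    # The bro's files: a table mapping incoming symbols to outgoing symbols.
    # A couple of placeholder entries; a convincing table would be enormous.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thanks."
        "你为什么总是这么混蛋？": "我不明白你的意思。对不起。",  # the asshole question, answered politely
    }

    def chinese_room(symbol_in):
        """Look up the incoming symbol and pass the matching symbol back out.
        At no point does any step in here understand Chinese."""
        return RULE_BOOK.get(symbol_in, "请再说一遍。")  # fallback: "Please say that again."

    # From the outside the answers look fluent; inside it's pure pattern-matching.
    print(chinese_room("你好吗？"))  # -> 我很好，谢谢。

Swap the dict for a Turing machine’s transition table and nothing important changes - it’s lookups all the way down.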

Some bros suggest that even if the man doesn’t understand Chinese, some larger system that includes the man - the man plus the tables, the pencils, the whole room - does compose a mind that understands Chinese after all. But Searle just says, “So let the bro memorize the tables and do the whole thing in his head. Now the system is just him, and you still have to say the system knows Chinese even though he doesn’t. Which is just fucking gibberish. Truly embarrassing.”

But of course, even if everything about the experiment functions as described, it’s not an open and shut case. Go figure. The 'standard’ interpretation, which is to say, Searle’s interpretation, is that there is more to the mind than syntax - you can’t squeeze meaning out of pure symbol-shuffling, so we can’t duplicate a mind through complicated formalisms alone. But what if instead of trying to say computers have minds, we turn it around and say that minds aren’t as complicated as we thought? What if the experiment is a perfect demonstration of something else? This is the route that Daniel Dennett takes; rather than saying that the experiment is flawed somewhere in its design, which is a tall fucking order thanks to its elegant simplicity, he points out a crucial assumption of Searle’s: that the mind itself has understanding in a way different from the Chinese room.

What if what we call 'consciousness’ is an illusion, a red herring? You know what it feels like to think something, right? To feel an emotion? If you found out you won the lottery, your heart rate would accelerate, you’d sweat a lot, and you’d have the urge to jump for joy. Maybe you’d cry. You’d be forgiven - it’s a lot of fucking money. Except all of those things are traceable to physical processes, as is every output you produce to any input. It feels like something special to understand what someone says in a way that the man in the room doesn’t seem to - when someone tells you that you won the lottery, you don’t just sketch a corresponding symbol that says, “FUCK YES,” you genuinely feel it, right? Well, maybe. Or maybe that 'emotion’ is just a byproduct of the way the brain processes things. This turns the thought experiment on its head - what if our instructions are just so detailed that we feel emotions only because they’re part of the instructions? Are you sure what we want to call 'consciousness’ isn’t just more detailed steps?

Of course, that argument has some assumptions too - for example, that our thoughts are indeed traceable to deterministic, mechanistic processes. And while Dennett postulates a working model of the mind that makes consciousness an illusion, it’s not clear that his model is the right one - that experience we call consciousness is pretty fucking convincing.

As usual, there’s a lot more to this question than I can cover here. Isn’t this fun?

You can read Searle’s draft of the original paper online: Minds, Brains, and Programs [PDF]

Dennett discusses the Chinese Room and other thought experiments in philosophy of mind in his excellent Consciousness Explained.

The Wikipedia page on the Chinese Room has a good explanation of the room and goes into much more detail about various objections to and reframings of the experiment. It also details all the ways John Searle holds the fucking line in the face of some complicated objections. You should check it out.
