Searle's argument goes like this: someone who speaks no Chinese exchanges notes with a native speaker through a system that tells him which note to respond with. The Chinese speaker thinks he's having a conversation, but the subject of the experiment doesn't understand a word of it. It's a variant of the Turing Test, and while the 'room' passes the test, the lack of understanding on the part of the subject supposedly means that Artificial Intelligence is impossible. The BBC even put together a three-part illustration to help you understand.
I learned about the room at university and I didn't fall for it then. Du Sautoy, to be fair, expresses some skepticism, but the room makes up about a third of an article on AI, which is unforgivable.
In determining whether Searle's room is intelligent, you must consider the entire system, including the note-passing mechanism. The person operating the room might not understand Chinese, but the room as a whole does. The Chinese room is like saying a person isn't intelligent because their elbow fails to get a joke. It's the AI equivalent of Maxwell's demon, a 19th-century attempt to circumvent the second law of thermodynamics.
Every time you get a Deep Thought or a Watson, the debate about the possibility of strong AI (as in just I) resurfaces. It's not a technical question, it's a religious one. If you believe we're intelligent for supernatural reasons, then it's valid to wonder if AI is possible (and you might want to stop reading now). If not, then the fact that we exist means AI might be difficult, but it's not impossible, and it's almost certainly inevitable.
The problem is that teams at IBM and Google cook up very clever solutions in a limited domain, and then people get excited that a chess computer or a trivia computer can eventually 'beat' a human at one tiny thing.
Human intelligence wasn't carefully designed; it's the slow accretion of many tiny hacks, lucky accidents that made us gradually smarter over time. If we want this type of intelligence, it's highly likely that we're going to have to grow it rather than invent it. And when true AI finally arrives, I'll bet we won't understand it any better than the organic kind.
Previously: At the CHM...
- OpenAGI, or why we shouldn't trust Open AI to protect us from the Singularity
- Can I move to a Better Simulation Please?