No book reviews this month.
Marcus du Sautoy, writing for BBC News, brings up Searle's Chinese Room in 'Can computers have true artificial intelligence?'
Searle's argument: someone who speaks no Chinese exchanges notes with a native speaker, following a system that tells him which note to respond with. The Chinese speaker thinks he's having a conversation, but the subject of the experiment doesn't understand a word of it. It's a variant of the Turing Test, and Searle concludes that because the 'room' passes the test while the subject understands nothing, artificial intelligence is impossible. The BBC even put together a three-part illustration to help you understand.
I learned about the room at university and I didn't fall for it then. Du Sautoy, to be fair, expresses some skepticism, but the room makes up about a third of an article on AI, which is unforgivable.
To determine whether Searle's room is intelligent, you must consider the entire system, including the note-passing mechanism. The person operating the room might not understand Chinese, but the room as a whole does. The Chinese Room is like saying a person isn't intelligent because their elbow fails to get a joke. It's the AI equivalent of Maxwell's demon, a 19th-century thought experiment that appeared to circumvent the second law of thermodynamics.
Every time you get a Deep Thought or a Watson, the debate about the possibility of strong AI (as in just I) resurfaces. It's not a technical question, it's a religious one. If you believe we're intelligent for supernatural reasons, then it's valid to wonder whether AI is possible (and you might want to stop reading now). If not, then the fact that we exist means that AI might be difficult, but it's not impossible, and it's almost certainly inevitable.
The problem is that teams at IBM and Google cook up very clever solutions in a limited domain, and then people get excited that a chess computer or a trivia computer can eventually 'beat' a human at one tiny thing.
Human intelligence wasn't carefully designed, it's the slow accretion of many tiny hacks, lucky accidents that made us slowly smarter over time. If we want this type of intelligence it's highly likely that we're going to have to grow it rather than invent it. And when true AI finally arrives I'll bet that we won't understand it any better than the organic kind.
Photo Credit: Stuck in Customs cc
California just canceled a $2 billion project to link 58 courts, having already spent over half a billion. In the UK, half a billion pounds was wasted on a failed attempt to develop software for the emergency services. A recent, though controversial, study estimates that global IT failures cost $6 trillion a year.
I've thought about this before, but perhaps the time is right. It's a software system that analyzes the chances of success for any major IT project. In California in particular, we could pass a ballot measure mandating that this system be used to accept or reject any software project that would cost the state more than, say, $50k. The core of the system has already been written and looks like this:
using System;
using System.Threading;

Thread.Sleep(10000); // look like you're doing something
Console.WriteLine("No! Use Google Docs instead."); // reject proposal
All it needs is a nice interface that lets you upload documents and then shows a progress bar while the in-depth 'analysis' takes place. I'd be willing to do this work for the state for no more than $200 million, plus costs and change orders. Shouldn't take more than a decade, either.
Governor Brown, call me.
Image Credit: Images_of_Money cc