News and commentary from the cross-platform scripting community.
But does it think? By Wesley Felter, email@example.com.
Deep Blue beat Kasparov at chess, and everyone asks, "Is this significant?" The question of whether humans or computers are better at chess has now been answered, but the question of whether this matters hasn't.
But does it think?
In Deep Blue's case, the answer is no. Computers are getting faster more rapidly than humans are getting smarter, so at some point the lines had to cross. Chess is just a math problem, although a very large one. It's possible to argue that most things are just math (or set theory) problems; for many of them we just haven't been able to accurately define the problem yet.
Years ago, a noted AI researcher named Doug Lenat came up with a program called AM, for Automated Mathematician. AM figured out, by itself, theorems about numbers. It didn't just solve math problems, it found them in the first place. As revolutionary as that was, AM never came up with Gödel's Incompleteness Theorem, which shows that there are true mathematical statements that can never be proved. There's a question: could a machine ever come up with the idea of an unprovable statement? I don't know.
But solving math problems isn't thinking. I don't know what thinking is, but something tells me there's more to it.
While I can confidently say that Deep Blue isn't a challenge to the intellectual superiority of humans, I'm less sure about other systems like Cyc. Does it think? It sure looks like it does. Maybe we should ask it.