I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.
But Lanier wasn’t saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns, reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at once, you end up reasoning in a domain that is now only, say, 50% fact, and that reminds him of the “reasoning” of religious people.
Knowledge is a big interconnected web, with each fact reinforcing the others. You have to grow it from the edge. And our techniques are designed for edge space.