If anyone can give me the CliffsNotes version of this, I’d be appreciative. I am a big LW fan, but aside from the obsession with the Singularity, I more or less share Eliezer1997’s mode of thinking. Furthermore, making clever plans to work around the holes in your thinking seems like the wholly rational thing to do—in fact, this entire post seems like a direct counterargument to The Proper Use of Doubt: http://lesswrong.com/lw/ib/the_proper_use_of_doubt/
“The Proper Use of Doubt” doesn’t suggest working around the holes in your thinking. It suggests filling them in.
I think (and I’m not attempting a short version of Eliezer’s essay, because I can’t do it justice) that part of what’s going on is that people have to make decisions based on seriously incomplete information all the time, and they do. People build and modify governments, get married, and build bridges, all without a deep understanding of people or matter—and they need to make those decisions anyway. There’s enough background knowledge, and a sufficiently forgiving environment, that there’s an adequate chance of success and some limit on the size of disasters.
What Eliezer missed in 1997 was that AI is a special case, one that could only be identified by applying much less optimism than is appropriate in ordinary life.
Working around the holes in your thinking is all well and good until you face a problem where getting the correct answer really matters. At some point you have to assess the impact of the holes on your predictions, and you can’t do that while you’re working around them.