I think (and I’m not attempting a short version of Eliezer’s essay because I can’t do it justice) that part of what’s going on is that people have to make decisions based on seriously incomplete information all the time, and they do. People build and modify governments, get married, and build bridges, all without a deep understanding of people or matter, and those decisions still have to be made. There’s enough background knowledge, and a sufficiently forgiving environment, that there’s an adequate chance of success and some limit to the size of disasters.
What Eliezer missed in 1997 was that AI is a special case, one that can only be identified by applying much less optimism than is appropriate for ordinary life.