Part of the problem with the usual LW position on this is that it is based on two mistakes:
1) Eliezer’s mistaken idea that good and evil are arbitrary in themselves, and therefore to be judged by human preference alone.
2) Eliezer’s excessive personal preference for life (e.g. his claim that he expects his extrapolated preference would accept the lifespan dilemma deal, even though such acceptance all but guarantees instant death; see the sketch just below).
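For readers who haven’t seen the lifespan dilemma, here is a minimal sketch of why taking every offer is fatal. The starting probability of 0.8 and the per-step factors are illustrative assumptions of mine, not Eliezer’s exact figures. Suppose each accepted offer multiplies the promised lifespan by a factor $k > 1$ and multiplies the probability of surviving at all by $(1-\varepsilon)$ for some fixed $\varepsilon \in (0,1)$. Starting from lifespan $L_0$ with survival probability $p_0 = 0.8$, after $n$ accepted offers

$$\mathbb{E}[\text{lifespan}] = p_0\,(1-\varepsilon)^n k^n L_0, \qquad \Pr[\text{survive}] = p_0\,(1-\varepsilon)^n .$$

If $(1-\varepsilon)k > 1$, the expected lifespan keeps growing with every offer, yet $\Pr[\text{survive}] \to 0$ as $n \to \infty$: an agent who always takes the “better” deal ends up facing almost certain immediate death.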
These two mistakes lead him to judge the matter by that excessive preference for life, and therefore to draw the erroneous conclusion that living forever is important.
Good and evil are not arbitrary, and have something to do with what is and what can be. In particular, what cannot be, cannot be good. But living forever cannot be. Therefore living forever is not good, and should not be desired. In a sense this is similar to saying that hoping to win the lottery is a waste of hope, because you won’t actually win. The difference is that it is at least possible to win the lottery, whereas it is entirely impossible to live forever.
If I remember correctly, Eliezer has a preference for living a very long time even if literal infinity turns out to be impossible under our universe’s physics (“until the stars go out”, etc.). Living for a very long time, say on a scale of thousands or millions of years, is not impossible in principle. Of course it may require changes to the human body on many levels (repairing cells, increasing brain capacity), but that more or less comes with transhumanism anyway.
My understanding is that “good” literally is (coherently extrapolated) human preference. But human preference is not completely arbitrary, because it was shaped by evolution. I would expect most intelligent species shaped by evolution to prefer, in general, life over death and pleasure over pain. Other values may reflect our particular biological path, for example the preference for friendship or love. I can imagine that to a superintelligent spider, a universe where everyone hates everyone, but the major players still cooperate for purely game-theoretical reasons to achieve win/win outcomes (as in the toy simulation below), could seem perfectly “good”.
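To make the “cooperation without goodwill” point concrete, here is a minimal sketch (my own illustration, not part of the original discussion) of two purely self-interested agents in an iterated prisoner’s dilemma. Neither has any fondness for the other; mutual cooperation is sustained only because each one retaliates against defection, so defecting does not pay over repeated play.

# Iterated prisoner's dilemma between two purely self-interested agents.
# Cooperation emerges with no goodwill at all, only because each player
# punishes defection on the following round (tit-for-tat), which makes
# defection unprofitable over repeated play.

PAYOFFS = {            # (my move, their move) -> my payoff
    ("C", "C"): 3,     # mutual cooperation: the win/win outcome
    ("C", "D"): 0,     # I cooperate, they defect: sucker's payoff
    ("D", "C"): 5,     # I defect, they cooperate: one-shot temptation
    ("D", "D"): 1,     # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then simply copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=100):
    a_moves, b_moves = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = tit_for_tat(b_moves)   # A reacts only to B's past behaviour
        move_b = tit_for_tat(a_moves)   # B reacts only to A's past behaviour
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        a_moves.append(move_a)
        b_moves.append(move_b)
    return score_a, score_b

print(play())  # (300, 300): full mutual cooperation, purely out of self-interest

Replace either strategy with always-defect and both scores drop, which is the whole game-theoretic reason the “hateful” players keep cooperating.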