Part of the problem with the usual LW position on this is that it is based on two mistakes:
1) Eliezer’s mistaken idea that good and evil are arbitrary in themselves, and therefore to be judged by human preference alone.
2) Eliezer’s excessive personal preference for life (e.g. his claim that he expects his extrapolated preference would accept the lifespan dilemma deal, even though such acceptance guarantees instant death).
These two things lead him to judge the matter by that personal preference alone, and therefore to draw the erroneous conclusion that living forever is important.
Good and evil are not arbitrary, and have something to do with what is and what can be. In particular, what cannot be, cannot be good. But living forever cannot be. Therefore living forever is not good, and should not be desired. In a sense this is similar to saying that hoping to win the lottery is a waste of hope, because you won’t actually win. The difference is that it is at least possible to win the lottery, whereas it is entirely impossible to live forever.
I think this post is basically correct. You don’t, however, give an argument that most minds would behave this way, so here is a brief intuitive argument for it. A “utility function” does not mean something that is maximized in the ordinary sense of the word; it just means “what the thing does in all situations.” Look at computers: what do they do? In most situations, they sit there and compute things, and do not attempt to do anything in particular in the world. Scaling up their intelligence will not necessarily change their utility function much; it will mostly lead to computers that sit there and compute, without trying to do much in the world. That is to say, AIs will be weakly motivated.

Most humans are weakly motivated too, and most of the strength of their motivation comes not from intelligence but from the desires that came from evolution. Since AIs will not have that evolutionary history, they will be even more weakly motivated than humans, assuming a random design.