Still, I think Hanson is poking at something important and uncomfortable. In particular: suppose we grant him the empirics. Suppose, indeed, that even without AI, the default values of future humans would “drift” until they were, relative to us, effectively paperclippers, such that the world they create would be utterly valueless from our perspective. What follows? Well, umm, if you care about the future having value … then what follows is a need to exert more control. More yang. It is, indeed, the “good future” part of the alignment problem all over again (though not the “notkilleveryone” part).
I recently wrote a post discussing exactly that dilemma (allowing for the fact that technologies such as genetic engineering and cyborging will make human values far more mutable): The Mutable Values Problem in Value Learning and CEV, part of my AI, Alignment, and Ethics sequence.