It would be nice if more people would think about the fate of humans in a world which does not care for them.
That is a pretty bad scenario, and many people seem to think that human beings would just have their atoms recycled in that case. As far as I can tell, that seems to be mostly because that is the party line around here.
Universal Instrumental Values which favour preserving the past may well lead to the preservation of humans. More interesting still is the hypothesis that our descendants would be especially interested in 20th-century humans, due to their utility in understanding aliens, and would repeatedly simulate or reenact the run-up to superintelligence, to see what the range of possible outcomes is likely to be. That might explain some otherwise-puzzling things.
It’s the party line at LW maybe, but not SingInst. 21st century Earth is a huge attractor for simulations of all kinds. I’m rather interested in coarse simulations of us run by agents very far away in the wave function or in algorithmspace. (Timelessness does weird things, e.g. controlling non-conscious models of yourself that were computed in the “past”.) Also, “controlling” analogous algorithms is pretty confusing.
It’s the party line at LW maybe, but not SingInst.
If so, they keep pretty quiet about it! I expect it would be “more convenient” for them if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many of them would be inclined to keep some humans knocking around, that dilutes the “save the world” funding pitch.
I think it’s epistemically dangerous to guess at the motivations of “them” when there are so few people and all of them have diverse views. There are only a handful of Research Fellows, and it’s not like they have blogs where they talk about these things. SingInst is still really small and really diverse.
Right—so, to be specific, we have things like this:
I think I have to agree with the Europan Zugs in disagreeing with that.