I definitely agree that information theoretic extinction is unlikely. I think that basically no one immediately realizes that’s what you’re talking about though, ’cuz that’s not how basically anyone else uses the word “extinction” [...]
So: I immediately went on to say:
I figure, it is most likely that there will be instantiated humans around, though.
That is the same sense of “extinction” that everybody else uses. This isn’t just a silly word game about what the term “extinction” means.
I still don’t think people feel like it’s the same for some reason. Maybe I’m wrong. I just thought I’d perceived unjustified dismissal of some of your comments a while back and wanted to diagnose the problem.
It would be nice if more people would think about the fate of humans in a world which does not care for them.
That is a pretty bad scenario, and many people seem to think that human beings would just have their atoms recycled in that case. As far as I can tell, that seems to be mostly because that is the party line around here.
Universal Instrumental Values which favour preserving the past may well lead to preservation of humans. More interesting still is the hypothesis that our descendants would be especially interested in 20th-century humans—due to their utility in understanding aliens—and would repeatedly simulate or reenact the run up to superintelligence—to see what the range of possible outcomes is likely to be. That might explain some otherwise-puzzling things.
It’s the party line at LW maybe, but not SingInst. 21st century Earth is a huge attractor for simulations of all kinds. I’m rather interested in coarse simulations of us run by agents very far away in the wave function or in algorithmspace. (Timelessness does weird things, e.g. controlling non-conscious models of yourself that were computed in the “past”.) Also, “controlling” analogous algorithms is pretty confusing.
It’s the party line at LW maybe, but not SingInst.
If so, they keep pretty quiet about it! I expect it would be “more convenient” for them if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many such superintelligences would be inclined to keep some humans knocking around, that dilutes the “save the world” funding pitch.
I think it’s epistemically dangerous to guess at the motivations of “them” when there are so few people and they all have diverse views. There are only a handful of Research Fellows, and it’s not like they have blogs where they talk about these things. SingInst is still really small and really diverse.
Right—so, to be specific, we have things like this:
I think I have to agree with the Europan Zugs in disagreeing with that.