Okay, yeah, I definitely agree that information theoretic extinction is unlikely. I think that basically no one immediately realizes that’s what you’re talking about though, ’cuz that’s not how basically anyone else uses the word “extinction”; they mostly imagine the naive all-humans-die-in-fiery-blast scenario, and when you say you don’t think that will happen, they’re like, of course that will happen, but what you really mean is a non-obvious thing about information value and simulations and stuff. So I guess you’re implicitly saying “if you’re too uncharitable to guess what credible thing I’m trying to say, that’s your problem”? I’m mostly asking ’cuz I do the same thing, but find that it generally doesn’t work; there’s no real audience, alas.
we get wiped out by aliens.
Any aliens that wipe us out would have to be incredibly advanced, in which case they probably won’t throw away their game-theoretic calculations. Especially if they’re advanced enough to be legitimately concerned about acausal game theory. And they’d have to do that within the next century or so, or else they’ll only find posthumans, in which case they’re definitely going to learn a thing or two about humanity. (Unless superintelligence goal systems are convergent somehow.)
I definitely agree that information theoretic extinction is unlikely. I think that basically no one immediately realizes that’s what you’re talking about though, ’cuz that’s not how basically anyone else uses the word “extinction” [...]
So: I immediately went on to say:
I figure, it is most likely that there will be instantiated humans around, though.
That is the same use of “extinction” that everybody else uses. This isn’t just a silly word game about what the term “extinction” means.
I still don’t think people feel like it’s the same for some reason. Maybe I’m wrong. I just thought I’d perceived unjustified dismissal of some of your comments a while back and wanted to diagnose the problem.
It would be nice if more people would think about the fate of humans in a world which does not care for them.
That is a pretty bad scenario, and many people seem to think that human beings would just have their atoms recycled in that case. As far as I can tell, that seems to be mostly because that is the party line around here.
Universal Instrumental Values which favour preserving the past may well lead to preservation of humans. More interesting still is the hypothesis that our descendants would be especially interested in 20th-century humans—due to their utility in understanding aliens—and would repeatedly simulate or reenact the run-up to superintelligence—to see what the range of possible outcomes is likely to be. That might explain some otherwise-puzzling things.
It’s the party line at LW maybe, but not SingInst. 21st century Earth is a huge attractor for simulations of all kinds. I’m rather interested in coarse simulations of us run by agents very far away in the wave function or in algorithmspace. (Timelessness does weird things, e.g. controlling non-conscious models of yourself that were computed in the “past”.) Also, “controlling” analogous algorithms is pretty confusing.
It’s the party line at LW maybe, but not SingInst.
If so, they keep pretty quiet about it! I expect for them it would be “more convenient” if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many of them would be inclined to keep some humans knocking around, that dilutes the “save the world” funding pitch.
I think it’s epistemically dangerous to guess at the motivations of “them” when there are so few people and all of them have diverse views. There are only a handful of Research Fellows and it’s not like they have blogs where they talk about these things. SingInst is still really small and really diverse.
Right—so, to be specific, we have things like this:
I think I have to agree with the Europan Zugs in disagreeing with that.