This, along with the simulation argument, is why I’m not too emotionally stressed out by the feelings of impending doom that seem to afflict some people familiar with SIAI’s ideas. My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste free to work in peace. :)
Also, we can extend the argument a bit for those worried about “measure”. First, an FAI might be able to recreate people from historical clues (writings, recordings, others’ memories of them, etc.). But suppose that’s not possible. An FAI could still create a very large number of historically plausible people, and assuming FAIs in other Everett branches do the same, the fact that I probably won’t be recreated in this branch will be compensated for by the fact that I’ll be recreated in other branches where I currently don’t exist, thus preserving or even increasing my overall measure.
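To put a toy model on the measure claim (my own illustrative symbols, treating branches as equally weighted): suppose I currently exist in a fraction $p$ of branches, and each branch’s FAI, when it generates its large ensemble of historically plausible people, happens to instantiate someone with my identity with probability $q$, whether or not I ever actually lived there. Then my recreated measure is roughly $p \cdot q + (1-p) \cdot q = q$, which no longer depends on whether my own branch gets me right; the claim that measure is preserved or increased is just the claim that $q$ ends up at least as large as $p$.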
Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I’m pretty worried about “waking up” in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.
Actually, I probably shouldn’t have been so optimistic even before the recent events...
I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn’t seem like it justifies, say, doubling the amount of worry about these things.
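As a toy illustration of how the chain attenuates the update (numbers made up for the example): if each link passes along only half of the shift from the previous step, then an update of size $d$ about our own civilization’s sanity contributes only about $(1/2)^3 \cdot d = d/8$ to the bottom-line estimate of how sane outcomes are, nowhere near enough to double the worry.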
I agree recent events don’t justify a huge update by themselves if one started with a reasonable prior. It’s more that I had somehow failed to consider that scenario at all; the recent events made me consider it, and that’s why they triggered a big update for me.
Now I’m curious. Does studying history make you update in a similar way? I feel that these times are not especially insane compared to the rest of history, though the scale of the problems might be bigger.
History is not one of my main interests, but I would guess yes, which is why I said “Actually, I probably shouldn’t have been so optimistic even before the recent events...”
Agreed. I think I was under the impression that western civilization managed to fix a lot of the especially bad epistemic pathologies in a somewhat stable way, and was unpleasantly surprised when that turned out not to be the case.
(Off-topic: Is this a decision theoretic thing or an epistemic thing? That is, do you really think the stars are actually out there to pluck in a substantial fraction of possible worlds, or are you just focusing on the worlds where they are there to pluck because it seems like we can’t do nearly as much if the stars aren’t real? Because I think I’ve come up with some good arguments against the latter and was planning on writing a post about it; but if you think the former is the case then I’d like to know what your arguments are, because I haven’t seen any really convincing ones. (Katja suggested that the opposite hypothesis—that superintelligences have already eaten the stars and are just misleading us, or we are in a simulation where the stars aren’t real—isn’t a “simple” hypothesis, but I don’t quite see why that would be.) What’s nice about postulating that the stars are just an illusion is that it means there probably isn’t actually a great filter, and we aren’t left with huge anthropic confusions about why we’re apparently so special.)
Assuming most worlds start out lifeless like ours, they must have lots of resources for “plucking” until somebody actually plucks them… I guess I’m not sure what you’re asking, or what is motivating the question. Maybe if you explain your own ideas a bit more? It sounds like you’re saying that we may not want to try to pluck the stars that are apparently out there. If so, what should we be trying to do instead?
I guess I didn’t clearly state the relevant hypothesis. The hypothesis is that the stars aren’t real; they’re just an illusion or a backdrop put there by superintelligences so we don’t see what’s actually going on. This would explain the Fermi paradox (no Great Filter needed) and would imply that even if we build an AI, it doesn’t necessarily get to eat all the stars. If the stars are out there, we should pluck them—but are they out there? They’re like a stack of twenties on the ground, and it seems plausible they’ve already been plucked without our knowing. Maybe my previous comment will make more sense now. I’m wondering whether your reason for focusing on eating all the galaxies is that you think the galaxies actually haven’t already been eaten, or whether it’s that, even if they probably have been eaten and our images of them are an illusion, most of the utility we can get is still concentrated in worlds where the galaxies haven’t been eaten, so we should focus on those worlds. (This is sort of orthogonal to the simulation argument, because it doesn’t require that our metaphysical ideas about how simulations work make sense; the mechanism for the illusion works by purely physical means.)
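To spell out that second possibility with a toy expected-value calculation (numbers invented purely for illustration): if the probability that the stars are real is only $0.1$, but the value reachable when they are real is, say, $10^6$ times the value reachable when they’re a fake backdrop, then expected value is roughly $0.1 \cdot 10^6 + 0.9 \cdot 1 \approx 10^5$, dominated by the “stars are real” worlds even at low probability, which would justify focusing on those worlds.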
If that’s the case, then I’d like to break out by building our own superintelligence to find and exploit whatever weaknesses might exist in the SIs that are boxing us in, or failing that, negotiate with them for a share of the universe. (Presumably they want something from us, or else why are they doing this?) Does that answer your question?
BTW, I’m interested in the “good arguments” that you mentioned earlier. Can you give a preview of them here?
If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we’re being fooled by an SI?
It would seem it is trying to fool just the unenlightened masses. But the chosen few who see the Truth shall transcend all that...
Will, as per amit’s point, how do you anticipate your decision to tell us about the superintelligent fake stars hypothesis influencing the decision of the superintelligences to create (or otherwise cause to exist) human life on earth with the illusion of living in a free universe?
All things considered (and assuming that hypothesis as a premise) I think you might have just unmade us a little bit. How diabolical!
I agree with your rationale, i.e., assuming they’re actually around, the superintelligences clearly aren’t trying that hard to be quiet, and are instead trying to stay on some sort of edge of influence or detectability. Remember, it’s only atheists who don’t suspect supernatural influence; a substantial fraction of humans already suspects weird shit is going on. Not “the chosen few” so much as “the chosen multitude”. Joke’s only on the atheists. Presumably, if they wanted us to entirely discount the possibility that they were around, it would be easy for them to influence memetic evolution such that supernatural hypotheses were even less popular, e.g. by subtly keeping the U.S. from entering WWII and thus letting the Soviet Union capture all of Europe, and so on and so forth in that vein. (I’m not a superintelligence; surely they could come up with better memetic engineering strategies than I can.)
Nitpick: they don’t have to have chosen to create or cause us to exist as such, just left us alone. The latter is more likely because of game theoretic asymmetry (“do no harm”). Not sure if you intended that to be in your scope.
The intended scope was inclusive—I didn’t want to go overboard with making the caveat ‘or’ chain mention everything. The difference between actions and inactions becomes rather meaningless when it comes to superintelligences that are controlling everything around us, including giving us an entire fake universe to look at.
Let’s hear them (the good arguments you mentioned).
Forthcoming in the next year or so, in my treatise on theology & decision theory & cosmology & moral philosophy, which for better or worse I am going to write in English and not in Latin.
But anyway, I’ll say now that my arguments don’t work for utilitarians, at least not for preference utilitarianism as it’s normally conceived. Even if the argument is valid, it still probably doesn’t sway people who are using e.g. the parliamentary meta-moral system and who give much weight to utilitarian intuitions.
Oh thank all the gods of counterfactual violence.
This (being mollified by the thought of never experiencing dying, or of waking up from an ancestral simulation) feels very close to my moral intuitions, but I cognitively believe very strongly that things can be bad even if there are no observers left to regret that they happened. That follows logically from the moral intuition that people suffering where you can’t see them is still bad.
...is that a natural moral intuition? I don’t think so. That could explain the dissonance.