It is unreasonable to think that all simulations are ancestral anyway.
Point taken regarding ancestor simulations, but I don’t think that resolves the question. What we choose to do is still evidence about what others will choose to do, whether the choice is about simulating ancestors or about other possible worlds.
as soon as you can make a complete ancestral simulation … you can be >99% sure that you live in a simulation
In Bostrom’s formulation there is also the possibility that civilizations capable of ancestor simulations will overwhelmingly choose not to. It’s not obvious to me that this is one of the horns of the trilemma to reject.
I can think of at least two reasons why it might be a convergent behavior not to run ancestor simulations:
1) Civilizations capable of running ancestor simulations might overwhelmingly have morals that dissuade them from subjecting sentient beings to such low standards of living as their ancestors had.
2) Such civilizations may wish to exert acausal control over whether they are in a simulation. This is the motivation for my question.
In Bostrom’s formulation there is also the possibility that civilizations capable of ancestor simulations will overwhelmingly choose not to. It’s not obvious to me that this is one of the horns of the trilemma to reject.
Again, you are making Bostrom’s mistake of focusing on ancestral simulations. This is likely why this option seems plausible to you, as it did to him: it is much more plausible that people will decide not to run any ancestral simulations for moral reasons than that they will decide not to run any simulations whatsoever.
1) Civilizations capable of running ancestor simulations might overwhelmingly have morals that dissuade them from subjecting sentient beings to such low standards of living as their ancestors had.
This is theoretically possible, but realistically there is little reason to expect all posthuman civilizations to have such morals with regard to arbitrary creatures. We certainly don’t seem to be the kind of civilization that would sacrifice the utility gained by running simulations for questionable moral reasons, or at least not with a probability close to 1. Additionally, the mind space for posthuman agents is huge; you would need a large amount of evidence to conclude that all posthuman civilizations are likely to be so moral.
Such civilizations may wish to exert acausal control over whether they are in a simulation. This is the motivation for my question.
Similarly, mind space is huge, and it seems very unlikely by default that most posthuman societies would never run a simulation on that basis alone. Furthermore, it is enough for just 1 in every billion posthuman civilizations to run simulations for it to be more likely than not that we are in a simulation, provided that the average simulating civilization runs more than a billion simulations over its history.
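A minimal back-of-the-envelope sketch of that arithmetic (the symbols here are my own shorthand, loosely following Bostrom’s notation, not wording from this thread): suppose a fraction $f_p$ of posthuman civilizations run simulations, and each such civilization runs $\bar{N}$ simulations on average, each containing roughly as many observers as a real civilization’s history. Then the fraction of all observers who are simulated is

$$f_{\text{sim}} = \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}.$$

With $f_p = 10^{-9}$ and $\bar{N} > 10^{9}$ we get $f_p\,\bar{N} > 1$ and hence $f_{\text{sim}} > 1/2$, which is exactly the “more likely than not” threshold claimed above.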
Moreover, for most posthuman civilizations to run no simulations at all, there would need to be some 100% effective way of preventing rogue agents from developing them. This, too, is possible but unlikely. Even if somehow every posthuman society decided never to run a single simulation (for which there is no evidence), it is unlikely that all of those civilizations would also have a world-wide simulation-prevention mechanism in place from the very moment simulations become technologically possible in their world.
you are making Bostrom’s mistake of focusing on ancestral simulations
Again, this seems irrelevant. I talked about ancestor simulations because that’s how it’s worded in the Simulation Argument, but as I said in the post above, as far as I can tell the logic doesn’t depend on it. Just replace ‘simulations of ancestors’ with ‘simulations of worlds containing sentient beings’.
As for the rest of your post, those are fine arguments for why the second horn of the trilemma should be rejected. I don’t find them absolutely convincing, so I still assign non-negligible credence to option 2 (and thus still find the acausal control question interesting), but I don’t have strong counterarguments either, so if you do assign negligible credence to option 2, perhaps we’ll have to agree to disagree on this point.
I do, and based on the wording of your comment, you have no real reason not to either.
Did you miss this part?
Nope. They weren’t meant to be absolutely convincing; option 2 is possible, just not probable.
Perhaps. I will have to think about it some more.