Ancestor simulations are not that implausible.
They represent the case where we can say the most (if it is true that we are in an ancestor simulation).
However, certainly there are other possibilities.
I didn’t say they were implausible. I said there was a vastly larger class of possible simulators.
First, we can say the same things about them whether or not we’re actually being simulated. Second, everything we can say about them is predicated on our descendants being similar to ourselves, which (I claim) is an unreasonable assumption.
Sure—I just meant that they were worthy of consideration.
Well, yes, but we have to weight them according to the chances of their actually being run. Take movies, for instance. A non-trivial fraction of them are simulations of the past—since we are interested in the past.
As far as I can tell, your argument is that the proportion of simulations of us that are run by our descendants is significant with respect to the total number of simulations of us, because the number of simulations we currently run of our past is significant with respect to the total number of simulations of our past.
My argument is that we have no data about either proportion, so it is unhelpful to specialize our simulation arguments to the subclass of simulations made by our descendants.
I don’t think I ever said that. Rather, I was criticising the idea of comparing the number of past simulations with the number of “possible simulations”—by pointing out that considering the number of possible simulations ignored the important issue of motive—and I gave an example illustrating how motive might matter.
Obvious alternatives to ancestor simulations include optimisationverse and the adapted universe. We do have a whole universe’s worth of data about which idea is more likely.
Oh, okay. Relevant in comparison to some other ad hoc simulator theory. I can understand that.