Yes … I’ve never been quite clear what practical difference it makes whether I’m in a simulation or not. (The Wikipedia article doesn’t really give me much, though it’s possible I’m just underthinking it.)
It doesn’t help that humans do in fact appear to be brains in vats, sustained on a nutrient solution and fed external sensory input—the vat being our skulls. I posit that this is why “brain in a vat” arguments are of philosophical interest to humans.
Right. So, some papers exist on that topic. Perhaps start with “How To Live In A Simulation” by Robin Hanson.
As much as I like Hanson, generalizing from fictional evidence seems the wrong way to go. In that essay, he also only considers being simulated by our descendants, and tacitly assumes that our descendants will be human.
Ancestor simulations are not that implausible.
They represent the case where we can say the most (if it is true that we are in an ancestor simulation).
However, certainly there are other possibilities.
I didn’t say they were implausible. I said there was a vastly larger class of possible simulators.
First, we can say the same things about them whether or not we’re actually being simulated. Second, everything we can say about them is predicated on our descendants being similar to ourselves, which (I claim) is an unreasonable assumption.
Sure—I just meant that they were worthy of consideration.
Well, yes, but we have to weight possible simulations by the chances of their actually being run. Take movies, for instance: a non-trivial fraction of them are simulations of the past, since we are interested in the past.
As far as I can tell, your argument is that the proportion of simulations of us that are run by our descendants is significant with respect to the total number of simulations of us, because the number of simulations we currently run of our past is significant with respect to the total number of simulations of our past.
My argument is that none of us has any data about either proportion, so it is ineffective to specialize our simulation arguments to the subclass of simulations made by our descendants.
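To make the two proportions concrete, here is a rough formalization (the counts $N_d$ and $N_o$ are introduced purely for illustration, and each simulation is weighted equally): write $N_d$ for the number of simulations of us run by our descendants and $N_o$ for the number run by any other simulators. Conditional on being simulated at all,

$$P(\text{ancestor simulation} \mid \text{simulated}) = \frac{N_d}{N_d + N_o},$$

so the disagreement here is over whether we have any evidence about the ratio $N_d/(N_d+N_o)$, not over whether ancestor simulations are possible.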
I don’t think I ever said that. Rather, I was criticising the idea of comparing the number of past simulations with the number of “possible simulations” by pointing out that counting possible simulations ignores the important issue of motive, and I gave an example illustrating how motive might matter.
Obvious alternatives to ancestor simulations include optimisationverse and the adapted universe. We do have a whole universe’s worth of data about which idea is more likely.
Oh, okay. Relevant in comparison to some other ad hoc simulator theory. I can understand that.