When it is technologically feasible for our descendants to simulate our world, they will not, because it will seem cruel (conditional on friendly descendants, such as FAI or successful uploads with gradual adjustments to architecture). I would be surprised if it were different, but not THAT surprised. (~70%)
I agree with you up ’til the first comma.
ETA: … the only comma, I guess.
Upvoted for disagreement: postulating that most of my measure comes from simulations helps resolve a host of otherwise incredibly confusing anthropic questions.
I’m sure there’s more to it than came across in that sentence, but that sounds like shaky grounds for belief.
Scientifically it’s bunk, but Bayesically it seems sound to me. A simple hypothesis that explains many otherwise unlikely pieces of evidence.
That said, I do have other reasons, but explaining the intuitions would not fit within the margins of my time.
I like thinking about being in a simulation, and since it makes no practical difference (except if you go crazy and think it’s a good idea to test every possible means of ‘praying’ to any possible interested and intervening simulator god), I don’t think we need to agree on the odds that we are simulated.
However, I’d say that it seems impossible to me to defend any particular choice of prior probability for the simulation vs. non-simulation cases. So while it matters how well such a hypothesis explains the data, I have no idea if I should be raising p(simulation) by 1000db from −10db or from −10000000db. If you have 1000db worth of predictions following from a disjunction over possible simulations, then that’s of course super interesting and amusing even if I can’t decide what my prior belief is.
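(Assuming the thread’s “db” means decibans, i.e. 10·log10 of the odds, as in Jaynes — a quick sketch of why the prior matters so much here. Evidence adds in log-odds, so the same 1000 db update lands in wildly different places depending on where you start. The function name is mine, not from the thread.)

```python
def db_to_prob(db):
    """Convert log-odds in decibans (10 * log10(odds)) to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# The same 1000 db of evidence applied to the two candidate priors:
print(db_to_prob(-10 + 1000))        # from a -10 db prior: effectively 1
print(db_to_prob(-10000000 + 1000))  # from a -10000000 db prior: effectively 0
```

Which is the point: without some way to pin down the prior, even a huge evidential update leaves the posterior anywhere from near-certainty to near-impossibility.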
Upvoted because I disagree with your first statement.
Assuming reasonably complex values of “simulate”, i.e., Second Life doesn’t count.