I was assuming the worst, and guessing that there are diminishing marginal returns once your odds of a successful takeover get above ~50%; so instead of going all in on accurately predicting the weakest and ripest target universe, you hedge and target a few universes.
There are massive diminishing marginal returns; in a naive model you’d expect essentially *every* universe to get predicted in this way.
But Wei Dai’s basic point still stands. The speed prior isn’t the actual prior over universes (i.e. it doesn’t reflect the real degree of moral concern that we’d use to weigh consequences of our decisions in different possible worlds). If you have some data that you are trying to predict, you can do way better than the speed prior by (a) using your real prior to estimate or sample from the actual posterior distribution over physical law, and (b) using engineering reasoning to make the utility-maximizing predictions, given that faster predictions will be given more weight.
(You don’t really need this to run Wei Dai’s argument, because there seem to be dozens of ways in which the aliens get an advantage over the intended physical model.)
I think what you’re saying is that the following don’t commute:
“real prior” (universal prior) + speed update + anthropic update + can-do update + worth-doing update
compared to
universal prior + anthropic update + can-do update + worth-doing update + speed update
When the universal prior is next to the speed update, this is naturally conceptualized as a speed prior; when the speed update comes last, it is naturally conceptualized as “engineering reasoning” identifying faster predictions.
I’m happy to go with the second ordering if you prefer, in part because I think they do commute: all these updates just change the weights on the measures that get mixed together and piped to output during the “predict accurately” phase.
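To illustrate why the updates commute: if each update is modeled as a multiplicative reweighting of the same underlying measures (which is the picture here), then each measure’s final weight is just the product of its factors, and the order of application washes out after normalization. A toy sketch, where the hypothesis names and all numeric factors are invented purely for illustration:

```python
# Toy illustration that multiplicative reweightings commute.
# All hypothesis names and factors below are made up; the point is only
# that each "update" multiplies every hypothesis's weight by some factor,
# so the normalized result is order-independent.

hypotheses = ["fast-simple", "slow-simple", "fast-complex", "slow-complex"]

universal = {"fast-simple": 0.50, "slow-simple": 0.25,
             "fast-complex": 0.15, "slow-complex": 0.10}

updates = {
    "speed":       {"fast-simple": 1.0, "slow-simple": 0.1,
                    "fast-complex": 1.0, "slow-complex": 0.1},
    "anthropic":   {"fast-simple": 0.3, "slow-simple": 0.9,
                    "fast-complex": 0.8, "slow-complex": 0.2},
    "can-do":      {"fast-simple": 0.7, "slow-simple": 0.5,
                    "fast-complex": 0.9, "slow-complex": 0.4},
    "worth-doing": {"fast-simple": 0.6, "slow-simple": 0.8,
                    "fast-complex": 0.2, "slow-complex": 0.9},
}

def apply_updates(prior, order):
    """Multiply each update's factors into the weights, then renormalize."""
    w = dict(prior)
    for name in order:
        for h in w:
            w[h] *= updates[name][h]
    total = sum(w.values())
    return {h: v / total for h, v in w.items()}

# "Speed prior" order vs. speed-update-last order: same final mixture.
a = apply_updates(universal, ["speed", "anthropic", "can-do", "worth-doing"])
b = apply_updates(universal, ["anthropic", "can-do", "worth-doing", "speed"])
assert all(abs(a[h] - b[h]) < 1e-12 for h in hypotheses)
```

Of course, this only shows commutativity under the assumption that each update really is a fixed per-measure reweighting; if an update’s factors depended on the weights produced by earlier updates, the orders could genuinely differ.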