if there’s a sufficiently large amount of sufficiently precise data, then the physically-correct model’s high accuracy is going to swamp the complexity penalty
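To make the "accuracy swamps the complexity penalty" claim concrete, here is a toy sketch (not from the original discussion; the Gaussian setup and the BIC-style penalty are my own illustrative assumptions). The log-likelihood advantage of the more accurate model grows roughly linearly in the number of data points, while the penalty for its extra parameter grows only logarithmically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: data comes from a Gaussian with a small nonzero mean.
# "Simple" model: mean fixed at 0 (one fewer parameter).
# "Accurate" model: mean is a free parameter (extra complexity).
true_mean, sigma = 0.3, 1.0

for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.normal(true_mean, sigma, size=n)
    mle_mean = x.mean()
    # Maximized log-likelihoods of each model (constants cancel in the difference).
    ll_simple = -0.5 * np.sum((x - 0.0) ** 2) / sigma**2
    ll_accurate = -0.5 * np.sum((x - mle_mean) ** 2) / sigma**2
    # BIC-style complexity penalty for the one extra parameter: (k/2) * log n.
    penalty = 0.5 * np.log(n)
    net = (ll_accurate - penalty) - ll_simple
    print(f"n={n:>7}: accuracy gain={ll_accurate - ll_simple:10.1f}, "
          f"penalty={penalty:5.1f}, net={net:10.1f}")
```

With only 10 points the penalty can still win, but by n = 100,000 the accuracy gain is thousands of nats while the penalty is under 6, so the accurate model dominates.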
I don’t think that’s necessarily true?
The Bernstein-Von Mises theorem. It is indeed not always true; the theorem has some conditions.
An intuitive example of where it would fail: suppose we are rolling a (possibly weighted) die, but we model it as drawing numbered balls from a box without replacement. If we roll a bunch of sixes, then the model thinks the box now contains fewer sixes, so the chance of a six is lower. If we modeled the weighted die correctly, then a bunch of sixes is evidence that it’s weighted toward six, so the chance of six should be higher.
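A minimal numeric sketch of that contrast (the specific numbers, ten balls per face in the box and a uniform Dirichlet prior over the weighted die's face probabilities, are made up for illustration):

```python
from fractions import Fraction

observed_sixes = 5          # suppose we just saw five sixes in a row
balls_per_face = 10         # the (wrong) box model starts with 10 balls per face

# Wrong model: each observed six removes a six from the box,
# so the predicted chance of another six goes down.
box_prob_six = Fraction(balls_per_face - observed_sixes,
                        6 * balls_per_face - observed_sixes)

# Weighted-die model: Dirichlet(1,...,1) prior, posterior predictive
# P(next = six) = (alpha_six + count_six) / (sum(alpha) + total_count),
# so the predicted chance of another six goes up.
die_prob_six = Fraction(1 + observed_sixes, 6 + observed_sixes)

print(f"box-without-replacement model: P(six) = {float(box_prob_six):.3f}")
print(f"weighted-die model:            P(six) = {float(die_prob_six):.3f}")
```

Both models start at 1/6, but after five sixes the box model's predicted chance of another six drops to about 0.09 while the weighted-die model's rises to about 0.55.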
Takeaway: Bernstein-Von Mises typically fails in cases where we’re restricting ourselves to a badly inaccurate model. You can look at the exact conditions yourself; as a general rule, we want those conditions to hold. I don’t think it’s a significant issue for my argument.
We could set up the IRL algorithm so that atom-level simulation is outside the space of models it considers. That would break my argument. But a limitation on the model space like that raises other issues, especially for FAI.