Exactly. We don’t. There are only real models, and logical descriptions of models. Some of those descriptions are of the form “our universe, but with tweak X”, which are “counterfactuals”. The problem is that when our brains do counterfactual modeling, it feels very similar to when we are just doing actual-world modeling. Hence the sensation that there is some actual world which is like the counterfactual-type model we are using.
My impression was that Eliezer went much farther than that, and claimed that in order to do counterfactual modeling at all, we’d have to create an entire counterfactual world, or else our models won’t make sense. This is different from saying, “our brains don’t work right, so we’ve got to watch out for that”.
I definitely didn’t understand him to be saying that. If that’s what he meant then I’d disagree.