All right, that was much clearer, thanks! But then, why do we care about a “counterfactual world” at all?
My impression was that Eliezer claimed that we need a counterfactual world in order to evaluate counterfactuals. But I argue that this is not true; for example, we could ask our model “what are my chances of getting cancer?” just as easily as “what are my chances of getting cancer if I stop smoking right now?”, and get useful answers back, without constructing any alternate realities. So why do we need to worry about a fully realized counterfactual universe?
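To make the contrast concrete, here is a minimal sketch (with made-up, hypothetical numbers) of asking one and the same model both questions. Nothing resembling an “alternate reality” gets built; the two queries are just two computations over the same conditional probabilities.

```python
# Toy model: hypothetical parameters, purely for illustration.
# P(keep smoking) and P(cancer | smoking status).
p_keep_smoking = 0.7
p_cancer_given = {"smoke": 0.20, "quit": 0.05}

# Factual query: "what are my chances of getting cancer?"
# Marginalize over whether I keep smoking.
p_cancer = (p_keep_smoking * p_cancer_given["smoke"]
            + (1 - p_keep_smoking) * p_cancer_given["quit"])

# Counterfactual-style query: "what if I stop smoking right now?"
# Same model, same parameters -- we simply fix the smoking variable
# instead of marginalizing over it.
p_cancer_if_quit = p_cancer_given["quit"]

print(f"P(cancer)                = {p_cancer:.3f}")
print(f"P(cancer | quit smoking) = {p_cancer_if_quit:.3f}")
```

Both answers come out of the one real model we already have; the “counterfactual” question differs only in which variables we hold fixed.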
Exactly. We don’t. There are only real models, and logical descriptions of models. Some of those descriptions are of the form “our universe, but with tweak X”, which are “counterfactuals”. The problem is that when our brains do counterfactual modeling, it feels very similar to when we are just doing actual-world modeling. Hence the sensation that there is some actual world which is like the counterfactual-type model we are using.
My impression was that Eliezer went much further than that, and claimed that in order to do counterfactual modeling at all, we’d have to create an entire counterfactual world, or else our models wouldn’t make sense. This is different from saying, “our brains don’t work right, so we’ve got to watch out for that”.
I definitely didn’t understand him to be saying that. If that’s what he meant, then I’d disagree.