Sorry, I still don’t think I understand your objection.
Let’s say that, instead of cancer insurance, our imaginary insurance company were selling assassination insurance. A politician would come to us; we’d feed what we know about him into our model; and we’d quote him a price based on the probability that he’d be assassinated.
Are you saying that such a feat cannot realistically be accomplished? If so, what’s the difference between this and cancer insurance? After all, “how likely is this guy to get killed” is also a “high-level question”, just like “how likely is this guy to get cancer”—isn’t it?
Yeah, we are definitely talking past each other.
Someone could realistically predict, with high confidence, whether or not you will be assassinated, using (perhaps much larger) versions of modern statistical models.
To do so, they would not need to construct anything so elaborate as a computation that constitutes a chunk of a full-blown causal universe. They could ignore quarks and such, and still be pretty accurate.
Such a model would not refer to a real thing, called a “counterfactual world”, which is a causal universe like ours but with some changes. Such a thing doesn’t exist anywhere.
...unless we make it exist by performing a computation with all the causality-structure of our universe, but which has tweaks according to what we are testing. This is what I meant by a more accurate model.
All right, that was much clearer, thanks! But then, why do we care about a “counterfactual world” at all?
My impression was that Eliezer claimed that we need a counterfactual world in order to evaluate counterfactuals. But I argue that this is not true; for example, we could ask our model “what are my chances of getting cancer?” just as easily as “what are my chances of getting cancer if I stop smoking right now?”, and get useful answers back—without constructing any alternate realities. So why do we need to worry about a fully-realized counterfactual universe?
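The difference between those two queries can be sketched with a toy model. This is a minimal illustration with invented numbers; the smoking and cancer probabilities below are assumptions made up for the sake of the example, not real statistics:

```python
# Toy model: one cause (smoking) influencing one outcome (cancer).
# All probabilities here are invented purely for illustration.

P_SMOKES = 0.3                    # P(person currently smokes)
P_CANCER_GIVEN = {True: 0.15,     # P(cancer | smokes)
                  False: 0.05}    # P(cancer | doesn't smoke)

def p_cancer_factual():
    """'What are my chances of getting cancer?'
    Answered by averaging over what the person actually does."""
    return (P_SMOKES * P_CANCER_GIVEN[True]
            + (1 - P_SMOKES) * P_CANCER_GIVEN[False])

def p_cancer_if_quit():
    """'...and if I stop smoking right now?'
    The same model, with the smoking variable clamped to False —
    a description of "our model, but with tweak X", not a separate
    fully-realized universe."""
    return P_CANCER_GIVEN[False]

print(p_cancer_factual())   # ≈ 0.08
print(p_cancer_if_quit())   # 0.05
```

Both answers come out of one model; the counterfactual query is just the same computation run with one input pinned to a different value.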
Exactly. We don’t. There are only real models, and logical descriptions of models. Some of those descriptions are of the form “our universe, but with tweak X”, which are “counterfactuals”. The problem is that when our brains do counterfactual modeling, it feels very similar to when we are just doing actual-world modeling. Hence the sensation that there is some actual world which is like the counterfactual-type model we are using.
My impression was that Eliezer went much further than that, and claimed that in order to do counterfactual modeling at all, we’d have to create an entire counterfactual world, or else our models won’t make sense. This is different from saying, “our brains don’t work right, so we’ve got to watch out for that”.
I definitely didn’t understand him to be saying that. If that’s what he meant, then I’d disagree.