As faul_sname said below, one way to settle the wager—and I mean an actual wager in our current world, where we don’t have access to Oracle AIs—would be to aggregate historical data about presidential assassinations in general, and assassination attempts on Kennedys in particular, and build a model out of them.
We could then say, “Ok, there’s an 82% chance that, in the absence of Oswald, someone would’ve tried to assassinate Kennedy, and there’s a 63% chance that this attempt would’ve succeeded, so there’s about a 52% chance that someone would’ve killed Kennedy after all, and thus you owe me about half of the prize money”.
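For concreteness, here is the settlement arithmetic as a minimal sketch in Python (the percentages are the hypothetical outputs of the model above, and the prize amount is made up):

```python
# Hypothetical outputs of the historical model described above -- not real estimates.
p_attempt = 0.82  # P(someone else attempts to assassinate Kennedy | no Oswald)
p_success = 0.63  # P(the attempt succeeds | an attempt is made)

p_killed_anyway = p_attempt * p_success
print(f"P(Kennedy killed anyway) = {p_killed_anyway:.2f}")  # 0.52

prize = 100.0  # made-up prize amount
print(f"You owe me {p_killed_anyway * prize:.2f} of the {prize:.2f} prize money")
```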
...which would be settling a wager about the causal model that you built. The closer your causal model comes to accurately reflecting the “counterfactual world” that it is supposed to refer or correspond to, the more it actually instantiates that world. (Except that by performing counterfactual surgery, you have inserted yourself into the causal mini-universe that you’ve built.) The “counterfactual” stops being counter, and starts being factual.
Thanks to this comment, something in my brain just made an audible ‘click’, and I understand this current sequence much better. Thank you.
How do you know how close it is? And what’s the difference between a counterfactual world and a model of it?
TL;DR: skip to the last sentence.
A counterfactual world doesn’t exist (I think?), whereas your model does. If your model is a full-blown Planck-scale-detailed simulation of a universe, then it is a physical thing which fits your logical description of a counterfactual world very well. E.g., if you make a perfect simulation of a universe with the same laws of physics as ours, but surgically alter it so that Oswald misses, then you have built an “accurate” model of that counterfactual—that is, one of the many models that satisfy the (quasi-)logical description, “Everything is the same except Oswald didn’t kill Kennedy”.
A model is closer to the counterfactual when the model better satisfies the conditions of the counterfactual. A statistical model of the sort we use today can be very effective in limited domains, but it is a million miles away from actually satisfying the conditions of a counterfactual universe. For example, consider Eliezer’s diagram for the “Oswald didn’t kill Kennedy” model. It uses the impressive, modern math of conditional probability—but it has five nodes. I would venture to guess that our universe has more than five nodes, so the model does not fit the description “a great big causal universe in all its glory, but where Oswald didn’t kill Kennedy”.
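To make the size contrast vivid, here is what a five-node causal model of that kind might look like as a sampling computation. The node names and probabilities are my own invention for illustration, not taken from Eliezer’s actual diagram:

```python
import random

def kennedy_model(oswald_shoots=None):
    """One sample from a toy five-node causal model. Passing
    oswald_shoots=False is the counterfactual surgery: that node is
    clamped instead of being sampled from its normal mechanism."""
    conspiracy = random.random() < 0.10                    # node 1
    if oswald_shoots is None:
        oswald_shoots = random.random() < 0.95             # node 2
    second_shooter = conspiracy and random.random() < 0.80 # node 3
    oswald_hits = oswald_shoots and random.random() < 0.90 # node 4
    kennedy_dies = oswald_hits or (second_shooter and random.random() < 0.50)  # node 5
    return kennedy_dies

n = 100_000
p = sum(kennedy_model(oswald_shoots=False) for _ in range(n)) / n
print(f"P(Kennedy dies | do(Oswald doesn't shoot)) ~ {p:.3f}")
```

Five nodes and a handful of bits of state per sample: useful for the high-level question, but nothing remotely like a universe.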
More realistically:

We collect some medical data from the person [who wants to buy cancer insurance from us], feed it into our statistical model (which has been trained on a large number of past cases), and it tells us, “there’s a 52% chance this person will develop cancer in the next 20 years”. Now we can quote him a reasonable price.
Our model might have millions of “neurons” in a net, or millions of nodes in a PGM, or millions of feature parameters for regression… but that is nowhere near the complexity contained in 0.1% of one millionth of the pinky toe of the person we are supposedly modelling. It works out nicely for us because we only want to ask our model a few high-level questions, and because we snuck in a whole bunch of computation, e.g., when we used our visual cortex to read the instrument that measures the patient’s blood pressure. But our model is not accurate in an absolute sense.
This last example is a model of another physical system. The Oswald example is supposed to model a counterfactual. Or actually, to put it better: a model doesn’t describe a counterfactual, a counterfactual describes a model.
Sorry, I still don’t think I understand your objection.
Let’s say that, instead of cancer insurance, our imaginary insurance company was selling assassination insurance. A politician would come to us; we’d feed what we know about him into our model; and we’d quote him a price based on the probability that he’d be assassinated.
Are you saying that such a feat cannot realistically be accomplished? If so, what’s the difference between this and cancer insurance? After all, “how likely is this guy to get killed” is also a “high-level question”, just as “how likely is this guy to get cancer”—isn’t it?
Yeah, we are definitely talking past each other.
Someone could realistically predict whether or not you will be assassinated, with high confidence, using (perhaps much larger) versions of modern statistical computations.
To do so, they would not need to construct anything so elaborate as a computation that constitutes a chunk of a full-blown causal universe. They could ignore quarks and such, and still be pretty accurate.
Such a model would not refer to a real thing, called a “counterfactual world”, which is a causal universe like ours but with some changes. Such a thing doesn’t exist anywhere.
...unless we make it exist by performing a computation that has all the causal structure of our universe, but with tweaks according to what we are testing. This is what I meant by a more accurate model.
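In miniature, that pattern might look like the following sketch. It is a toy “universe” with made-up dynamics; the tweaks argument plays the role of the surgical alteration:

```python
def step(state, tweaks=None):
    """Advance a toy two-variable 'universe' one tick. The same laws run
    either way; the tweaks dict is the surgical alteration, applied first."""
    state = {**state, **(tweaks or {})}  # counterfactual surgery
    return {"x": state["x"] + state["v"], "v": state["v"]}  # the 'laws'

factual = step({"x": 0.0, "v": 1.0})
counterfactual = step({"x": 0.0, "v": 1.0}, tweaks={"v": -1.0})
print(factual, counterfactual)  # the tweaked run now exists as a computation
```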
All right, that was much clearer, thanks! But then, why do we care about a “counterfactual world” at all?
My impression was that Eliezer claimed that we need a counterfactual world in order to evaluate counterfactuals. But I argue that this is not true; for example, we could ask our model “what are my chances of getting cancer?” just as easily as “what are my chances of getting cancer if I stop smoking right now?”, and get useful answers back—without constructing any alternate realities. So why do we need to worry about a fully-realized counterfactual universe?
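As a minimal sketch of how both questions can be put to the same statistical model (the functional form and coefficients below are invented for illustration, not a real actuarial model):

```python
import math

def p_cancer(age, smoker):
    """A toy fitted risk model; a real insurer would estimate the
    coefficients from a large number of past cases."""
    logit = -6.0 + 0.05 * age + 1.2 * smoker
    return 1 / (1 + math.exp(-logit))

# Two questions, one model, no alternate reality constructed.
# (Zeroing the smoker feature is a crude stand-in for "stop smoking now";
# a real model would also have to account for past exposure.)
print(f"Chances as things stand:   {p_cancer(age=55, smoker=1):.1%}")
print(f"Chances if I stop smoking: {p_cancer(age=55, smoker=0):.1%}")
```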
Exactly. We don’t. There are only real models, and logical descriptions of models. Some of those descriptions are of the form “our universe, but with tweak X”, which are “counterfactuals”. The problem is that when our brains do counterfactual modeling, it feels very similar to when we are just doing actual-world modeling. Hence the sensation that there is some actual world which is like the counterfactual-type model we are using.
My impression was that Eliezer went much farther than that, and claimed that in order to do counterfactual modeling at all, we’d have to create an entire counterfactual world, or else our models won’t make sense. This is different from saying, “our brains don’t work right, so we’ve got to watch out for that”.
I definitely didn’t understand him to be saying that. If that’s what he meant then I’d disagree.
The closer your causal model comes to accurately reflecting the “counterfactual world” that it is supposed to refer or correspond to...

I’m not sure I understand this statement. Forget Oswald for a moment, and let’s imagine we’re working at an insurance company. A person comes to us and says, “sell me some cancer insurance”. This person does not currently have cancer, but there’s a chance that he could develop cancer in the future (let’s pretend there’s only one type of cancer in the world, just for simplicity). We collect some medical data from the person, feed it into our statistical model (which has been trained on a large number of past cases), and it tells us, “there’s a 52% chance this person will develop cancer in the next 20 years”. Now we can quote him a reasonable price.

How is this situation different from the “killing Kennedy” scenario? We are still talking about a counterfactual, since Kennedy is alive and our applicant is cancer-free.
See my reply above, specifically the last paragraph.