I’m confused… What you call the “Pure Reality” view seems to work just fine, no? (I think you had a different name for it, pure counterfactuals or something.) What do you need counterfactuals/Augmented Reality for? Presumably making decisions thanks to “having a choice” in this framework, right? In the pure reality framework, for the “student and the test” example, one would dispassionately calculate what kind of student algorithm passes the test, without talking about making a decision to study or not to study. Same with Newcomb’s, of course: one just looks at what kind of agents end up with a given payoff. So… why pick an AR view over the PR view, what’s the benefit?
Excellent question. Maybe I haven’t framed this well enough.
We need a way of talking about the fact that both your outcome and your action are fixed by the past.
We also need a way of talking about the fact that we can augment the world with counterfactuals (Of course, since we don’t have complete knowledge of the world, we typically won’t know which is the factual and which are the counterfactuals).
And that these are two distinct ways of looking at the world.
I’ll try to think about a cleaner way of framing this, but do you have any suggestions?
(For the record, the term I used before was Raw Counterfactuals—meaning consistent counterfactuals—and that’s a different concept than looking at the world in a particular way).
(Something that might help is that if we are looking at multiple possible pure realities, then we’ve already introduced counterfactuals, since only one of them is true and “possible” is determined by the map rather than the territory.)
I think the best way to explain this is to characterise the two views as slightly different functions, both of which return sets. Of course, the exact type representations aren’t the point; the types are just there to illustrate the difference between two slightly different concepts.
possible_world_pure() returns {x} where x is either <study & pass> or <beach & fail>, but we don’t know which one it will be
possible_world_augmented() returns {<study & pass>, <beach & fail>}
Once we’ve defined possible worlds, this naturally provides a definition of possible actions and possible outcomes that matches what we expect. So, for example:
size(possible_world_pure()) = size(possible_action_pure()) = size(possible_outcome_pure()) = 1
size(possible_world_augmented()) = size(possible_action_augmented()) = size(possible_outcome_augmented()) = 2
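Here’s a minimal sketch of what I mean, in Python. The (action, outcome) tuples, the helper functions, and the arbitrary assumption that the pure view happens to contain the studying world are all just illustrative choices, not part of the framework itself:

```python
PASS_WORLD = ("study", "PASS")
FAIL_WORLD = ("beach", "FAIL")

def possible_world_pure():
    # In reality exactly one world obtains, fixed by the past; which one it is
    # is unknown to us, so this sketch arbitrarily assumes the studying world.
    return {PASS_WORLD}

def possible_world_augmented():
    # The augmented view keeps both the factual and the counterfactual world.
    return {PASS_WORLD, FAIL_WORLD}

def possible_action(worlds):
    return {action for action, _ in worlds}

def possible_outcome(worlds):
    return {outcome for _, outcome in worlds}

assert len(possible_world_pure()) == len(possible_action(possible_world_pure())) \
    == len(possible_outcome(possible_world_pure())) == 1
assert len(possible_world_augmented()) == len(possible_action(possible_world_augmented())) \
    == len(possible_outcome(possible_world_augmented())) == 2
```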
And if we have a decide function that iterates over all the counterfactuals in the set and returns the one with the highest value, we need to call it on possible_world_augmented() rather than possible_world_pure().
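For instance, reusing the definitions from the sketch above, plus a toy value for each outcome that I’m just assuming:

```python
VALUE = {"PASS": 1, "FAIL": 0}   # assumed toy values, not part of the framework

def decide(counterfactual_worlds):
    # Iterate over all the counterfactuals in the set and return the best one.
    return max(counterfactual_worlds, key=lambda world: VALUE[world[1]])

decide(possible_world_augmented())   # ("study", "PASS"): compares both options
decide(possible_world_pure())        # degenerate: a "max" over a single element
```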
Note that they aren’t always this similar. For example, for Transparent Newcomb they are:
possible_world_pure() returns {<1-box, million>}
possible_world_augmented() returns {<1-box, million>, <2-box, thousand>}
The point is that if we remain conscious of the type differences then we can avoid certain errors.
For example, possible_outcome_pure() = {“PASS”} doesn’t mean that possible_outcome_augmented() = {“PASS”}. It’s the latter that would imply it doesn’t matter what the student does, not the former.
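One purely illustrative way to stay conscious of the type difference in code is to wrap the two views in distinct types, so that a claim like “it doesn’t matter what the student does” can only be asked of the augmented view:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PureOutcomes:
    outcomes: frozenset

@dataclass(frozen=True)
class AugmentedOutcomes:
    outcomes: frozenset

def effort_is_irrelevant(view: AugmentedOutcomes) -> bool:
    # "It doesn't matter what the student does" is a claim about the augmented
    # view: every counterfactual leads to the same outcome.
    return len(view.outcomes) == 1

pure = PureOutcomes(frozenset({"PASS"}))
augmented = AugmentedOutcomes(frozenset({"PASS", "FAIL"}))

effort_is_irrelevant(augmented)   # False: study and beach lead to different outcomes
# effort_is_irrelevant(pure)      # flagged by a type checker: wrong kind of view
```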
Hmm, it sort of makes sense, but possible_world_augmented() returns not just a set of worlds, but a set of pairs (world, probability). For example, for Transparent Newcomb, possible_world_augmented() returns {(<1-box, million>, 1), (<2-box, thousand>, 0)}. And that’s enough to calculate EV and conclude which “decision” (i.e. possible_world_augmented() given decision X) results in max EV. Come to think of it, if you tabulate this, you end up with what I talked about in that post.
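Concretely, the tabulation might look something like this sketch (the function names and numeric payoffs are just mine, for illustration):

```python
PAYOFF = {"million": 1_000_000, "thousand": 1_000}   # assumed numeric payoffs

def possible_world_augmented_with_probs():
    # Transparent Newcomb with a perfect predictor, as in the example above.
    return {(("1-box", "million"), 1.0), (("2-box", "thousand"), 0.0)}

def expected_values(worlds_with_probs):
    # EV of a "decision" = probability-weighted payoff of the worlds
    # in which that decision is taken.
    ev = {}
    for (action, outcome), prob in worlds_with_probs:
        ev[action] = ev.get(action, 0.0) + prob * PAYOFF[outcome]
    return ev

def decide_max_ev(worlds_with_probs):
    ev = expected_values(worlds_with_probs)
    return max(ev, key=ev.get)

decide_max_ev(possible_world_augmented_with_probs())   # "1-box"
```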