Nate says: “You may have a scenario in mind that I overlooked (and I’d be interested to hear about it if so), but I’m not currently aware of a situation where the 1.1 patch is needed that doesn’t involve some sort of multi-agent coordination. I’ll note that a lot of the work that I (and various others) used to think was done by policy selection is in fact done by not-updating-on-your-observations instead. (E.g., FDT agents refuse blackmail because of the effects this has in the world where they weren’t blackmailed, despite how their observations say that that world is impossible.)”
Say there’s some logical random variable O you’re going to learn, which is either 0 or 1, with a prior 50% probability of being 1. After learning the value of this variable, you take action 0 or 1. Some predictor doesn’t know the value of this variable, but does know your source code. This predictor predicts p0 = P(you take action 1 | O = 0) and p1 = P(you take action 1 | O = 1). Your utility depends only on these predictions; specifically, it is p0 − 100(p0 − p1)^2.
This is a continuous coordination problem, and CDT-like graph intervention isn’t guaranteed to solve it, while policy selection is.
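Here is a minimal sketch of the payoff landscape under the natural reading of the setup above (the shorthands p0 and p1 and the grid search are mine, not part of the original comment): maximizing over whole policies at once lands on p0 = p1 = 1, i.e. take action 1 regardless of O, which is exactly the kind of cross-observation agreement the quadratic penalty rewards.

```python
import numpy as np

def utility(p0, p1):
    """Payoff as a function of the predictor's two conditional predictions:
    p0 = P(take action 1 | O = 0), p1 = P(take action 1 | O = 1)."""
    return p0 - 100 * (p0 - p1) ** 2

# Policy selection (roughly): choose the pair (p0, p1) up front, before seeing O.
grid = np.linspace(0.0, 1.0, 101)
P0, P1 = np.meshgrid(grid, grid, indexing="ij")
U = utility(P0, P1)
i, j = np.unravel_index(U.argmax(), U.shape)
print(f"best policy: p0={grid[i]:.2f}, p1={grid[j]:.2f}, utility={U[i, j]:.2f}")
# -> best policy: p0=1.00, p1=1.00, utility=1.00
# Any gap between p0 and p1 is punished 100x quadratically, so the two
# observation-branches have to agree on (roughly) the same action distribution.
```

This only illustrates the shape of the payoff; the claim that CDT-like graph intervention isn’t guaranteed to find this point is the substantive one made in the comment above, not something the sketch demonstrates.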
Cool. I hadn’t thought to frame those problems in predictor terms, and I agree now that “only matters in multi-agent dilemmas” is incorrect.
That said, it still seems to me like policy selection only matters in situations where, conceptually, winning requires something like multiple agents who run the same decision algorithm meeting and doing a bit of logically-prior coordination. Something like this seems to separate transparent Newcomb’s problem (where policy selection is not necessary) from the more coordination-shaped cases. The way the problems are classified in my head still involves me asking myself: “well, do I need to get together and coordinate with all of the instances of me that appear in the problem logically-beforehand, or can we each individually wing it once we see our observations?”
If anyone has examples where this classification is broken, I remain curious to hear them. Or, similar question: is there any disagreement on the weakened claim, “policy selection only matters in situations that can be transformed into multi-agent problems, where a problem is said to be ‘multi-agent’ if the winning strategy requires the agents to coordinate logically-before making their observations”?
Nate says: “but I’m not currently aware of a situation where the 1.1 patch is needed that doesn’t involve some sort of multi-agent coordination”
I think the 1.1 patch is needed to solve problems with coordination/amnesia/prediction, and moreover these are all the same set of problems.
Coordination: two people wake up in rooms painted different colors (red and blue). Each is asked to choose a button (A or B). If they choose different buttons, both get $100. One possible winning strategy is red->A, blue->B.
Amnesia: on two consecutive days, you wake up with amnesia in rooms painted different colors and need to choose a button. If you choose different buttons on the two days, you get $100. The winning strategy is the same as above.
Prediction: you wake up in a room painted either red or blue and are asked to choose a button. At the same time, a predictor predicts what you would do if the room color were different. If that predicted button differs from the one you actually choose, you get $100. The winning strategy is the same as above.
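Here is a minimal sketch of the claimed equivalence (the framing as a color-to-button map is mine): in all three versions a policy is just a function from room color to button, and the payoff depends only on whether the two colors map to different buttons, so the same policy wins in each framing.

```python
from itertools import product

COLORS = ["red", "blue"]   # the two copies of the decision
BUTTONS = ["A", "B"]

def payoff(policy):
    """$100 iff the red-room copy and the blue-room copy press different buttons.
    The same payoff covers all three framings: two coordinating agents,
    one amnesiac agent on two days, or one agent plus a counterfactual prediction."""
    return 100 if policy["red"] != policy["blue"] else 0

# Enumerate every deterministic policy (map from color to button).
for buttons in product(BUTTONS, repeat=len(COLORS)):
    policy = dict(zip(COLORS, buttons))
    print(policy, "->", payoff(policy))
# Winning policies are exactly the color-sensitive ones,
# e.g. red -> A, blue -> B as in the text; any color-blind policy gets $0.
```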
Nate: [EDIT: retracted]