Re ETA: Nesov said that particle physics is this way because you only care about the worlds where it is this way. Just like your explanation of probabilities. :-)
A correction (though I mixed that up in comments too): what we anticipate is not necessarily linked to what we care about. Particle physics is this way because we anticipate worlds in which it’s this way, but we may well care about other worlds in which it isn’t.
Anticipation is about what we can control (as evolution saw the possibility, based on the past in the same world), not what we want to happen. Since evolution is causal, we don’t anticipate acausal control, but we can care about acausal control.
The useful conclusion seems to be that the concept of anticipation (and hence, of reality/particle physics) is not fundamental in the decision-theoretic sense; it’s more like the concept of hunger: something we can feel and have accurate theories about, but that doesn’t answer questions about the nature of goodness.
Don’t know about you, but I anticipate acausal control, to a degree. I have a draft post titled “Taking UDT Seriously” featuring such shining examples as: if a bully attacks you, you should try to do maximum damage while disregarding any harm to yourself, because it’s good for you to be predicted as such a person. UDT is seriously scary when applied to daily life, even without superintelligences.
I don’t think UDT (or a variant of UDT that applies to humans, which nobody has really formulated yet, because the original UDT assumed that one has access to one’s source code) implies this. The difference between P(bully predicts me as causing a lot of damage | I try to cause maximum damage) and P(bully predicts me as causing a lot of damage | I don’t try to cause maximum damage) seems quite small, because the bully can’t see or predict my source code and also can’t do a very good job of simulating or predicting my decisions. Meanwhile, the negative consequences of trying to cause maximum damage seem quite high if the bully fails to be preemptively dissuaded (e.g., being arrested, sued, disciplined, or retaliated against).
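To make the expected-value comparison concrete, here is a minimal sketch in Python. All probabilities and utilities are made-up illustrative assumptions (none of these numbers come from the original comment); the point is only that a small gap in prediction probabilities cannot offset a much worse downside.

```python
# Illustrative expected-utility comparison for the bully example.
# Every number below is an assumption chosen for illustration only.

# Probability the bully predicts you as a maximum-damage retaliator,
# conditional on your actual policy. The gap is small because humans
# can't read each other's source code or simulate each other well.
p_predicted_if_retaliate = 0.15
p_predicted_if_passive = 0.10

# Outcome utilities on an arbitrary scale.
u_deterred = 0.0              # bully is dissuaded, nothing happens
u_attacked_passive = -10.0    # attacked, you don't escalate
u_attacked_escalate = -100.0  # attacked, you escalate (injury, arrest, lawsuits)

def expected_utility(p_predicted: float, u_if_attacked: float) -> float:
    """EU of a policy: deterred with probability p_predicted,
    otherwise the attack happens and yields u_if_attacked."""
    return p_predicted * u_deterred + (1 - p_predicted) * u_if_attacked

eu_retaliate = expected_utility(p_predicted_if_retaliate, u_attacked_escalate)
eu_passive = expected_utility(p_predicted_if_passive, u_attacked_passive)

print(f"EU(maximum-damage policy) = {eu_retaliate:.1f}")  # -85.0
print(f"EU(passive policy)        = {eu_passive:.1f}")    # -9.0
```

Under these assumed numbers, the slightly better deterrence from being predicted as a retaliator is nowhere near enough to offset the far worse outcome when deterrence fails, which is the substance of the objection.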
(Not sure if you still endorse this comment, 9 years later, but I sometimes see what I consider to be overly enthusiastic applications of UDT, and as the person most associated with UDT I feel an obligation to push against that.)
Can you post this in the discussion area?
You seem to be mixing up ambient control within a single possible world with assignment of probability measure to the set of possible worlds (which anticipation is all about). You control the bully by being expected (credibly threatening) to retaliate within a single possible world. Acausal control is about controlling one possible world from another, while ambient (logical) control is about deciding the way your possible world will turn out (what you discussed in the recent posts).
More generally, logical control can be used to determine an arbitrary concept, including that of utility of all possible worlds considered together, or of all mathematical structures. Acausal control is just a specific way in which logical control can happen.
Yep. I can’t seem to memorize the correct use of our new terminology (acausal/ambient/logical/etc), so I just use “acausal” as an informal umbrella term for all kinds of winning behavior that don’t seem to be recommended by CDT from the agent’s narrow point of view. Like one-boxing in Newcomb’s Problem, or being ready to fight in order to release yet-undiscovered pheromones or something.
“Correct” is too strong a descriptor, it’s mostly just me pushing standardization of terminology, based on how it seems to have been used in the past.