As I understand it, expanding candy into A and B but not expanding the other outcome will make the ratios come out differently.
In probability one can make the assumption of equiprobability: if you have no reason to think one outcome is more likely than another, it might be reasonable to assume they are equally likely.
If we knew what was important and what was not, we would be sure about the optimality. But since we think we don’t know it, or might be in error about it, we are treating the value as if it could be hiding anywhere. That seems to work in a world where each node is roughly comparably likely to contain value. I guess it comes from the relevant utility functions being defined in terms of states we know about.
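For concreteness, here is a minimal sketch of the ratio point from the first paragraph, under the equiprobability assumption (the leaf names and the uniform sampling over [0, 1] are just my toy choices, not anything from the post):

```python
import random

# Toy tree: action "chocolate" leads to a single leaf, while action "candy"
# has been expanded into two leaves, candy_A and candy_B. Leaf utilities are
# sampled i.i.d. uniform on [0, 1] -- the equiprobability assumption above.
def optimal_action() -> str:
    chocolate = random.random()
    candy_a = random.random()
    candy_b = random.random()
    return "candy" if max(candy_a, candy_b) > chocolate else "chocolate"

trials = 100_000
candy_share = sum(optimal_action() == "candy" for _ in range(trials)) / trials
# Before expanding, candy and chocolate each come out optimal for about 1/2 of
# sampled utilities; after expanding candy into A and B, candy wins about 2/3.
print(candy_share)
```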
As I understand it, expanding candy into A and B but not expanding the other outcome will make the ratios come out differently.
What do you mean?
If we knew what was important and what was not, we would be sure about the optimality. But since we think we don’t know it, or might be in error about it, we are treating the value as if it could be hiding anywhere.
I’m not currently trying to make claims about what variants we’ll actually be likely to specify, if that’s what you mean. Just that in the reasonably broad set of situations covered by my theorems, the vast majority of variants of every objective function will make power-seeking optimal.
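A rough sketch of the counting behind that claim, using the toy candy/chocolate tree from above rather than the formal setup of the theorems (the utility values are arbitrary placeholders):

```python
from itertools import permutations

# Counting sketch over a fixed toy tree: "candy" reaches two leaves
# (candy_A, candy_B), "chocolate" reaches one. Take any utility assignment
# with distinct values over the leaves and look at all of its permuted variants.
base_utility = (0.9, 0.2, 0.5)  # (candy_A, candy_B, chocolate): arbitrary distinct values

candy_optimal = sum(
    max(perm[0], perm[1]) > perm[2]  # candy's best leaf beats chocolate's leaf
    for perm in permutations(base_utility)
)
# 4 of the 6 permuted variants make the action with more reachable leaves optimal.
print(candy_optimal, "of 6")
```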
In “Invariances”, picture 1 doesn’t have any letter outcomes. In picture 2 there are outcomes a, b, c, d, e, f. However, if one had a and b but not c, d, e, f (and instead just bar and hug), then the tree would look symmetrical. It feels like the argument assumes that, whatever level of detail is possible, the detail is approximately equal across the modeled universe. It would seem that if one has a more detailed (“gears-level”) model of one part and a more approximate (“here be dragons”) kind of model of another, the importance of the understood part will overwhelm everything else.
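A rough numerical sketch of that worry, under the same toy uniform-utility assumptions as above (the branch counts are made up for illustration):

```python
import random

# One branch is modeled at a fine "gears-level" of detail (k leaves); the
# other is a single coarse "here be dragons" leaf. Leaf utilities are again
# sampled i.i.d. uniform on [0, 1].
def detailed_branch_share(k: int, trials: int = 100_000) -> float:
    wins = sum(
        max(random.random() for _ in range(k)) > random.random()
        for _ in range(trials)
    )
    return wins / trials

# The more finely a branch is modeled, the more often it looks optimal:
# roughly k / (k + 1) of the sampled utilities favor it.
for k in (1, 2, 6, 20):
    print(k, detailed_branch_share(k))
```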
Yeah, that’s right. (That is why I called it the “start” of a theory on invariances!)
I think that’s an interesting frame which I’ll return to when I think more about agents planning over an imperfect world model.