If you’re OK with time-inconsistent probabilities, then you can be Dutch-booked.
Sure. Has some part of what I’ve written given the impression that I think time-inconsistent probabilities (or preferences) are OK?
I think of identity in terms of expectations. [...]
I want to give a thumbs-up to the policy of sharing ways-of-thinking-about-stuff. (Though I think I can see how that particular way of thinking about this stuff is probably confused; I’m still suggesting Tabooing “I”, “me”, “you”, “[me] waking up in …”, etc.) Thanks.
It’s not clear how the utility function in your first case says to accept the bet given that [...]
True, that part of what I wrote glossed over a lot of details (which may well be hiding confusion on my part). To quickly unpack it a bit:
In the given scenario, each agent cares about all similar agents.
Pretending to be a Solomonoff inductor, and updating on all available information/observations—without mapping low-level observations into confused nonsense like “I/me is observing X”—an agent in a green room ends up with p(coin=1) = 0.5.
The agent’s model of reality includes a model of {the agent itself, minus the agent’s model of itself (to avoid infinite recursion)}.
Looking at that model from a bird’s-eye view, the agent searches for an action a that would maximize ∑_{w ∈ W} [utility received by xeroxed agents in the version of w where this agent outputs a], where W is the set of “possible” worlds (i.e., W is the set of worlds consistent with what has been observed thus far). (We’re not bothering to weight the summed terms by p(w) because here all w are equiprobable.)
According to the agent’s model, all in-room agents are running the same decision algorithm, and thus all agents observing the same color output the same decision. This constrains what W can contain. In particular, it contains only worlds w in which, if this agent outputs a, then all other agents in rooms of the same color also output a.
The agent’s available actions are “accept bet” and “decline bet”. When the agent considers the worlds where it (and thus every other agent-in-green) outputs “accept bet”, it calculates the total utility gained by xeroxed agents to be higher than in the worlds where it outputs “decline bet”. (A concrete sketch of this calculation follows the list.)
The agent outputs “accept bet”.
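As a very rough illustration of the procedure in the list above, here is a minimal sketch. The copy counts and stakes are placeholders I’ve made up (the actual bet in question is elided in this thread); only the shape of the calculation is meant to carry over.

```python
# Minimal sketch of the bird's-eye-view calculation described in the list
# above. The copy counts and stakes are placeholders (not taken from the
# thread); only the shape of the procedure is the point.

# Worlds consistent with "this agent is in a green room": each fixes the
# coin and hence how many copies woke up in green rooms. All worlds are
# treated as equiprobable, so no p(w) weighting is needed.
WORLDS = [
    {"coin": 1, "greens": 18},
    {"coin": 2, "greens": 2},
]

def payoff_per_green(world, action):
    """Assumed payoff to each green-room copy taking `action` (made up)."""
    if action == "decline bet":
        return 0
    return 1 if world["coin"] == 1 else -3

def total_utility(action):
    # Constraint from the agent's model: every green-room copy runs the
    # same decision algorithm, so in each world all of them take `action`.
    return sum(w["greens"] * payoff_per_green(w, action) for w in WORLDS)

for action in ("accept bet", "decline bet"):
    print(action, total_utility(action))
# accept bet 12
# decline bet 0
# The agent outputs whichever action has the higher total: "accept bet".
```

Note how the coin=1 world gets weighted by 18 copies and the coin=2 world by only 2; with these placeholder numbers that weighting is roughly what gets described downthread as acting “as if” the probability were 90%, even though the agent’s own p(coin=1) stays at 0.5.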
If the above is not “maximizing utility”, then I’m confused about what (you mean by) “maximizing utility”. Did this clarify anything?
My version of the bet shouldn’t depend on whether you care about other agents, because the bet doesn’t affect other agents.
It’s true that (if the rooms are appropriately sealed off from each other) the blobs-of-atoms in different rooms cannot causally affect each other. But given knowledge that all agents are exact copies of each other, the set of “possible” worlds is constrained to contain only {worlds where all agents (in rooms of the same color) output the same decision}. (I’m thinking very loosely in terms of something like Solomonoff induction here.) Thus it seems to me that {operating/deciding as if agents in other rooms “could” decide something different from each other} is like operating with the wrong set of “possible” worlds; i.e. like doing something wrong relative to Solomonoff induction, and/or having an incorrect model of reality.
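To make that claim about the constrained set of “possible” worlds concrete, here is a tiny illustrative sketch (the copy count is a placeholder, not from the thread): candidate world-descriptions that assign different decisions to exact copies running the same algorithm on the same observation get excluded from W.

```python
# Illustrative sketch of the constraint above: candidate worlds that give
# exact copies (same algorithm, same observation) different outputs are
# not in W. The number of copies here is a placeholder.
from itertools import product

N_GREEN_COPIES = 3

candidates = product(["accept bet", "decline bet"], repeat=N_GREEN_COPIES)
W = [w for w in candidates if len(set(w)) == 1]
print(W)
# [('accept bet', 'accept bet', 'accept bet'),
#  ('decline bet', 'decline bet', 'decline bet')]
```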
Maybe: try Tabooing the word “affect”?
I’ve spent a lot of time and written a handful of posts (including one on the interaction between Solomonoff and SIA) building my ontology. Parts may be mistaken but I don’t believe it’s “confused”. Tabooing core concepts will just make it more tedious to explain, probably with no real benefit.
In particular, the only actual observations anyone has are of the form “I have observed X”, and that needs to be the input to Solomonoff induction. You can’t input a bird’s-eye view, because you don’t have one.
Anyway, it seems weird that being altruistic affects the agent’s decision about a purely local bet. You end up with the same answer as I do on that question, acting “as if” the probability were 90%, but in a convoluted manner.
Maybe you should Taboo “probability”. What does it mean to say that the probability is 50%, if not that you’ll accept purely local bets at better odds and decline them at worse odds? The only purpose of probability in my ontology is to make predictions for betting purposes (or for decision-making purposes that map onto betting). Maybe it is your notion of probability that is confused.
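A minimal sketch of that betting reading of probability (the stakes below are placeholders; the +1 / −3 bet is only meant to echo the “as if 90%” contrast mentioned above):

```python
# Sketch of "probability as willingness to take purely local bets":
# a stated probability p fixes which local bets have positive expected
# value, and you accept exactly those. Stakes below are placeholders.

def expected_value(p, win, loss):
    """EV per unit stake of a local bet paying +win with probability p
    and -loss otherwise."""
    return p * win - (1 - p) * loss

def accepts(p, win, loss):
    return expected_value(p, win, loss) > 0

# With p = 0.5 you take better-than-even-odds bets and refuse worse ones:
print(accepts(0.5, win=1.1, loss=1.0))  # True  (EV = +0.05)
print(accepts(0.5, win=0.9, loss=1.0))  # False (EV = -0.05)

# A +1 / -3 local bet separates the two stances discussed above:
print(accepts(0.9, win=1.0, loss=3.0))  # True  (EV = +0.6)
print(accepts(0.5, win=1.0, loss=3.0))  # False (EV = -1.0)
```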
Thanks for the suggestions. Clearly there’s still a lot of potentially fruitful disagreement here, some of it possibly mineable for insights, but I’m going to put this stuff on the shelf for now. Anyway, thanks.