I take the ‘lots of random nodes’ possibility to be addressed by this point:
And this point generalises to arbitrarily complex/realistic decision trees, with more choice-nodes, more chance-nodes, and more options. Agents with a model of future trades can use their model to predict what they’d do conditional on reaching each possible choice-node, and then use those predictions to determine the nature of the options available to them at earlier choice-nodes. The agent’s model might be defective in various ways (e.g. by getting some probabilities wrong, or by failing to predict that some sequences of trades will be available) but that won’t spur the agent to change its preferences, because the dilemma from my previous comment recurs: if the agent is aware that some lottery is available, it won’t choose any dispreferred lottery; if the agent is unaware that some lottery is available and chooses a dispreferred lottery, the agent’s lack of awareness means it won’t be spurred by this fact to change its preferences. To get over this dilemma, you still need the ‘non-myopic optimiser deciding the preferences of a myopic agent’ setting, and my previous points apply: results from that setting don’t vindicate coherence arguments, and we humans as non-myopic optimisers could decide to create artificial agents with incomplete preferences.
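For concreteness, here is a minimal sketch of the procedure that paragraph describes: backward induction over a tree of choice-nodes and chance-nodes, where the agent's prediction of its own later choices fixes what is really on offer at earlier nodes. Everything here (the `Terminal`/`Chance`/`Choice` node types, the `Lottery` representation, the `strictly_prefers` relation, and the maximality rule at choice-nodes) is my own illustrative choice, not anything from the quoted comment; it is just one way to make the idea concrete, and it deliberately allows the preference relation to be partial (incomplete).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple, Union

# A lottery is a probability distribution over final outcomes.
Lottery = Dict[str, float]

@dataclass
class Terminal:
    outcome: str

@dataclass
class Chance:
    branches: List[Tuple[float, "Node"]]  # (probability, subtree)

@dataclass
class Choice:
    options: List["Node"]

Node = Union[Terminal, Chance, Choice]

def mix(branches: List[Tuple[float, Lottery]]) -> Lottery:
    """Combine sub-lotteries at a chance-node into a single lottery."""
    out: Lottery = {}
    for p, lot in branches:
        for outcome, q in lot.items():
            out[outcome] = out.get(outcome, 0.0) + p * q
    return out

def resolve(node: Node, strictly_prefers: Callable[[Lottery, Lottery], bool]) -> Lottery:
    """Backward induction: predict the agent's choice at each later node,
    then treat that prediction as the lottery actually on offer earlier.
    `strictly_prefers` may be a partial relation (incomplete preferences)."""
    if isinstance(node, Terminal):
        return {node.outcome: 1.0}
    if isinstance(node, Chance):
        return mix([(p, resolve(child, strictly_prefers)) for p, child in node.branches])
    # Choice-node: take any maximal option, i.e. one that no other option
    # on offer is strictly preferred to.
    lots = [resolve(opt, strictly_prefers) for opt in node.options]
    for lot in lots:
        if not any(strictly_prefers(other, lot) for other in lots):
            return lot
    return lots[0]  # fallback; unreachable if the relation is acyclic
```

The maximality rule at choice-nodes is one natural way to accommodate incompleteness: when two options are incomparable, either may be chosen; with a complete relation it reduces to picking a most-preferred option.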
Can you explain why you think that doesn’t work?
To elaborate a little more, introducing random nodes allows for the possibility that the agent ends up with some outcome that they disprefer to the outcome that they would have gotten (as a matter of fact, unbeknownst to the agent) by making different choices. But that’s equally true of agents with complete preferences.
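A toy illustration of that last point (the numbers and the utility function are made up for the example): an expected-utility maximiser with complete preferences can still end up, ex post, with an outcome it disprefers to what a different choice would have delivered.

```python
# Complete preferences induced by a utility function over outcomes.
u = {"win_10": 10.0, "lose": 0.0, "safe_4": 4.0}

gamble = {"win_10": 0.5, "lose": 0.5}   # lottery behind a chance-node
safe   = {"safe_4": 1.0}

def expected(lot):
    return sum(p * u[o] for o, p in lot.items())

# Ex ante, the gamble is strictly preferred, so the agent takes it...
assert expected(gamble) > expected(safe)

# ...but with probability 0.5 the realised outcome is "lose", which the
# agent disprefers to the "safe_4" it would have gotten by choosing
# differently. Ex-post regret of this kind doesn't depend on the
# preferences being incomplete.
```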
I intended for my link to point to the comment you linked to, oops.
I’ve responded here; I think it’s better to keep to one thread of argument, in a place where there is more of the necessary context.