The infinite autoresponse example seems like it would be solved in practice by rational ignorance: after some sufficiently small number of autoresponses (say 5), people would not want to explicitly reason about the policy implications of the specific number of autoresponses they saw, so “5+ autoresponses” would be a single category for decisionmaking purposes. In that case the induction argument fails, and “both people go to the place specified in the message as long as they observe 5+ autoresponses” is a Nash equilibrium.
Of course, this assumes people haven’t already accepted and internalized the logic of the induction argument, since then no further explicit reasoning would be necessary based on the observed number of autoresponses. But the induction argument presupposes that rational ignorance does not exist, so it is not valid when we add rational ignorance to our model.
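As a quick sanity check, here is a minimal numeric sketch of that claim. The model details are my own assumptions (alternating transmissions that each get lost with probability 0.1, a meeting worth 1 to each person, a wasted trip costing 1), not anything taken from the post, but they show the shape of the argument: conditioning on the exact count at the threshold makes staying look better (the induction step), while conditioning only on the pooled “5+” category makes going a best response.

```python
# Toy check of the "go on 5+ autoresponses" policy, under an assumed
# Rubinstein-style model: transmissions alternate between the two people and
# each one is independently lost with probability EPS, ending the chain.
# If d transmissions get through, one person has received ceil(d/2) messages
# and the other floor(d/2).  Payoffs (also assumed): meeting is worth 1,
# a wasted trip costs C, staying home is worth 0, so "go" is a best response
# iff P(the other person goes | your information) >= C / (1 + C).

EPS = 0.1    # chance any single transmission is lost (assumed)
K = 5        # the threshold from the comment above
C = 1.0      # cost of showing up alone (assumed)
CUTOFF = C / (1 + C)

def p_delivered(d):
    """P(exactly d transmissions get through before one is lost)."""
    return (1 - EPS) ** d * EPS

def p_other_goes_given_exact_count(m):
    """P(other person's count >= K | your count is exactly m), for the person
    who is 'ahead', i.e. whose count is ceil(d/2)."""
    if m > K:
        return 1.0                      # the other person is at most one behind you
    d_lo, d_hi = 2 * m - 1, 2 * m       # your count is m  <=>  d is 2m-1 or 2m
    total = p_delivered(d_lo) + p_delivered(d_hi)
    reach = p_delivered(d_hi) if m == K else 0.0
    return reach / total

def p_other_goes_given_pooled():
    """P(other person's count >= K | your count >= K), same person."""
    # your count >= K  <=>  d >= 2K - 1;  both counts >= K  <=>  d >= 2K
    return (1 - EPS) ** (2 * K) / (1 - EPS) ** (2 * K - 1)

print(f"best-response cutoff:              {CUTOFF:.3f}")
print(f"P(other goes | exactly {K} seen):    {p_other_goes_given_exact_count(K):.3f}  -> stay")
print(f"P(other goes | '{K}+' pooled event): {p_other_goes_given_pooled():.3f}  -> go")
```

With these made-up numbers the exact-count conditional is about 0.47, just under the 0.5 cutoff (the wedge the induction argument drives in), while the pooled conditional is 0.90, so going on “5+” is stable against that pressure.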
I agree that something in this direction could work, and plausibly captures something about how humans reason. However, I don’t feel satisfied. I would want to see the idea developed as part of a larger framework of bounded rationality.
UDT gives us a version of “never be harmed by information”, which is really nice as far as it goes. In the cases where UDT helps, we don’t need to do anything tricky like carefully deciding which information to look at: UDT simply isn’t harmed by the information, so we can think about everything from a unified perspective without hiding things from ourselves.
Unfortunately, as I’ve outlined in the appendix, UDT doesn’t help very much in this case. We could say that UDT guarantees there’s no need for “rational ignorance” when it comes to observations (i.e., no need to avoid observations), but it fails to capture the “rational ignorance” of grouping events together into more coarse-grained events (e.g., “5+ autoresponses”).
So if we had something like “UDT but for coarse-graining in addition to observations”, that would be really nice: some way of dealing with things such that you never wish you’d coarse-grained things.
Whereas the approach of actually coarse-graining things seems a bit doomed to fragility and arbitrariness. It seems like you have to specify some procedure for figuring out when you’d want to coarse-grain. For example, maybe you start with only one event and iteratively decide how to add details, splitting that one event into more events. But I feel pessimistic about this. I feel similarly pessimistic about the reverse: starting with a completely fine-grained model and iteratively grouping things together.
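For concreteness, here is a tiny sketch of that second direction. The merge rule (pool adjacent observation cells whenever pooling them doesn’t change which action looks best) is a placeholder of my own invention, and picking such a rule is exactly the kind of arbitrary step that worries me.

```python
# A sketch of the "start with a completely fine-grained model and iteratively
# group things together" direction.  The merge criterion below is a
# placeholder, not a principled answer.

def best_action(payoffs):
    """payoffs: dict mapping each action to its expected payoff."""
    return max(payoffs, key=payoffs.get)

def coarsen(cells):
    """cells: list of (label, probability, payoffs) for fine-grained
    observations; greedily merges neighbouring cells whose pooled best
    action agrees with the best action of each piece."""
    merged = [cells[0]]
    for label, w, pay in cells[1:]:
        prev_label, prev_w, prev_pay = merged[-1]
        pooled = {a: (prev_w * prev_pay[a] + w * pay[a]) / (prev_w + w)
                  for a in pay}
        if best_action(pooled) == best_action(pay) == best_action(prev_pay):
            merged[-1] = (f"{prev_label}+{label}", prev_w + w, pooled)
        else:
            merged.append((label, w, pay))
    return merged

# e.g. counts 5 and 6 get pooled (both favour "go"), 4 stays separate:
cells = [("4", 0.06, {"go": -0.05, "stay": 0.0}),
         ("5", 0.05, {"go": 0.80, "stay": 0.0}),
         ("6", 0.50, {"go": 0.90, "stay": 0.0})]
print([label for label, _, _ in coarsen(cells)])   # ['4', '5+6']
```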
Fortunately, the induction argument involves both agents following along with the whole argument. If one agent doubts that the other thinks in this way, this can sort of stabilize things. It’s similar to the price-undercutting dynamic, where you want to charge slightly less than competitors, not as little as possible. If market participants have common knowledge of rationality, then this does amount to charging as little as possible; but of course, the main point of the post is to cast doubt on this kind of common knowledge. Doubts about how low your competitor will be willing to go can significantly increase prices from “as low as possible”.
Similarly, the induction argument really only shows that you want to stay home in slightly more cases than the other person. This means the only common-knowledge equilibrium is to stay home; but if we abandon the common-knowledge assumption, this doesn’t need to be the outcome.
(Perhaps I will edit the post to add this point.)
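To put a number on the price-undercutting analogy: in the little model below (the uniform distribution over the competitor’s price is purely an assumed illustration), you make the sale only if you undercut the competitor, and uncertainty about how low they’ll go pushes the profit-maximizing price well above cost.

```python
import numpy as np

# Toy version of the price-undercutting point: with common knowledge of
# rationality, undercutting drives the price down to cost; here you are
# instead uncertain how low the competitor will go.  Assume (for illustration
# only) their price is uniform on [0, 1], your cost is 0, and you sell only
# if your price is lower.

cost = 0.0
prices = np.linspace(cost, 1.0, 1001)
p_win = 1.0 - prices                         # P(uniform competitor price > yours)
expected_profit = (prices - cost) * p_win

best = prices[np.argmax(expected_profit)]
print(f"profit-maximising price: {best:.2f}")  # ~0.50, far from "as low as possible"
```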
I’ve been a longtime common-knowledge (CK) atheist (and have been an influence on Abram’s post), and your comment is in the shape of my current preferred approach. Unfortunately, rational ignorance seems to require CK that agents will engage in bounded thinking and not be too rational!
(A CK-regress like the one above is very common and often non-obvious. It seems plausible that we must accept this regress, and that humans in fact need to be Created Already in Coordination, in analogy with Created Already in Motion.)
I think it is at least possible to attain p-CK when there are enough people who aren’t “inductively inclined”. This sort of friction from people who aren’t thinking too hard stops the unbounded neuroticism and allows coordination. I’m not yet sure whether such friction is necessary for any agent or merely typical.