I’m a bit confused, so please take this as exploratory rather than expository. What prevents the ass (or the decision process in general) from:
1) Having a pre-computed estimate of (a) how long it has before it would starve, (b) how the error of its size determinations depends on how long it spends observing, and (c) how much error in those estimates it cares about; and then,
2) Stopping observation/deliberation when the first limit is close (but still far enough away to leave time to eat!) or when the error in the estimated difference between the two directions falls below the tolerance it cares about. (In the strictest interpretation of the question, this second stopping condition is not necessary.)
When I say “estimate” in step one, I mean a very wide pre-computed interval, not some precise computation. I don’t know exactly how long it would take me to die of hunger, but it’s clear that in a similar situation, at some point I’d be hungry enough to anticipate, without any complicated logic, that I would die from not eating, and I’d stop comparing the choices. At that point you just need a way to distinguish the options so you can pick one (i.e., left and right, not bigger and smaller), and any arbitrary rule will do (e.g., lexicographic ordering).
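To make the idea concrete, here is a minimal sketch of that stopping rule. Everything here is an illustrative assumption rather than part of the original problem: `observe` stands in for whatever perceptual process the ass runs, and the deadline, margin, and tolerance are made-up pre-computed bounds.

```python
import time

def choose_direction(observe, deadline, eat_margin, error_tolerance):
    """Pick 'left' or 'right' before the deadline.

    observe() -- hypothetical callable returning (estimated_size_difference,
                 current_error); the error is assumed to shrink with
                 continued observation.
    deadline  -- rough pre-computed time by which eating must start.
    eat_margin -- safety margin reserved for actually walking over and eating.
    error_tolerance -- how much residual error we are willing to ignore.
    """
    while True:
        diff, error = observe()
        out_of_time = time.monotonic() >= deadline - eat_margin
        precise_enough = error < error_tolerance
        if out_of_time or precise_enough:
            break

    if abs(diff) > error:
        # One pile is distinguishably bigger: take it.
        return "left" if diff > 0 else "right"
    # Still indistinguishable: fall back on an arbitrary but fixed
    # tie-breaking rule (here, lexicographic order of the labels).
    return min("left", "right")
```

The point of the sketch is only that both stopping conditions are cheap checks against wide pre-computed bounds, not precise calculations made in the moment.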
In effect, the ass faces not just the binary problem of choosing between left and right; it faces the ternary (meta-)problem of choosing between going left, going right, or searching for a better method of deciding which direction is better. The first two options may remain symmetrical, but at some point the third (“keep thinking”) will reach negative expected utility (trivially, when you anticipate starving, but in real life you might also decide that spending another hungry hour deliberating over some very small amount of hay is not worth it).
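A toy illustration of that break-even point, with all numbers made up and `expected_gain` and `deliberation_cost` as hypothetical placeholders:

```python
def keep_thinking_is_worth_it(expected_gain, deliberation_cost):
    # "Keep thinking" has positive expected utility only while the
    # expected improvement in the choice exceeds what the extra
    # deliberation costs (hunger, time, risk of starving).
    return expected_gain - deliberation_cost > 0

# Deliberating over ~10 g of hay while each hungry hour costs the
# equivalent of ~50 g: negative expected utility, so just pick.
print(keep_thinking_is_worth_it(expected_gain=0.01,
                                deliberation_cost=0.05))  # False
```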
I’m sure similar decision problems can be posed where the tradeoffs are balanced such that you still have an issue, but this particular formulation seems almost as silly as claiming the ass will starve because of Zeno’s arrow paradox.