Thanks, the floor/ceiling distinction is helpful.
I think “ceilings as they exist in reality” is my main interest in this post. Specifically, I’m interested in the following:
- any resource-bound agent will have ceilings, so an account of embedded rationality needs a “theory of having good ceilings”;
- a “theory of having good ceilings” would be different from the sorts of “theories” we’re used to thinking about, building practical constraints into the fundamental desiderata rather than treating them as a matter of implementing an ideal after it’s been specified.
In more detail: it’s one thing to be able to assess quick heuristics, and another (and better) thing to be able to assess quick heuristics quickly. It’s possible (maybe) to imagine a convenient situation where the theory of each “speed class” of fast decisions is compressible enough to distill into something that can itself run in that speed class and still provide useful guidance. In that case the theory could tell us why our behavior as a whole is justified, by explaining how our choices are “about as good as can be hoped for” during necessarily fast/simple activity that can’t possibly meet our more powerful and familiar notions of decision rationality.
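To make the “distillable into its own speed class” picture concrete, here’s a toy Python sketch. Everything in it (the heuristic, the assessor, the budget, the numbers) is a hypothetical illustration, not a proposal; the only point is that the justification check runs in the same speed class as the heuristic it vouches for, rather than requiring a slow re-derivation:

```python
import time

BUDGET_S = 0.001  # the "speed class": decision plus its check must fit in ~1 ms

def fast_heuristic(state):
    # Quick rule: greedily pick the option with the best cached score.
    return max(state["options"], key=lambda o: state["cached_scores"][o])

def distilled_assessor(state, choice):
    # Justification distilled into the same speed class: instead of
    # re-deriving optimality (slow), verify a cheap sufficient condition,
    # here that the choice's cached score is within a known error bound
    # of the best cached score.
    best = max(state["cached_scores"].values())
    return state["cached_scores"][choice] >= best - state["error_bound"]

state = {
    "options": ["a", "b", "c"],
    "cached_scores": {"a": 0.9, "b": 0.7, "c": 0.4},
    "error_bound": 0.05,
}

t0 = time.perf_counter()
choice = fast_heuristic(state)
justified = distilled_assessor(state, choice)
elapsed = time.perf_counter() - t0

print(choice, justified, elapsed < BUDGET_S)
```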
However, if we can’t do this, it seems like we face an exploding backlog of justification needs: every application of a fast heuristic now requires a slow justification pass, but we’re constantly applying fast heuristics and there’s no room for the slow pass to catch up. So maybe a stronger agent could justify what we do, but we couldn’t.
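The explosion here is just a rate mismatch, which a minimal sketch with made-up numbers makes vivid: if heuristics fire faster than slow justifications complete, the unjustified backlog grows linearly and forever.

```python
# Toy backlog model (all numbers hypothetical): fast heuristics fire at
# rate r per second; each slow justification pass takes c seconds, so
# passes complete at rate 1/c. Whenever r > 1/c, the backlog of
# unjustified decisions grows without bound, at rate r - 1/c.
r = 100.0  # heuristic applications per second
c = 0.5    # seconds per slow justification pass

for t in (1, 10, 60):
    backlog = (r - 1 / c) * t
    print(f"after {t:>2}s: ~{backlog:.0f} decisions still awaiting justification")
```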
I expect helpful theories here to involve distilling-into-fast-enough-rules on a fundamental level, so that “an impractically slow but working version of the theory” is actually a contradiction in terms.