There can be less.
What do you mean? It seems to me that the “one true rationality” would be the perfect, unbiased strategy that all others try to emulate; I don’t see how it could fail to exist.
Any cognitive strategy is a bias, of a sort. Take Occam’s razor as an illustration: if the truth is complicated, starting with an Occamian prior is an obstacle. If the laws of nature were complicated, Occam’s razor would be classified among the other cognitive biases. We don’t call it a bias because we reserve that word for errors, but it is quite hard to give a precise, non-circular definition of “error”.
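To make that concrete, here is a minimal sketch, assuming a toy hypothesis space of bit strings and an illustrative 4^-length weighting (neither is anyone’s actual formalism): under a complexity-weighted prior, a complicated truth starts out with almost no mass, which is exactly the sense in which the razor would count as a bias in a complicated world.

```python
from itertools import product

# Minimal sketch (assumptions: bit-string hypotheses, 4**-length weights;
# both are illustrative stand-ins for a real complexity prior).
def occam_prior(max_len=12):
    """Complexity-weighted prior over all bit strings up to max_len."""
    hyps = [''.join(bits)
            for n in range(1, max_len + 1)
            for bits in product('01', repeat=n)]
    weights = {h: 4.0 ** -len(h) for h in hyps}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

prior = occam_prior()
print(prior['0'])             # simple hypothesis: ~0.25 of the prior mass
print(prior['010011011010'])  # complicated hypothesis: ~6e-08, nearly nothing
```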
Are you sure it is not the case that for every cognitive strategy there is a better one, under any reasonable metric?
There are more ways to be complicated than there are to be simple. Starting with a complicated prior doesn’t necessarily get you closer to the complicated truth than starting with a simple prior does. Even if the truth is complicated, a complicated prior can be wronger than the simple one.
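To put rough numbers on “more ways to be complicated” (the 99/1 split and the lengths are arbitrary assumptions for illustration): even a prior that deliberately favours complicated hypotheses as a class still has to divide that favour among exponentially many candidates, so any specific complicated truth gets less mass than a simple one would.

```python
# Toy counting argument (all numbers are illustrative assumptions).
simple_len, complicated_len = 2, 20

# Suppose an anti-Occamian prior puts 99% of its mass on length-20 strings
# and only 1% on length-2 strings.
mass_per_simple = 0.01 / 2 ** simple_len            # 4 candidates share 1%
mass_per_complicated = 0.99 / 2 ** complicated_len  # ~1e6 candidates share 99%

print(mass_per_simple)       # 0.0025
print(mass_per_complicated)  # ~9.4e-07: each specific complicated hypothesis
                             # still gets far less than each simple one
```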
Yes, a complicated prior can be wronger than the simple one and usually is. I am sure I haven’t disputed that.
Sorry, maybe I misread. The line I quoted above seemed to suggest that if the laws of nature were complicated, we would be better off with priors that favored complicated beliefs over simple ones (or at least weighed them equally), rather than an Occamian prior that favors simple beliefs.
I have suggested that we would be better off with priors which favour the exact way in which the laws are complicated. A generically complicated prior, of course, wouldn’t do the job.
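A quick way to see this with numbers (the 12-bit “true law” and all three priors below are invented for illustration): how fast Bayesian updating finds the truth is governed by the prior mass on the specific true hypothesis, and only a prior matched to the exact complication puts substantial mass there; the generically complicated prior does even worse than Occam.

```python
# Sketch with invented numbers: prior mass on the *specific* true law
# under three different priors.
truth_len = 12                        # length of the hypothetical true law

# Occamian: 4**-length weighting, as in the sketch above.
occam = 4.0 ** -truth_len

# Generically complicated: 99% of the mass spread uniformly over every
# string of length 12..40 (the "complicated" ones), the rest ignored.
n_complicated = sum(2 ** n for n in range(12, 41))
generic = 0.99 / n_complicated

# Matched: a prior built around this exact complicated law.
matched = 0.99

for name, p in [('Occamian', occam),
                ('generically complicated', generic),
                ('matched', matched)]:
    print(f'{name:>24}: prior on the truth = {p:.2e}')
```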
It seems to me that there would be priors that are useful while those that aren’t would be biases, and that there would be optimal priors to have.
Nor do I see why there should be a better strategy for every strategy; eventually one would be perfect, and the chain would stop.
In addition to Prase’s comment on the possibility of an unbounded chain of strategies (and building off of what I think shminux is saying), I’m also wondering (I’m not sure of this) whether bounded cognitive strategies are totally ordered, i.e. whether for all distinct strategies X and Y, either X > Y or Y > X. It seems like lateral moves could exist, given that we need to use bounded strategies: certain biases can only be corrected to a certain degree using feasible methods, and mitigation of biases rests on adopting heuristics that will be better optimized for some minds than for others. Given two strategies A and B that don’t result in a Perfect Bayesian, it certainly seems possible that EU(Adopt A) = EU(Adopt B) while A and B dominate all other feasible strategies by making different sets of tradeoffs at equal cost (relative to a Perfect Bayesian). A toy simulation of that kind of lateral move is sketched below.
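A minimal simulation of that possibility (the environment, the two bias types, and the payoffs are all invented for illustration): strategies A and B each fully correct one residual bias and leave the other untouched, at equal cost, and come out with the same expected utility, so neither is strictly better.

```python
import random

# Toy lateral move (environment and payoffs are invented assumptions):
# A and B make mirror-image bias tradeoffs at equal cost.
random.seed(0)

def payoff(strategy, problem):
    """A handles anchoring-type problems well but not overconfidence-type
    problems; B is the mirror image."""
    handled = {'A': 'anchoring', 'B': 'overconfidence'}[strategy]
    return 1.0 if problem == handled else 0.6

trials = 100_000
problems = [random.choice(['anchoring', 'overconfidence'])
            for _ in range(trials)]
eu = {s: sum(payoff(s, p) for p in problems) / trials for s in 'AB'}
print(eu)  # ~{'A': 0.8, 'B': 0.8}: equal expected utility, different
           # tradeoffs, so neither strategy dominates the other
```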
A “one size fits all” approach rarely works, as with CDT vs. EDT (I will take TDT more seriously when it has more useful content than just “do whatever it takes to win”). Eh, it seems like I’m still stuck at the summit on this one.