Please note that negative points to this post, or failure to respond will only provide further evidence that LW is guilty of confirmation bias.
This is not the only hypothesis for which downvotes on this post, or failures to respond, provide evidence. They also provide evidence, in the Bayesian sense, that people think you’re a troll, or that your writing is suboptimal, or that only a few people saw this post in the first place, and so on.
Anyway, it is not entirely clear to me what you mean by “rationality,” but I’ll use a caricature of it, namely “use Bayes’ theorem and then do the thing that maximizes expected utility.”
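To make that caricature concrete, here is a toy sketch of the whole loop. Every state, probability, and payoff is invented purely for illustration:

```python
# Toy sketch of "update with Bayes' theorem, then maximize expected utility".
# Every state, probability, and payoff here is invented for illustration.

prior = {"rain": 0.3, "no_rain": 0.7}           # P(state)
likelihood = {"rain": 0.9, "no_rain": 0.2}      # P(observed dark clouds | state)

# Bayes' theorem: posterior is proportional to prior times likelihood.
unnormalized = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

# Payoff of each action in each state (also invented).
utility = {
    "take_umbrella":  {"rain": 5,   "no_rain": -1},
    "leave_umbrella": {"rain": -10, "no_rain": 2},
}

# Expected utility of each action under the posterior, then pick the best.
expected = {a: sum(posterior[s] * u for s, u in payoffs.items())
            for a, payoffs in utility.items()}
best_action = max(expected, key=expected.get)
print(posterior, expected, best_action)
```

Each of the pieces in that sketch (the prior, the likelihoods, the utilities) is a place where the problems below show up.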
One big problem is what your priors should be. Probably no human in the world actually uses Solomonoff induction (and it is still not entirely clear to me that doing so would be a good idea), so whatever they’re using instead is an opportunity for bias to creep in.
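As a toy illustration of how much the choice of prior matters, here are two agents who see the same evidence (same likelihoods, invented numbers) but start from different priors:

```python
# Two agents see the same evidence but start from different priors.
# The likelihoods are held fixed; only the prior changes. Numbers are invented.

likelihood = {"H": 0.8, "not_H": 0.3}  # P(evidence | hypothesis)

def posterior_h(prior_h):
    p_h = prior_h * likelihood["H"]
    p_not_h = (1 - prior_h) * likelihood["not_H"]
    return p_h / (p_h + p_not_h)

print(posterior_h(0.5))   # ~0.73: an agnostic prior ends up fairly convinced
print(posterior_h(0.05))  # ~0.12: a skeptical prior barely moves
```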
Another big problem is how you should actually use Bayes’ theorem in practice. Any given observation contains far more information than you can reasonably update on, so you need to make some modeling decisions, privilege certain kinds of information over others, and then find some reasonable procedure for estimating likelihood ratios, and these are all more opportunities for bias to creep in.
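In practice, the usual fallback is the odds form of Bayes’ theorem: summarize the observation by one feature, estimate a likelihood ratio for that feature, and multiply it into the odds. A minimal sketch, where both the choice of feature and the ratio estimate are exactly the kind of modeling decisions I mean (numbers invented):

```python
# Odds form of Bayes' theorem: summarize the observation by one feature,
# estimate a likelihood ratio for that feature, and multiply it into the odds.
# Both the choice of feature and the ratio estimate are modeling decisions;
# the numbers below are invented.

prior_odds = 1 / 4        # P(H) : P(not H) = 1 : 4
likelihood_ratio = 6.0    # the chosen feature is 6x as likely if H is true

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)  # 1.5 0.6
```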
And a third big problem is how to actually compute utilities. Before you can do this, you need to address the questions of whether humans even have utility functions, whether they should aspire to have utility functions (whatever “should” means here), and, if so, what your utility function is…
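Even setting those questions aside, one technical point is that a utility function is only pinned down up to a positive affine transformation, so “computing utilities” can at best recover relative rather than absolute numbers. A toy sketch with invented numbers:

```python
# A utility function is only pinned down up to a positive affine transformation:
# rescaling or shifting the payoffs never changes which action wins.
# All numbers are invented.

posterior = {"s1": 0.4, "s2": 0.6}
utility = {
    "a1": {"s1": 10, "s2": 0},
    "a2": {"s1": 2,  "s2": 5},
}

def best_action(scale=1.0, shift=0.0):
    eu = {a: sum(posterior[s] * (scale * u + shift) for s, u in payoffs.items())
          for a, payoffs in utility.items()}
    return max(eu, key=eu.get)

print(best_action())                     # a1
print(best_action(scale=3, shift=100))   # still a1
```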
These are all big problems. In response I would say that ideal decision-making is not something we can actually do, but understanding more about what the ideal looks like can help us move our decision-making closer to it.