Please note that negative points on this post, or a failure to respond, will only provide further evidence that LW is guilty of confirmation bias. It's sweet when you get to use cognitive biases against those who try to weed them out. (Yes, I'm trying to goad someone into answering, but only because I really want to know your answer, not because I'm trying to troll.)
This is usually considered a very bad sign and to be against community norms and/or ethics. Many people will downvote your comment for the quoted paragraph alone. My first impulse was to do so, but I'm overriding it in favor of this response and in light of the rest of your comment, which shows a habit of reasoning that should be strongly encouraged, regardless of the other things I'll get to in a minute.
So, first, before any productive discussion of this can be had (edit: from my end, at least), I have to be reasonably confident that you've read and understood "What Do We Mean By "Rationality"?", which splits what I believe you're referring to when you say "Rationality as a (near-)universal theory on decision-making" into two separate functions.
Alright. Now, assuming you understand the point of that post and the content of "rationality", could you help me pinpoint your exact question? To me, "How has Rationality confronted its most painful weaknesses?" and "What are rationality's weak points?" are incoherent questions, and they seem Mysterious, to the same extent that one could ask the same questions of thinking, of existence, of souls, or of the Peano Axioms: they are questions about basically anything that requires more context before they can be properly computed.
If you're trying to question the usefulness of the function "be instrumentally rational", then the most salient weakness is that it is theoretically possible for a human to attempt to be instrumentally rational and end up applying it inexactly or inefficiently, wasting time, failing to recurse to a high enough level of the stack, or making a slew of other mistakes.
The second most important is that sometimes even a human properly applying the principles of instrumental rationality will find that their values are more easily fulfilled by doing something else and not applying instrumental rationality. At that point, because they are applying instrumental rationality and "be instrumentally rational" is a polymorphic function, the next instrumentally rational thing to do is to stop being instrumentally rational, since that is what maximizes "winning", which, as described in the first link above, is what instrumental rationality strives for. In this case, if you were already doing the other thing that maximizes value, invoking instrumental rationality in the first place could be considered an opportunity-cost virus: it consumed time, mental energy, and possibly other resources in a quest to figure out that you shouldn't have bothered.
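The "polymorphic function" argument above can be sketched in code. This is purely my own illustration, not anything from the post: the strategy names, payoffs, and deliberation cost are made-up numbers, and the point is only that a selector which ranks every option, including "don't run this selector", can rationally recommend its own retirement.

```python
# Hypothetical sketch: "be instrumentally rational" as a strategy-selector
# that includes abandoning deliberation among the options it considers.
# All names and numbers are illustrative assumptions.

def be_instrumentally_rational(strategies):
    """Pick the strategy with the highest expected winning, then charge
    the cost of having run this selection process at all."""
    deliberation_cost = 1.0  # assumed fixed cost of deliberating
    best = max(strategies, key=lambda s: s["expected_winning"])
    # If the winner is what you would have done anyway, deliberation
    # only subtracted its own cost: the "opportunity-cost virus".
    return best["name"], best["expected_winning"] - deliberation_cost

strategies = [
    {"name": "keep doing what you were doing", "expected_winning": 10.0},
    {"name": "explicit decision analysis",     "expected_winning": 9.0},
]

choice, net = be_instrumentally_rational(strategies)
# choice is "keep doing what you were doing"; net is 9.0, i.e. the
# original 10.0 minus the 1.0 spent discovering you should not have bothered.
```

The design choice here is that the selector's own cost is charged unconditionally, which is exactly what makes it a net loss when the answer was already in hand.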
However, if you estimate the odds using the tools at your disposal, it seems extremely unlikely that being rational is less efficient at achieving your values than other strategies, since optimizing for expected utility is, over all possible strategies in all possible worlds, mathematically the strategy most likely to achieve optimal utility. This sounds like a trivial theorem that follows from the standard Peano axioms, but I don't recall seeing this particular statement formalized that way.
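A minimal toy version of the expected-utility claim, constructed by me for illustration (the worlds, probabilities, and payoffs are all assumptions): given a known probability distribution over "worlds", the action picked by expected-utility maximization is, by definition, the one with the highest average payoff over those worlds.

```python
# Toy expected-utility maximization over two possible worlds.
# All probabilities and payoffs are made-up illustrative numbers.

worlds = {"rain": 0.3, "sun": 0.7}           # assumed P(world)
utility = {                                  # assumed payoff[action][world]
    "take umbrella": {"rain": 5, "sun": 3},
    "leave it":      {"rain": 0, "sun": 4},
}

def expected_utility(action):
    """Average the action's payoff over worlds, weighted by probability."""
    return sum(p * utility[action][w] for w, p in worlds.items())

best = max(utility, key=expected_utility)
# expected_utility("take umbrella") = 0.3*5 + 0.7*3 = 3.6
# expected_utility("leave it")      = 0.3*0 + 0.7*4 = 2.8
# so best is "take umbrella".
```

Note what this does and doesn't show: with the distribution known, maximizing expected utility wins on average by construction; the hard part the comment gestures at is that real agents don't know the distribution and pay costs to estimate it.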
By simple probability axioms, it is even more unlikely that what you're already doing beats applying instrumental rationality and thereby discovering the actual non-rational strategy that is optimal for your values, let alone that it beats the expected utility of instrumental rationality itself, once you weigh the high probability that instrumental rationality is optimal against the low probability that it leads you to some other, non-rational optimal strategy.
Basically, the only relevant weaknesses of applied instrumental rationality seem to be: computational (in)tractability; the unlikely chance that some non-expected-winning-maximizing strategy might actually be better at maximizing winning (which can't be known reliably in advance anyway, unless you defy all probability and by hypothesis already contain true knowledge of the optimal strategy for the agent your mind implements); and the difficulties and risks of implementation by us humans, as a result of bugs and inefficiencies in human hardware.
When this is applied in a meta manner, where you rationally choose which strategies to use rather than applying a naive version of rationality, such as in many of the ways described in the Sequences on LessWrong, then as per Bayesian updating and the tools available to us, this seems probabilistically to be the most effective possible strategy for human hardware. Which means that, on a statistical level, the only weaknesses of instrumental rationality are that it's hard to understand correctly, hard to actually implement, and hard to apply. The other responses to your comment give more detail on the many ways human hardware can fail to be optimal at this, or can cause various important problems.
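The meta-level move described above, treating "which strategy suits my hardware" as a hypothesis and updating on observed results, can be sketched as a standard Bayesian update. The hypotheses, prior, and likelihoods below are my own made-up numbers, not anything from the Sequences.

```python
# Hedged sketch: Bayesian updating over which strategy works better
# for a given agent. Prior and likelihoods are illustrative assumptions.

prior = {"strategy A works better": 0.5,
         "strategy B works better": 0.5}

# Assumed probability of observing a "win" under each hypothesis:
likelihood_win = {"strategy A works better": 0.8,
                  "strategy B works better": 0.4}

def update(prior, likelihood):
    """Bayes' rule: multiply prior by likelihood, then renormalize."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

posterior = update(prior, likelihood_win)
# After one observed win: P(A works better) = 0.4 / 0.6 = 2/3.
```

Repeating the update as evidence accumulates is the probabilistic content of "choosing strategies rationally": credence flows toward whichever strategy actually produces wins on your hardware.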