How has Rationality, as a universal theory (or near-universal) on decision making, confronted its most painful weaknesses? What are rationality’s weak points? The broader a theory is claimed to be, the more important it seems to really test the theory’s weaknesses—that is why I assume you bring up religion, but the same standard should apply to rationality. This is not a cute question from a religious person; it’s an intellectual inquiry from a person hoping to learn. In honor of the granddaddy of cognitive biases, confirmation bias, doesn’t rational choice theory need to be vetted?
HungryTurtle makes an attempt to get at this question, but he gets too far into the weeds—this allowed LW to simply compare the “cons” of religion with the “cons” of rationality, which is a silly inquiry. I don’t care how the weaknesses of rationality compare to the weaknesses of Judaism, because rational theory, if it claims to be universally applicable with no weaknesses, should be tested on that claim alone, and not on its weaknesses relative to some other theory.
Please note that negative points to this post, or failure to respond will only provide further evidence that LW is guilty of confirmation bias. It’s sweet when you get to use cognitive biases against those who try to weed them out. (Yes, I’m trying to goad someone into answering, but only because I really want to know your answer, not because I’m trying to troll.)
On the distant chance that you’re actually attempting to be reasonable and are just messing it up, I downvoted this post because I automatically downvote everything that tries to Poison the Well against being downvoted. Being preemptively accused of confirmation bias is itself sufficient reason to downvote.
Thanks, EY. I am asking a real question, in that I want to know what people think of the question.
As a person who does not think rationality is as useful or as universal as people on this site do, I am at a disadvantage in that I’m in the minority here; however, I’m still posting/reading to question myself through engaging with those I disagree with. I seek the community’s perspective, not necessarily to believe it or label it correct/wrong, but simply to understand it. My personal experience (with this name and old ones) has been that people generally do not respond to viewpoints that are contrary to the conventional thought—this is problematic, because this community is best positioned to defend rationality against its weaknesses (claimed or real). Looking at it another way, if I believe that rationality has serious flaws, I need to be able to defend myself against YOUR best arguments, but I can only do that if someone engages with me so I understand those arguments first.
The point of my post was to ask a serious question and poke you guys with a stick, hoping the poke elicits a response to the question—frankly, it worked, and now I will swallow, learn from, and hopefully respond to the various comments. So long as negative points don’t prevent me from reading and posting, I couldn’t care less about what points I have—I also note that I was clear about my intention to goad an answer.
Perhaps you disagree with my methods, but since my goal was to hear multiple perspectives, and I got more than I usually do, I see it as a win for instrumental rationality. And, if my follow-ups suggest I’m not a troll/general a**, perhaps I won’t have to use dirty tricks going forward!
As a matter of policy, I always downvote any comment that includes anything like your final paragraph.
Please note that negative points to this post, or failure to respond will only provide further evidence that LW is guilty of confirmation bias. It’s sweet when you get to use cognitive biases against those who try to weed them out.
I see these wonky pseudo-threats around the site a lot and they’re really confusing to me. Of course I’m biased! I’m a human! Just because I’m hanging around this site doesn’t mean I’ve cleared all my biases and now become a perfect rational agent.
On one hand, I do want to weed out cognitive biases in situations where they’re hindering my decision-making in important areas of my life. On the other hand, I still have a lot of information to sift through in the real world, so maybe some of the shortcuts my brain uses are pretty handy to keep around. One example of these would be “talk to people that don’t threaten you; discourage people that do.” Sure, this might be filtering out some potentially good discussions, but it still seems like a pretty good heuristic to me, especially out there in scary meatspace. ^_^
I see these wonky pseudo-threats around the site a lot and they’re really confusing to me.
FWIW, I mostly understand them to be attempts at manipulating listeners into avoiding a particular behavior by associating that behavior with a low-status condition, coupled with the belief that on LW being biased is seen as a low-status condition.
But hopefully most LWers don’t go around thinking being biased in any way is awful and they must be completely unbiased at all times. Right? So I’m not sure where the manipulative people pick up that idea. I definitely think we should minimize bias when making important decisions, but when deciding what posts to read/reply to? I will proudly use a shortcut!
Please note that negative points to this post, or failure to respond will only provide further evidence that LW is guilty of confirmation bias.
This is not the only hypothesis that downvotes to this post or failures to respond provide evidence for. It also provides evidence, in the Bayesian sense, that people think you’re a troll, or that your writing is suboptimal, or that only a few people managed to see this post in the first place, or… etc.
Anyway, it is not entirely clear to me what you mean by “rationality,” but I’ll use a caricature of it, namely “use Bayes’ theorem and then do the thing that maximizes expected utility.”
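A minimal sketch of that caricature, with entirely made-up numbers (the two hypotheses, the observation, and the two actions are purely illustrative):

```python
# Toy "Bayes' theorem, then maximize expected utility" agent.
# All numbers are made up for illustration.

priors = {"rain": 0.3, "dry": 0.7}          # P(hypothesis)
likelihoods = {"rain": 0.9, "dry": 0.2}     # P(dark clouds | hypothesis)

# Bayes' theorem: P(h | e) = P(e | h) * P(h) / P(e)
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posterior = {h: likelihoods[h] * priors[h] / evidence for h in priors}

# Utility of each action under each hypothesis (also made up).
utilities = {
    "take umbrella": {"rain": 5, "dry": -1},
    "leave umbrella": {"rain": -10, "dry": 2},
}

# "Do the thing that maximizes expected utility" under the posterior.
def expected_utility(action):
    return sum(posterior[h] * utilities[action][h] for h in posterior)

best_action = max(utilities, key=expected_utility)
print(posterior)      # {'rain': ~0.66, 'dry': ~0.34}
print(best_action)    # 'take umbrella'
```

The three problems below are, in effect, about where every number in this toy model comes from.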
One big problem is what your priors should be. Probably no human in the world actually uses Solomonoff induction (and it is still not entirely clear to me that this is a good idea), so whatever else they’re using is an opportunity for bias to creep in.
Another big problem is how you should actually use Bayes’ theorem in practice. Any given observation contains way more information in it than you can reasonably update on, so you need to make some modeling decisions and privilege certain kinds of information above others, then find some kind of reasonable procedure for estimating likelihood ratios, and these are all more opportunities for bias to creep in.
And a third big problem is how to actually compute utilities. Before you do this you need to address the question of whether humans even have utility functions, whether they should aspire to have utility functions (whatever “should” means here), and if so, what your utility function is…
These are all big problems. In response I would say that ideal decision-making is not a thing that we can do, but understanding more about what the ideal looks like can help us move our decision-making closer to ideal.
If by “Rationality, as a universal theory (or near-universal) on decision making” you mean using Bayes’ Theorem as a way of determining the likelihood of various potential events and consequently estimating the expected value of various courses of action, which is something that “rationality” sometimes gets used to mean on this site, I’d say (as many have said before me) that one big weakness is the lack of reliable priors. A mechanism for precisely calculating how much I should update P(x) from an existing made-up value based on things I don’t necessarily know doesn’t provide me with much guidance in my day-to-day life. Another big weakness is computational intractability.
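(For reference, the precise update mechanism being described is Bayes’ rule:

\[
P(x \mid e) = \frac{P(e \mid x)\,P(x)}{P(e)}
\]

where the prior P(x) is the “existing made-up value” and the likelihoods P(e | x) are the “things I don’t necessarily know.”)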
If you mean more broadly making decisions based on the estimated expected value of various courses of action, I suppose the biggest weakness is again computational intractability, which in turn potentially leads to sloppiness like making simplifying assumptions that are so radically at odds with my real environment that my estimates of value are just completely wrong.
If you mean something else, it might be useful if you said what you mean more precisely.
It’s worth noting explicitly that these weaknesses are not themselves legitimate grounds for choosing some other approach that shares the same weaknesses. For example, simply making shit up typically results in estimates of value which are even more wrong. But you explicitly asked about weaknesses in isolation, rather than reasons to pick one decision theory over another.
Thanks. I don’t mean any weaknesses in particular; the idea laid out by EY was to confront your greatest weaknesses, so that is something for those who follow the theory to look into—I’m just exploring :).
I guess what I’m not following is this idea of “choosing” an approach. Implicit in your answer I think is the idea that there is a “best” approach that must be discovered among the various theories on living life—but why is the existence of a theory that is the “best” indicative that it is universally applicable? The goal is to “understand reality,” not to choose a methodology that is the “best” under the assumption that the “best” theory can then be followed universally.
Put differently, to choose rationality as a universal theory notwithstanding its flaws, you’re saying more than “it’s the best of all the available theories”—I think you must also believe that the idea of having a set theory to guide life, notwithstanding its flaws, is the best way to go about understanding reality. What is the basis for the belief in the second prong?
Saying “well, I have to make a decision, so I need to find the best theory” doesn’t cut it. It is clear there are times we must make a decision, but you are left with a similar question—why are humans entitled to know what to do simply because they need to make a decision? Perhaps in “reality” there is no answer (or no answer within the limits of human comprehension)—it is true you’re stuck not knowing what to do, but you surely have a better view of reality (if that is the reality).
The implications of this are important. If you agree that rational choice theory is the “best” of all theories, but also agree that there is (or may be) a distinction between “choosing/applying a set theory” and “understanding reality” to the greatest extent humanly possible, it suggests one would need more than rationality to truly understand reality.
Implicit in your answer I think is the idea that there is a “best” approach that must be discovered among the various theories on living life
No, I don’t think that’s implied. We do make decisions, and some processes for making decisions lead to different results than other processes, and some results are better than others. It doesn’t follow that there’s a single best approach, or that such an approach is discoverable, or that it’s worthwhile to search for it.
The goal is to “understand reality,”
Is that the goal? I’m not sure it is.
I think you must also believe that the idea of having a set theory to guide life, notwithstanding its flaws, is the best way to go about understanding reality.
As above, I neither agree that understanding reality is a singularly important terminal goal, nor that finding the “best theory” for achieving my goals is a particularly high-priority instrumental goal.
So, mostly, I feel like this entire comment is orthogonal to anything I actually said, and you’re putting a lot of words in my mouth here. You might do better to just articulate what you believe without trying to frame it as a reply to my comment.
(...) it suggests one would need more than rationality to truly understand reality.
I’m not sure what you mean. In such a case, rationality dictates that IFF you truly want to understand reality, you should find that “more” that is needed and use it instead of rationality. This is the rational course of action. Therefore it is rational to do that thing “instead of” doing rationality. Thus being rational means doing this thing that leads to understanding reality.
This seems to imply that if you keep recursively applying rationality to your own application of rationality, you end up finding that that which leads with highest probability to the desired goal is always rationality.
First off:
Please note that negative points to this post, or failure to respond will only provide further evidence that LW is guilty of confirmation bias. It’s sweet when you get to use cognitive biases against those who try to weed them out. (Yes, I’m trying to goad someone into answering, but only because I really want to know your answer, not because I’m trying to troll.)
This is usually considered a very bad sign and to be against community norms and/or ethics. Many people would/will downvote your comment exclusively because of the quoted paragraph. My first impulse was to do so, but I’m overriding it in favor of this response and in light of the rest of your comment, which seems like a habit of reasoning to be strongly encouraged, regardless of other things I’ll get to in a minute.
So, first, before any productive discussion of this can be done (edit: from my end, at least), I have to be reasonably confident that you’ve read and understood “What Do We Mean By “Rationality”?”, which establishes as two separate functions what I believe you’re referring to when you say “Rationality as a (near-)universal theory on decision-making.”
Alright. Now, assuming you understand the point of that post and the content of “rationality”, could you help me pinpoint your exact question? To me, “How has Rationality confronted its most painful weaknesses?” and “What are rationality’s weak points?” are incoherent questions—they seem Mysterious—to the same extent that one could ask the same questions of thinking, of existence, of souls, of the Peano Axioms, or of basically anything for which more context is needed before the questions can be properly computed.
If you’re trying to question the usefulness of the function “be instrumentally rational”, then the most salient weakness is that it is theoretically possible that a human could attempt to be instrumentally rational and end up applying it inexactly or inefficiently, waste time, not recurse to a high enough stack, or make a slew of other mistakes.
The second most important is that sometimes, even a human properly applying the principles of instrumental rationality will find out that their values are more easily fulfilled by doing something else and not applying instrumental rationality. At that point, because they are applying instrumental rationality and the function “be instrumentally rational” is a polymorphic function, the next instrumentally rational thing to do is to stop being instrumentally rational, since that is what maximizes “winning”, which (as described in the first link above) is what instrumental rationality strives for. In this case, if you were already doing the other thing that maximizes value, using instrumental rationality in the first place could be considered an opportunity-cost virus, since it consumed time, mental energy, and possibly other resources in a quest to figure out that you shouldn’t have done this.
However, if you look at the odds using the tools at your disposal, it seems extremely unlikely that being rational is less efficient at achieving values than other strategies, since optimizing for expected utility, over all possible strategies in all possible worlds, is mathematically the strategy most likely to achieve optimal utility. This sounds like a trivial theorem that follows from standard Peano axioms, but I don’t recall seeing any example of this particular statement being formalized like that.
By simple probability axioms, it is even more unlikely that what you’re already doing is better than applying instrumental rationality and discovering whatever non-rational strategy is actually optimal for your values—let alone better than the expected utility of instrumental rationality itself, weighing the high probability that it is optimal against the low probability that it merely points you to some other, non-rational optimal strategy.
Basically, it seems like the only relevant weaknesses of applied instrumental rationality are: computational (in)tractability; the unlikely chance that some non-expected-winning-maximizing strategy might actually be better for maximizing winning (which can’t be known reliably in advance anyway, unless you happen to defy all probability and by hypothesis already contain true knowledge of the true optimal strategy for the agent your mind implements); and some difficulties or risks during implementation by us humans, as a result of bugs and inefficiencies in human hardware.
When this is applied in a meta manner—where you rationally attempt to choose which strategies to use, instead of applying a naive version of rationality, as in many of the ways described in the Sequences on LessWrong—then, as per Bayesian updating and the tools available to us, this seems to be probabilistically the most effective possible strategy for human hardware. Which means that on a statistical level, the only weakness of instrumental rationality is that it’s hard to understand correctly, hard to actually implement, and hard to apply. The other responses to your comment have more details on the many ways human hardware can fail to be optimal at this, or have/cause various important problems.
Well, here are my assessments of rationality’s weakest points, from what I have read on Less Wrong so far. (That means some of these say “Rationality” where “Less Wrong” may be the better term, which could be a crippling flaw in my list of weaknesses.) It sounds like you may be looking for something like these:
1: Several aspects of rationality require what appears to be a significant amount of math and philosophy to get right. If you don’t understand that math or the philosophy, you aren’t really being a rationalist; you’re more just following the lead of other rationalists because they seem smart, or possibly because you liked their books, and possibly cheerleading them on on occasion. Rationality needs more simple explanations that can be easily followed by people who are not in the top 1% of brain processing.
2: Rationality also requires quick explanations, and some people who study rationality realize this: when attempting to automate decision theory, “How do we make a computer that doesn’t turn the entire universe into more computer just to be absolutely sure that 2+2 is 4?” is considered a substantial problem. Quick answers tend to be eschewed in favor of ever more levels of clarity, which take an increasingly large amount of time, and even when confronted with the obvious problem that going for nothing but clarity causes, Rationalists consider it something that requires even more research into how to be clear.
3: Rationality decision problems really don’t rise above the level of religion. Consider that in many rationality decision problems, the first thing that Rationalists do is presuppose “Omega”, who is essentially “God” with the serial numbers filed off. Infinite (or extremely high) utility and disutility are thrown around like so many parallels of Heaven and Hell. This makes a lot of rationality problems the kind of thing that those boring philosophers of the past (whom Rationalists are so quick to eschew) have discussed ad nauseam.
It’s hard to really grasp the scope of these problems (I’m one of those people from part 1 who doesn’t quite get some of the mathier bits sometimes), and I’m not sure any of them are fatal to rationality as a decision-making method, since I still read the site and consider it trustworthy. But if you were going to look for weak points, you could start at any of these.
Does that help?
Leaving aside the rest of this comment, please note that in many cases we throw around large numbers and high probabilities in order to break fragile systems in an obvious way; those systems wouldn’t break as obviously if we threw small numbers and middling probabilities at them.
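For instance, with made-up numbers:

```python
# Made-up numbers, in the spirit of the point above: a naive
# expected-utility comparison that a huge stated payoff breaks
# in an obvious way, where a modest payoff would not.
p_true = 1e-9        # probability the outlandish claim is true
payoff = 1e15        # utility if the claim is true
cost = 5             # utility cost of complying

naive_ev = p_true * payoff - cost   # = 1e6 - 5, so "comply" dominates absurdly
modest_ev = p_true * 1e3 - cost     # = 1e-6 - 5, the fragility is barely visible
print(naive_ev, modest_ev)
```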
That makes intuitive sense to me, since I’ve worked in programming. Thanks!