Now, I agree with most of what you said here. However, some of it doesn’t quite parse for me, so here’s my attempt at resolving what seem like communication issues.
(...) but all I want to know is what the weakest [strongest?] arguments against rationality are (...)
This doesn’t really tell me anything about what you want to know, even assuming you mean “strongest arguments against rationality” and/or “weakest arguments for rationality”.
Arguments for something are usually coupled with a claim: they are arguments for a claim. Which specific claim are you referring to when you use the word “rationality” above? I’m not asking a trick question; I just can’t tell what you mean out of the hundreds of thousands of things you could possibly be thinking of. Sometimes, it could also be for or against a specific technique, where the implied claim is “you should use this technique”.
To me, the phrase “arguments for and against rationality” makes as much sense as the phrase “arguments for and against art” or the phrase “arguments for and against numbers”. There’s some missing element, some missing piece of context that isn’t obvious to me and that wasn’t mentioned explicitly.
Here are some attempts at guessing what you could mean, just as an exercise for me and as points of comparison for you:
“What are the strongest arguments against using Bayesian updating to form accurate models of the world?” (i.e. the strongest arguments against the implied claim that you should use Bayesian updating when you want to form accurate models of the world; this is the standard pattern. For what one such update looks like, see the sketch after this list.)
“What are the strongest arguments against the claim that forming accurate models of the world is useful for achieving your goals?”
“What are the strongest arguments against the claim that forming accurate models of the world is useful to me?”
“What are the strongest arguments against the use of evidence to decide which beliefs to hold?”
“What are the strongest arguments against the usefulness or accuracy of probabilities in general as opposed to human intuition?”
“What are the strongest arguments against the claim that humans have anything resembling a utility function, desires, or values?”
“What are the strongest arguments that choosing the action with the highest expected utility is not the best way to achieve human values?”
“What are the strongest arguments that calculating expected utility is (at least sometimes) a waste of time?”
“What are the strongest arguments against the claim that anything can even be truly known or understood by humans?”
“What are the strongest arguments that if nothing can be truly known, it is meaningless to attempt to be less wrong?”
“What are the strongest arguments against the best way to achieve a goal being the best way to achieve that goal?” (yes, I know exactly how this looks/sounds)
“On LW rationality is sometimes referred to as ‘winning’. What is the evidence against the claim that humans want to win in the first place?”
“What are the strongest arguments against the idea that human values make any sense and can ever be approximated, let alone known?”
“What are the strongest arguments against the claim that taking actions will limit the possible future states of the world?”
“What are the strongest arguments against the claim that limiting the possible future states of the world can help achieve your goals and fulfill your values?”
“What are the strongest arguments against humans being able to narrow the possible future states of the world down to the right ones, the states that will achieve their goals?”
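(Side note, since several of the questions above mention it: here is a minimal sketch of what a single step of Bayesian updating looks like. The coin hypothesis and all the numbers are made up purely for illustration.)

```python
# Minimal sketch of one Bayesian update (all numbers are illustrative).
# Hypothesis: "this coin is biased toward heads (P(heads) = 0.8)"
# versus "this coin is fair (P(heads) = 0.5)".

prior_biased = 0.1          # P(biased) before seeing any flips
p_heads_if_biased = 0.8     # P(heads | biased)
p_heads_if_fair = 0.5       # P(heads | fair)

# We observe one head. Bayes' rule:
# P(biased | heads) = P(heads | biased) * P(biased) / P(heads)
p_heads = (p_heads_if_biased * prior_biased
           + p_heads_if_fair * (1 - prior_biased))
posterior_biased = p_heads_if_biased * prior_biased / p_heads

print(f"P(biased) after one head: {posterior_biased:.3f}")  # ~0.151
```

Each new observation repeats this step with the posterior as the next prior; that repeated step is all “Bayesian updating” refers to here.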
Feel free to pick any of the above reductions (more than one if need be) as a starting point for further analysis and information exchange, or preferably form your own more precise question by comparing your internal question to the above. Hopefully this’ll help clarify exactly what you’re asking us.
DeFranker, many thanks for taking the time; this was very helpful.
I spent last night thinking about this, and now I understand both your (LW’s) points and my own better. To start, I think the ideas of epistemic rationality and instrumental rationality are unassailable as ideas; there are few things that make as much sense as what rationality is trying to do, in the abstract.
But when we say “rationality” is a good idea, I want to understand two fundamental things: in what context does rationality apply, and, where it applies, what methodologies, if any, exist for actually practicing it? I don’t presuppose any answers to these questions. At the same time, I don’t want to “practice rationality” before I understand how those two questions are answered or dealt with. (I appreciate it’s not your responsibility to answer them; I’m just expressing them as things I’m considering.)
Asking about the “weaknesses” of rationality was not an appropriate question; I now understand the visceral reaction. However, by putting rationality in context, one can better understand its usefulness from a practical perspective. Any lack of usefulness or applicability would be the “weakness/criticism” I was asking about, but upon reflection I get to the same place by talking about context.
Let me step back a bit to explain why I think these questions are relevant. We all know the phrase “context matters” in the abstract. I would argue that epistemic rationality, in the abstract, is relevant for instrumental rationality because if our model of the world is incorrect, the manner in which we choose to reach our goals in that world will be affected. All I’m really saying here is that “context matters.” Now, while most agree that context matters with respect to decision making, there’s an open question as to what context actually matters. So there is always a potential debate regarding whether the world is understood well enough, and to the extent necessary, to successfully practice instrumental rationality; this is clearly a relative/subjective determination.
With that in mind, any attempt to apply instrumental rationality would require some thought about epistemic rationality, and about whether my map is sufficient to make a decision. Does rationality, as it is currently practiced, offer any guidance on this? Let’s pretend the answer is no. That’s fine, but then that’s a potential “flaw” in rationality, or a hole where rationality alone does not help with a relevant open question.
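To make “is my map sufficient to make a decision?” concrete, here is one purely illustrative sketch; the actions, models, and numbers are hypothetical, and I’m not claiming this is how LW practice answers the question. The idea: if every model of the world I consider plausible recommends the same action, then the remaining uncertainty in my map doesn’t matter for that particular choice.

```python
# Hedged sketch: when is a coarse map "sufficient to decide"?
# All actions, models, and numbers below are hypothetical.

# Expected utility of each action under two candidate models of the world.
expected_utility = {
    "optimistic_model": {"take_job": 8.0, "stay_put": 5.0},
    "pessimistic_model": {"take_job": 6.0, "stay_put": 5.0},
}

# Best action according to each candidate model.
best_per_model = {model: max(utilities, key=utilities.get)
                  for model, utilities in expected_utility.items()}

print(best_per_model)
# {'optimistic_model': 'take_job', 'pessimistic_model': 'take_job'}
# Both plausible models agree, so this map, coarse as it is, suffices
# for this particular decision. If they disagreed, that would be a
# signal to refine the map before deciding.
```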
I’m not trying to knock rationality, but I’m not willing to coddle it and pretend it’s all there is to know if that comes at the cost of minimizing knowledge.