This is a response to TheOtherDave—I can’t respond anymore to threads! You guys win! Crush dissent based on superficial factors that “automatically result in downvotes” and thus ignore criticism! Foolproof!
Is that [understanding reality] the goal? I’m not sure it is
As above, I neither agree that understanding reality is a singularly important terminal goal, nor that finding the “best theory” for achieving my goals is a particularly high-priority instrumental goal.
OK, sorry to put words in your mouth—what is your goal then? Is it not fair to say the goals are “understand reality” and “achieve your goals”? I’m ignoring the second because it’s personal—the first goes to a normative understanding of reality, which presumably applies equally to each of us.
Perhaps your definition is different, but my understanding is that epistemic rationality is focused on understanding reality, and it uses rational choice theory as a means to understand that reality.
(This comment is entirely about the meta-subject and your approach to this discussion, and doesn’t engage with your dialogue with TheOtherDave.)
I can’t respond anymore to threads! You guys win! Crush dissent based on superficial factors that “automatically result in downvotes” and thus ignore criticism! Foolproof!
This is, in local parlance, called a Fully General Counterargument. It does not engage with the arguments we present at all, does not present any evidence that its claim might be true, but applies optimized sophistry to convince an audience that its claim is true and the alternatives untrue.
The response blocker is an anti-troll functionality, and does more good than harm to the epistemic hygiene of the community (as far as I can tell).
Dissent is not crushed—if the community norms are respected, even very contrarian arguments can be massively upvoted. However, this usually requires more research, evidence and justification than non-contrarian arguments, because, according to the knowledge we have, an opinion that disagrees with us starts with a lower prior credibility, and this prior needs more evidence to be brought up to the same level of credibility as arguments that the community is neutral or positive about.
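To make the “lower prior needs more evidence” point concrete, here is a minimal sketch of the odds form of Bayes’ theorem; the priors and likelihood ratios are made-up numbers for illustration, not anything specific to LW or its moderation.

```python
def posterior_probability(prior, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical priors: a claim the community is neutral about vs. a contrarian claim.
neutral_prior = 0.50
contrarian_prior = 0.10

# The same evidence (a 9:1 likelihood ratio) lands the two claims in very different places:
print(posterior_probability(neutral_prior, 9))      # ~0.90
print(posterior_probability(contrarian_prior, 9))   # ~0.50

# The contrarian claim needs roughly 9x stronger evidence to reach the same ~0.90:
print(posterior_probability(contrarian_prior, 81))  # ~0.90
```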
We¹ understand that it can be frustrating for someone who really wants to discuss and is interested to be blocked off like this, but this also seems to do double duty as a filter for new users. New users who cannot muster the patience to deal with this issue are very unlikely to be mature and respectful enough to participate productively on LessWrong, since many of the relevant behaviors do correlate.
The best way “around” the block that prevents you from responding to comments is to PM users directly. If something you want to say is of public interest, it is usually recommended to ask a more neutral participant in the discussion, or someone you believe will represent and transmit your message well, to post it for you. Some users have even experimented a bit with this in the past and shown that changing the username that posts something does change the way even LW users will read and interpret the content (there are many reasons why this is not always a bad thing).
Overall, when you want to criticize LW ideas, we expect you to have thought about it for a reasonably long time (proportionally to how much others on LW have already thought about it); we expect some evidence to be presented, because if most LWers don’t believe the claim, this is Bayesian evidence that it is not worth believing; and we expect you to use terms and concepts that are close to the ones we use, or to present evidence that the words and concepts we use for something are not adequate and that you have more appropriate suggestions.
However, as it is, your criticism doesn’t seem to offer any evidence-based claims, your questions seem poorly defined and tainted with confusion, and your attitude is providing strong evidence that you are not willing to update on evidence or engage in any sort of rational and useful discourse. I had great difficulty writing my previous response because I was attempting to meet you as close as possible to your concepts and terminology rather than start from the LessWrong common ground and local jargon, since it seemed unlikely that simply phrasing it in my own standard words would have fared any better than what I assume you’ve already read.
¹ For as much of LessWrong as I can speak for, which is probably not much—I’m a relatively recent user and I have made no major contributions that I’m aware of. This applies to each time I use “we” in this comment.
DeFranker, thanks for the detailed note—I take your points, they are reasonable and fair, but I want to share a different perspective.
The problem I’m having is that I’m not actually presenting any arguments as “correct” or saying any of you people are wrong. An observation or statement made for the sake of discussion does not mean there is a conclusory judgment attached to it. Now, to the extent that you say I need to have a better understanding to make dissenting points, fair, but all I want to know is what the weakest arguments against rationality are, and to question what relevance those weaknesses, if any, have for the determination of how much time and energy to spend on rational choice theory, as opposed to another theory or no theory. This seems particularly appropriate with respect to THIS article, which asks that believers of a theory question the weakest positions of that theory, whether in application or otherwise. This is an analysis for believers to perform. Again, I’m not saying you don’t have any strong arguments to weaker positions, or that you even have weak positions—I’m asking how those who follow rationality have approached this question/issue and how they’ve disposed of it.
It would seem those who follow a theory have the greatest responsibility to consider the strongest arguments against that very theory (which is exactly why EY posted the article re: Judaism). Why is it so inappropriate to hold rationality to the same standard? I’m not presupposing an answer; I just want to know what YOUR answer is so I better understand your point of view. Perhaps your answer is “it’s obvious this theory is correct,” without more. I would be fine with that simply because you’ve answered the question—you’ve given me your perspective. Sure, I may ask additional questions, but the goal is not to be right or win some online war; the goal is to learn (my effing name is “non-expert”—you don’t have to worry about me telling you that you’re wrong, but I may question your logic/reason/etc.). I cannot learn unless I understand the perspectives of those who disagree with me.
And regarding the quoted text—yes, while I appreciate I did not follow the “culture” or norms of this site, I had looked at this site as a place for substantive answers/discussions. I’m not making a fully general counterargument—I’m simply pointing out that attacking my jokes/jabs allows you to avoid my question. Again, to be clear, I didn’t ask the question to prove you’re wrong; I’m asking the question to hear your answer!
Now, I agree with most of what you said here. However, some of it doesn’t quite parse for me, so here’s my attempt at resolving what seem like communication issues.
(...) but all I want to know is what the weakest [strongest?] arguments against rationality are (...)
This doesn’t really tell me anything about what you want to know, even assuming you mean “strongest arguments against rationality” and/or “weakest arguments for rationality”.
Arguments for something are usually coupled with a claim—they are arguments for a claim. Which specific claim are you referring to when you use the word “rationality” above? I’m not asking a trick question; I just can’t tell what you mean out of the hundreds of thousands of things you could possibly be thinking about. Sometimes an argument can also be for or against a specific technique, where the implied claim is “you should use this technique”.
To me, the phrase “arguments for and against rationality” makes as much sense as the phrase “arguments for and against art” or the phrase “arguments for and against numbers”. There’s some missing element, some missing piece of context that isn’t obvious to me and that wasn’t mentioned explicitly.
Here are some attempts at guessing what you could mean, just as an exercise for me and as points of comparison for you:
“What are the strongest arguments against using Bayesian updating to form accurate models of the world?” (i.e. The strongest arguments against the implied claim that you should use Bayesian updating when you want to form accurate models of the world—this is the standard pattern.)
“What are the strongest arguments against the claim that forming accurate models of the world is useful towards achieving your goals?”
“What are the strongest arguments against the claim that forming accurate models of the world is useful to me?”
“What are the strongest arguments against the use of evidence to decide on which beliefs to believe?”
“What are the strongest arguments against the usefulness or accuracy of probabilities in general as opposed to human intuition?”
“What are the strongest arguments against the claim that humans have anything resembling a utility function, desires, or values?”
“What are the strongest arguments that choosing the action with highest expected utility is not the best (most optimal) way to achieve human values?”
“What are the strongest arguments against the claim that calculating expected utility is not (always) a waste of time?”
“What are the strongest arguments against the claim that anything can even be truly known or understood by humans?”
“What are the strongest arguments that if nothing can be truly known, it is meaningless to attempt to be less wrong?”
“What are the strongest arguments against the best way to achieve a goal being the best way to achieve that goal?” (yes, I know exactly how this looks/sounds)
“On LW rationality is sometimes referred to as ‘winning’. What is the evidence against the claim that humans want to win in the first place?”
“What are the strongest arguments against the idea that human values make any sense and can ever be approximated, let alone known?”
“What are the strongest arguments against the claim that taking actions will limit the possible future states of the world?”
“What are the strongest arguments against the claim that limiting the possible future states of the world can help achieve your goals and fulfill your values?”
“What are the strongest arguments against humans being able to limit possible future states of the world to the right future possible states that will achieve their goals?”
Feel free to pick any of the above reductions (more than one if need be) as a starting point for further analysis and information exchange, or preferably form your own more precise question by comparing your internal question to the above. Hopefully this’ll help clarify exactly what you’re asking us.
DeFranker—many thanks for taking the time, very helpful.
I spent last night thinking about this, and now I understand your (LW’s) points better and my own. To start, I think the ideas of epistemic rationality and instrumental rationality are unassailable as ideas—there are few things that make as much sense as the ideas of what rationality is trying to do, in the abstract.
But, when we say “rationality” is a good idea, I want to understand two fundamental things: in what context does rationality apply, and where it applies, what methodologies, if any, apply to actually practice it? I don’t presuppose any answers to the above—at the same time, I don’t want to “practice rationality” unless or until I understand how those two questions are answered or dealt with (I appreciate it’s not your responsibility to answer them; I’m just expressing them as things I’m considering).
“Weaknesses” of rationality is not an appropriate question—I now understand the visceral reaction. However, by putting rationality in context, one can better understand its usefulness from a practical perspective. Any lack of usefulness or lack of applicability would be the “weakness/criticism” I was asking about, but upon reflection, I get to the same place by talking about context.
Let me step back a bit to explain why I think these questions are relevant. We all know the phrase “context matters” in the abstract—I would argue that epistemic rationality, in the abstract, is relevant for instrumental rationality because if our model of the world is incorrect, the manner in which we choose to reach our goals in that world will be affected. All I’m really saying here is that “context matters.” Now, while most agree that context matters with respect to decision making, there’s an open question as to what context actually matters. So, there is always a potential debate regarding whether the world is understood well enough, and to the extent necessary, to successfully practice instrumental rationality—this is clearly a relative/subjective determination.
With that in mind, any attempt to apply instrumental rationality would require some thought about epistemic rationality, and whether my map is sufficient to make a decision. Does rationality, as it is currently practiced, offer any guidance on this? Let’s pretend the answer is no—that’s fine, but then that’s a potential “flaw” in rationality, or a hole where rationality alone does not help with an open issue/question that is relevant.
I’m not trying to knock rationality, but I’m not willing to coddle it and pretend it’s all there is to know if that comes at the cost of minimizing knowledge.
If you want to start a discussion about the weaknesses of rationality based on the assumption that understanding reality is the correct thing to value, I recommend you just do that.
Asking me what my goals are in the context of insisting that my goals ought to be to understand reality just confuses the issue. Coupled with your insistence that you’re just asking questions, and all this talk about winning and crushing dissent and whatnot, the impression I’m left with is that you’re primarily interested in winning an argument, and not being entirely honest about your motives.
No—I’m not saying your goals ought to be anything, and I’m not trying to win an argument, but I appreciate that you will interpret my motives as you see fit.
Let me try this differently—there is an idea on LW that rationality is a “good” way to go about thinking [NOTE: correct me if I’m wrong]. By rationality, I mean exactly what is listed here:
Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.
Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.
My question relates to putting these two ideas/points into context, but with more of a focus on epistemic rationality (because it seems you need to know the world, i.e. the context, in which you’re making decisions before you apply instrumental rationality). Is epistemic rationality practiced through a methodology (probability theory, decision theory, something else?), or is the description above just an idea that is to be applied generically, e.g. by just taking into account cognitive biases? If it’s just a description of an idea, then does that mean you cannot really “apply” it; you more just try to keep the general tenets in mind when thinking about things?
If there’s a methodology (or multiple) to be used to practice epistemic rationality, does that methodology (or do those methodologies) apply to help understand all aspects of “reality” (again, keying off EY’s definition)? [NOTE: It seems reality, if it could be understood, would mean the broadest understanding of who we are, why we are here, and how our world works day-to-day. Is LW using a different definition of reality?] If more than one methodology could apply depending on the situation, how do you distinguish between those methodologies?
If the “chosen” methodology (or methodologies) for epistemic rationality is NOT appropriate for certain decisions, what alternatives are to be used? Also, how do you describe the distinction between the decisions for which the chosen methodology works and those decisions for which it does not?
To be clear, I’m asking to get context for how rationality fits within the larger picture of the universe, including all of its uncertainty. I realize you may not have answers to all these questions and that there may not be consensus about any of it—that’s more than fine, since all I’m looking for is responses; I don’t care what they actually are. For example, you or others may make certain assumptions for certain of the questions to make necessary simplifications, etc. All of that is fine; I just think the questions need to be considered before you can credibly apply (or seek to apply) rationality, and I want to see if you’ve thought about them and, if so, how you’ve handled them. If I’m being unreasonable or missing something with my questions, so be it, but I’d be interested in your thoughts.
A lot depends on how broad a brush I understand the word “methodology” to cover, but if I’m correctly understanding what you mean by the term, no, there’s no particular methodology for how to practice epistemic rationality; it’s more like what you refer to as “trying to keep the general tenets in mind while thinking about things”.
That said, I suppose there are common practices you could call endorsed methodologies if you were in the mood.
For example, attaching confidence intervals to estimates and predictions is a practice you’ll see a lot around here, with the implied (though not formalized) associated practice of comparing those estimates/predictions with later measurements, and treating an underconfident accurate prediction as a failure of prediction (that is, an event that ought to trigger recalibration).
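As a rough illustration of that practice (the track record and the bucketing here are hypothetical, not any actual LW tooling), one could tabulate past predictions and compare stated confidence with observed frequency:

```python
from collections import defaultdict

# Hypothetical track record: (stated probability, whether the prediction came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, True), (0.6, False), (0.6, False),
]

# Group predictions by stated confidence level.
buckets = defaultdict(list)
for stated_p, came_true in predictions:
    buckets[stated_p].append(came_true)

# Compare stated confidence with the observed frequency of being right.
for stated_p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    gap = observed - stated_p
    # A large positive gap means underconfidence (right more often than claimed),
    # a large negative gap means overconfidence; either should trigger recalibration.
    print(f"stated {stated_p:.0%}, observed {observed:.0%}, gap {gap:+.0%}")
```

Treating a big gap in either direction as a trigger for recalibration matches the point above that an underconfident accurate prediction also counts as a failure of prediction.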
Great, thanks, this is helpful. Is the answer to the above questions, as far as you practice rationality, the same for instrumental rationality? Is it an idea, but no real methodology? In my mind it would seem decision theory could be a methodology by which someone could practice instrumental rationality. To the extent it is, the above questions remain relevant (only in the sense that they should be considered).
I now have an appreciation of your point—I can definitely see how the question “what are the flaws with epistemic rationality” could be viewed as a meaningless question. I was thinking about epistemic rationality as more than just an idea—an idea WITH a methodology. Clearly the idea is unassailable (in my mind anyway), but methodologies (whether for rationality or some other purpose) could at least in concept have flaws, or perhaps flaws in that they cannot be applied universally—it was this that I was asking about.
Interestingly, your response raises a different question. If epistemic rationality is an idea, and not a methodology, then rationality (as it is discussed here) leaves open the possibility that there could be a methodology that may apply to, or help with, practicing epistemic rationality (i.e. consistent with the IDEA of rationality, but a methodology by which you can practice it).
As I think most appreciate, ideas (not necessarily with respect to rationality, but generally) suffer from the fact that they are general, and don’t give a user a sense of “what to do”—obviously, getting your map to match reality is not an easy task, so methodologies for epistemic rationality in the abstract could be helpful so as to put the idea into practice.
This is particularly important if you’re practicing instrumental rationality—this type of rationality is practiced “in the world,” so having an accurate (or accurate enough) model is seemingly important to ensure that the manner in which you practice instrumental rationality makes sense.
Thus, a possible shortcoming of instrumental rationality could be that it depends on epistemic rationality, but because there isn’t a clear answer to the question of “what is real,” instrumental rationality is limited by the extent to which our beliefs regarding “what is real” are actually correct. You could say that instrumental rationality, depending on the circumstances, does not require a COMPLETE understanding of the world, and so my observation, even if fair, must be applied on a sliding scale.
Agreed that it’s a lot easier to talk about flaws in specific methodologies than flaws in broad goals.
Agreed that a decision theory is a methodology by which someone could practice instrumental rationality, and there’s a fair amount of talk around here about what kinds of decision theories are best in what kinds of scenarios. Most of it goes over my head; I don’t really know what it would mean to apply the different decision theories that get talked about here to real-world situations.
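For what treating a decision theory as a methodology could look like in the very simplest case, here is a sketch of plain expected-utility maximization over a made-up choice; the actions, probabilities, and utilities are invented for illustration, and this is not any of the specific decision theories discussed here.

```python
# All actions, probabilities, and utilities here are hypothetical.
actions = {
    "take umbrella":  [(0.3, 8), (0.7, 6)],   # (p(rain), utility), (p(no rain), utility)
    "leave umbrella": [(0.3, 0), (0.7, 9)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))

# The naive "decision theory as methodology": pick the action with the highest expectation.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print("chosen:", best)   # take umbrella (6.6 vs 6.3)
```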
Agreed that there could be a methodology that may apply/help with practicing epistemic rationality. Or many of them.
Agreed that in the absence of complete information about the world, our ability to maximize expected value will always be constrained, and that this is a shortcoming of instrumental rationality viewed in isolation. (Not so much when compared to alternatives, since all the alternatives have the same shortcoming.)