The terminology is a bit new to me, but it seems to me epistemic and instrumental rationality are necessarily identical.
If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.
Furthermore, what kind of rationality does not pursue goals? If I have no interest in chess, and the ability to play chess will have no impact on any of my present or future goals, then it would seem to be irrational of me to learn to play chess.
Loosely speaking, epistemic and instrumental rationality are prescriptions for the two sides of the is/ought gap. While ‘ought statements’ generally need to make reference to ‘is statements’, they cannot be entirely reduced to them.
If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.
One possible goal is to have false beliefs about reality; another is to have no impact on reality. (For humans in particular, there are unquestionably some facts that are both true and harmful (i.e. instrumentally irrational) to learn.)
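To make the divergence concrete, here is a toy sketch (my own construction, not anything from the thread; the coin bias and the scoring rules are invented): an agent whose goal rewards a particular false belief will, if it reliably pursues that goal, settle on exactly the belief that truth-tracking rejects.

```python
# Toy sketch (made-up payoffs): a goal that rewards holding a false belief,
# so the epistemic optimum and the instrumental optimum come apart.

TRUE_P = 0.9  # the coin's actual bias toward heads

def epistemic_score(believed_p):
    # Truth-tracking: reward accuracy (negative squared error from the truth).
    return -(believed_p - TRUE_P) ** 2

def instrumental_score(believed_p):
    # The agent's goal is to believe the coin is fair, accuracy be damned.
    return -(believed_p - 0.5) ** 2

candidate_beliefs = [p / 100 for p in range(101)]
print(max(candidate_beliefs, key=epistemic_score))     # 0.9 -- the true bias
print(max(candidate_beliefs, key=instrumental_score))  # 0.5 -- a false belief
```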
Furthermore, what kind of rationality does not pursue goals?
Epistemic rationality.
(I assume that you mean ‘isn’t about pursuing goals.’ Otherwise, epistemic rationality might pursue the goal of matching the map to the territory.)
Please explain why this is so. Then please explain why you ought to believe this.

Perhaps some explanation is in order. (I thought it was quite a witty thought experiment, but apparently it’s not appreciated.)
If it is in principle impossible to explain why one ought to do something, then what is the function of the word “ought”? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary.
Alternatively, if it is not in principle impossible, then trivially the condition ‘ought’ (the condition of oughting?) rests entirely upon real facts about the universe, and the position of Randaly is false.
I know there is some philosophical pedigree behind this old notion, but my investigation suggests that it is not possible, under valid reasoning (without butchering the word ‘ought’), to assert that ought statements cannot be entirely reduced to is statements, and simultaneously to assert that one ought to believe this, which seems to present a dilemma.
I’m glad that Randaly explicitly chose this way of reasoning, as it is intimately linked with my interest in commenting on this post. Everyone accepts that questions relating to the life cycles of stars are questions of fact about the universe (questions of epistemic rationality), but the philosophical pedigree rejects the idea that questions about what is an appropriate way for a person to behave are similar (instrumental rationality) - it seems that people are somehow not part of the universe, according to this wisdom.
If it is in principle impossible to explain why one ought to do something, then what is the function of the word “ought”? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary.
People are more complicated than you’re modeling them as. People have numerous conflicting urges/desires/values/modules. Classicists would say that ‘ought’ refers to taking the virtuous action; Freudians, to the superego; Hansonians, to your far-mode. All of these groups would separately endorse the interactionist (psychology) viewpoint that ‘ought’ also refers to social pressures to take pro-social actions.
(On a side note: it is completely possible to explain why one ought to do something; it merely requires that a specific morality be taken as a given. In practice, all humans’ morality tends to be similar, especially in the same culture; and since our morality is not exactly like a utility function, in so far as it has conflicting, non-introspectively available and changing parts, moral debate is still possible.)
simultaneously to assert that one ought to believe this
Well, yes, one would need the additional claim that one ought to believe the truth. Among humans, for specific cases, this usually goes without saying.
Everyone accepts that questions relating to the life cycles of stars are questions of fact about the universe (questions of epistemic rationality), but the philosophical pedigree rejects the idea that questions about what is an appropriate way for a person to behave are similar (instrumental rationality) - it seems that people are somehow not part of the universe, according to this wisdom.
No. Would you also argue that the universe is not part of the universe, because some people think it’s pretty and others don’t?
Here’s a brief post I wrote about tradeoffs between epistemic and instrumental rationality.

Thanks for bringing that article to my attention.

You explain how you learned skills of instrumental rationality from debating, but in doing so, you also learned reliable answers to questions of fact about the universe: how to win debates. When I’m learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn’t make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.
Regarding Kelly 2003, I’d argue that learning movie spoilers is only desirable, by definition, if it contributes to one’s goals. If it is not desirable, then I contend that it isn’t rational, in any way.
Regarding Bostrom 2011, you say he demonstrates that, “a more accurate model of the world can be hazardous to various instrumental objectives.” I absolutely agree. But if we have reliable reasons to expect that some knowledge would be dangerous, then it is not rational to seek this knowledge.
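This can be put in simple decision-theoretic terms. A minimal sketch (the numbers are invented, and the model is my illustration rather than anything from Kelly 2003 or Bostrom 2011): when utility depends directly on not knowing a fact, as with spoilers, the expected value of learning that fact is negative, and the goal-directed choice is to decline to look.

```python
# Toy spoiler model (made-up numbers): enjoyment of a film depends directly
# on not knowing the ending, so this true fact is harmful to learn.
ENJOYMENT_UNSPOILED = 10.0
ENJOYMENT_SPOILED = 4.0
CURIOSITY_RELIEF = 1.0  # small immediate payoff from reading the spoiler

def expected_value(read_spoiler: bool) -> float:
    if read_spoiler:
        return CURIOSITY_RELIEF + ENJOYMENT_SPOILED
    return ENJOYMENT_UNSPOILED

# Learning the true fact strictly loses on the agent's own goals:
assert expected_value(read_spoiler=True) < expected_value(read_spoiler=False)
```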
Thus, I’m inclined to reject your conclusion that epistemic and instrumental rationality can come into conflict, and to reject the proposition that they are different.
(I note that whoever wrote the wiki entry on rationality was quite careful, writing
Epistemic rationality is that part of rationality which involves achieving accurate beliefs about the world.
The use of “involves” instead of e.g. “consists entirely of” is crucial, as the latter would not normally describe a part of rationality.)
When I’m learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn’t make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.
In a vacuum, this is certainly true and in fact I agree with all of your points. But I believe that human cognitive biases make this sort of compartmentalization between mental skillsets more difficult than one might otherwise expect. As the old saying goes, “To a man with a hammer, everything looks like a nail.”
It would be fair to say that I believe tradeoffs between epistemic and instrumental rationality exist only thanks to quirks in human reasoning—however, I also believe that we need to take those quirks into account.