A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.
The critics' claim: the various theories of rationality make differing assertions about reality and, thus, differing predictions of experience.
Their conclusion: convergence on objective truth is impossible, and rationality is subjective (a conclusion I think is false).
I think this problem is analogous to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to the one proposed by Bostrom and Ord make sense here? I am sure this problem has been discussed on LessWrong or elsewhere. Please direct me to where I can learn more about this!
I would like to improve my argument against the aforementioned conclusion. I would like to understand this problem.
An unrelated musing: Improving arguments for a particular side is dangerous, but I think a safe alternative is improving gears for a particular theory. The difference is that refinement of a theory is capable of changing its predictions in unanticipated ways. This can well rob it of credence as it’s balanced against other theories through prediction of known facts.
In another way, gears more directly influence understanding of what a theory says and predicts, the internal hypothetical picture, not its credence, the relation of the theory to reality. So they can be a safe enough distance above the bottom line not to be mangled by it, and have the potential to force it to change, even if it’s essentially written down in advance.
I should have worded that last sentence differently. I agree with you that the way I phrased it sounds like I have written at the bottom of my sheet of paper ¬Conclusion.
I am interested in a solution to the problem. There exist several theories of epistemology and decision theory, and we do not know which is “right.” Would a parliamentary approach solve this problem?
This is not an answer to my question but a follow-up elaboration.
This quote by Jonathan Rauch from The Constitution of Knowledge attempts to address this problem:
Francis Bacon and his followers said that scientific inquiry is characterized by experimentation; logical positivists, that it is characterized by verification; Karl Popper and his followers, by falsification. All of them were right some of the time, but not always. The better generalization, perhaps the only one broad enough to capture most of what reality-based inquirers do, is that liberal science is characterized by orderly, decentralized, and impersonal social adjudication. Can the marketplace of persuasion reach some sort of stable conclusion about a proposition, or tackle it in an organized, consensual way? If so, the proposition is grist for the reality-based community, whether or not a clear consensus is reached.
However, I don’t find it satisfying. Rauch focuses on persuasion and ignores explanatory power. It reminds me of this claim from The Enigma of Reason:
Whereas reason is commonly viewed as a superior means to think better on one’s own, we argue that it is mainly used in our interactions with others. We produce reasons in order to justify our thoughts and actions to others and to produce arguments to convince others to think and act as we suggest.
I will stake a strong claim: lasting persuasion is the byproduct of good explanations. Assertions that achieve better map-territory convergence or are more effective at achieving goals tend to be more persuasive in the long run. Galileo’s claim that the Earth moved around the Sun was not persuasive in his day. Still, it has achieved lasting persuasion because it is a map that reflects the territory more accurately than preceding theories.
It might very well be the case that the competing theories of rationality all boil down to Bayesian optimality, i.e., generating hypotheses and updating the map based on evidence. However, not everyone is satisfied with that theory. I keep seeing the argument that rationality is subjective because there isn’t a single theory, and therefore convergence on a shared understanding of reality is impossible.
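To make the “updating the map based on evidence” idea concrete, here is a minimal sketch of a Bayesian update over competing hypotheses. The hypotheses, priors, and likelihoods are purely illustrative placeholders, not anything asserted in the discussion above:

```python
# Illustrative sketch: credences over competing hypotheses are
# re-weighted by how well each one predicted the observed evidence.
# All numbers here are hypothetical.

priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# P(evidence | hypothesis): how strongly each hypothesis
# predicted the evidence that was actually observed.
likelihoods = {"H1": 0.1, "H2": 0.6, "H3": 0.4}

# Bayes' rule: posterior ∝ prior × likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}
```

Note how H2, despite a lower prior than H1, ends up with the highest credence because it predicted the evidence most strongly; this is the convergence mechanism the subjectivity argument claims cannot exist.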
A parliamentary model, with delegates allocated to the competing theories in proportion to some metric (e.g., track record of prediction accuracy?), explicitly asserts that rationality is not dogmatic: rationality is not contingent on the existence of a single, ultimate theory. This way, the aforementioned arguments against rationality dissolve in their own contradictions.
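The parliamentary mechanism sketched above can be made concrete as a weighted vote. This is a toy sketch under stated assumptions: the theory names, accuracy figures, and the simple proportional-weight rule are all hypothetical, and it deliberately omits the bargaining dynamics of the full Bostrom–Ord proposal:

```python
# Toy sketch of a parliamentary model: each theory of rationality gets
# voting weight proportional to its (hypothetical) track record of
# prediction accuracy, and a proposition is adjudicated by weighted vote.

track_record = {"bayesian": 0.9, "frequentist": 0.7, "heuristics": 0.6}

# Allocate delegate weight in proportion to track record.
total_accuracy = sum(track_record.values())
weights = {t: a / total_accuracy for t, a in track_record.items()}

def weighted_yes_share(votes):
    """Return the weighted share of 'yes' votes on a proposition.

    votes maps each theory to True (endorses) or False (rejects).
    """
    return sum(weights[t] for t, endorses in votes.items() if endorses)

# Two theories endorse the proposition; their combined weight decides.
share = weighted_yes_share(
    {"bayesian": True, "frequentist": False, "heuristics": True}
)
passed = share > 0.5
```

No single theory is privileged here: the outcome depends only on aggregate performance, which is the sense in which the model makes rationality non-dogmatic.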
Rationality is the quality of ingredients of cognition that work well. As long as we don’t have cognition figured out, including sufficiently general formal agents based on decision theory that’s at the very least not in total disarray, there is also no clear notion of rationality. There’s only the open problem of what it should be, some conjectures as to the shape it might take, and particular examples of cognitive tools that seem to work.
Thank you for the thoughtful response, Vladimir.