The circular problem of epistemic irresponsibility

Problem statement

The problem of epistemic irresponsibility, as I will call it in this post, is that people sometimes agree that their reasoning (and hence the models that they hold and use as the basis of their actions) is irrational or illogical. For example, people knowingly use theories that have already been refuted, such as astrology. Yet they fail to act on this knowledge and don’t update their models and their action plans accordingly.
A “softer” version of this problem arises when people agree that they don’t know whether their reasoning is rational, either because they didn’t seek any evidence or try to come up with arguments refuting their reasoning or models, or because they don’t know how to determine whether their theory is true or false, i.e., they lack the necessary epistemological apparatus (though such people would seldom think in these terms). Then, again, people fail to act on this knowledge. Examples include people’s uncritical perception of news in the media or of claims made by politicians, and, to some degree, unsound scientific practices such as those described in Richard Feynman’s “Cargo Cult Science”.
The problem of epistemic irresponsibility, or rather its inverse, epistemic responsibility, belongs to the discipline of normative logic, i.e., the “pre-Frege” logic, as John Sowa highlighted here:
The three kinds of value judgments are Beauty, Goodness, and Truth. They determine the three kinds of normative science: Aesthetics, Ethics, and Normative logic. Peirce equated normative logic with logic as semiotic. But all sciences, including the normative sciences, depend on mathematics and mathematical logic (AKA formal logic).
All empirical sciences, including the normative sciences, depend on phenomenology for the analysis and interpretation of perception. The three parts of normative logic (AKA logic as semiotic) are (1) Critic, which is formal logic; (2) Grammar; and (3) Methodeutic, which is Peirce’s name for the methodology of science.
All these issues were discussed and analyzed in detail by Aristotle, debated for centuries by the Greeks, Romans, and Arabs, and developed to a high level of sophistication by the medieval Scholastics. The books called “logic” from the 13th to the 19th centuries discussed all these issues. But the 20th c. logicians ignored all but the formal logic. They did a lot of good work on logic, but they also lost a great deal.
That is why I said that they wasted too much time studying Frege—who ignored everything except the formal part.
Epistemic irresponsibility keeps people from learning normative logic and rationality
In the context of rationality education, such as at the Center for Applied Rationality or other programs, epistemic irresponsibility is an obstacle to people deciding that they need rationality education at all, and therefore to reading something about epistemology, logic, or rationality, taking courses, etc.
Here, the problem is somewhat circular, because acting on the fact that one doesn’t know normative logic is like already using normative logic!
The practical implication is that it’s hard to convince most people to read something about normative logic and rationality: at best, such reading will be very low on their list of things to study. Their responses would typically be: “I’m a successful person already, why should I spend time on it?”, or “I don’t have time for it, I would rather take that other course which teaches how to program or earn money on the internet.”
Thus, epistemic irresponsibility presents a challenge for spreading logic, rationality, and good science (such as physics) rather than bad science (such as astrology) more generally in the population.
Recognition in the literature, and related ideas

First, I derive the name of the problem, “epistemic irresponsibility”, from this passage by Richard Hamming in The Art of Doing Science and Engineering:

Clearly the role of humans will be quite different from what it has traditionally been, but many of you will insist on old theories you were taught long ago as if they would be automatically true in the long future. It will be the same in business, much of what is now taught is based on the past, and has ignored the computer revolution and our responses to some of the evils the revolution has brought; the gains are generally clear to management, the evils are less so.
How much the trends, predicted in part 6 above [some trends in engineering, induced by the advent of computers], toward and away from micromanagement will apply widely and is again a topic best left to you—but you will be a fool if you do not give it your deep and constant attention. I suggest you must rethink everything you ever learned on the subject, question every successful doctrine from the past, and finally decide for yourself its future applicability. The Buddha told his disciples, “Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense”. I say the same to you—you must assume the responsibility for what you believe [author’s emphasis].
Second, Mortimer Adler in How to Read a Book:

The Essence of Active Reading: The Four Basic Questions a Reader Asks […] 1. What is the book about as a whole? […] 2. What is being said in detail, and how? […] 3. Is the book true, in whole or part? […] 4. What of it? If the book has given you information, you must ask about its significance. Why does the author think it is important to know these things? Is it important to you to know them? And if the book has not only informed you, but also enlightened you, it is necessary to seek further enlightenment by asking what else follows, what is further implied or suggested. […]
[…] The last question—What of it?—is probably the most important one in syntopical reading. Naturally, you will have to answer the first three questions before attempting the final one.
Knowing what the four questions are is not enough. You must remember to ask them as you read. The habit of doing that is the mark of a demanding reader. More than that, you must know how to answer them precisely and accurately. The trained ability to do that is the art of reading.
And then, in another place in the book:
The fourth question, What of it?, is changed most of all. If, after reading a theoretical book, your view of its subject matter is altered more or less, then you are required to make some adjustments in your general view of things. (If no adjustments are called for, then you cannot have learned much, if anything, from the book.) But these adjustments need not be earth-shaking, and above all they do not necessarily imply action on your part.
Agreement with a practical book, however, does imply action on your part. If you are convinced or persuaded by the author that the ends he proposes are worthy, and if you are further convinced or persuaded that the means he recommends are likely to achieve those ends, then it is hard to see how you can refuse to act in the way the author wishes you to.
We recognize, of course, that this does not always happen. But we want you to realize what it means when it does not. It means that despite his apparent agreement with the author’s ends and acceptance of his means, the reader really does not agree or accept. If he did both, he could not reasonably fail to act. Let us give an example of what we mean. If, after completing Part Two of this book, you (1) agreed that reading analytically is worthwhile, and (2) accepted the rules of reading as essentially supportive of that aim, then you must have begun to try to read in the manner we have described. If you did not, it is not just because you were lazy or tired. It is because you did not really mean either (1) or (2). […]
This, of course, is not primarily a reading problem but rather a psychological one. Nevertheless, the psychological fact has bearing on how effectively we read a practical book, and so we have discussed the matter here.
Third, David Deutsch’s idea of “taking theories which haven’t yet been refuted seriously” is a major strand in his book The Fabric of Reality. This is the very beginning of the book:
Dedicated to the memory of Karl Popper, Hugh Everett and Alan Turing, and to Richard Dawkins. This book takes their ideas seriously.
Preface

If there is a single motivation for the world-view set out in this book, it is that thanks largely to a succession of extraordinary scientific discoveries, we now possess some extremely deep theories about the structure of reality. If we are to understand the world on more than a superficial level, it must be through those theories and through reason, and not through our preconceptions, received opinion or even common sense. Our best theories are not only truer than common sense, they make far more sense than common sense does. We must take them seriously [emphasis mine — Roman Leventov], not merely as pragmatic foundations for their respective fields but as explanations of the world.
Fourth, a somewhat related idea is Nassim Taleb’s epistemology of risk-taking. In Antifragile, he wrote “You gotta do so if you have an opinion. Live what you say.” He also says that “true virtue is risk-taking”. The idea is that actors with wrong ideas will “die”, metaphorically or literally, and thus good, robust knowledge will survive through the evolution of the actors carrying these ideas: people, companies, and countries.
The source of the problem is the energetic cost of updating mental and embodied models
Here, I propose an explanation of epistemic irresponsibility in terms of Active Inference.
The expected free energy formula includes only the “information gain” associated with the basic discriminative model and the “pragmatic value” associated with the basic generative model of the agent. An Active Inference agent strives to minimise the expected free energy through their model updates and actions.
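For reference, a commonly used decomposition of the expected free energy of a policy $\pi$ looks like this (notation loosely follows the Active Inference textbook by Parr, Pezzulo, and Friston; exact conventions vary between papers), with the two terms mentioned above:

$$G(\pi) \;=\; \underbrace{-\,\mathbb{E}_{q(o, s \mid \pi)}\!\left[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\right]}_{-\,\text{information gain (epistemic value)}} \;\underbrace{-\;\mathbb{E}_{q(o \mid \pi)}\!\left[\ln p(o \mid C)\right]}_{-\,\text{pragmatic value}}$$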
However, if the expected free energy formula is elaborated to cover the model hierarchy and to capture model learning (a.k.a. meta-learning), it should include extra terms. First, it should include the information gain from updates of the model parameters (or the model structure): see chapter 7, section 7.5, “Learning and Novelty”, in Active Inference. Second, the costs of updating a model should be taken into account, and they can be large. There is a neurological cost: everyone knows that “unlearning” something can be even harder than learning it in the first place. There are also energetic and economic costs in the external world, because a person has arranged things in their life (which can be seen as their extended phenotype) according to their models: e.g., they started a business believing in the prospects of a certain idea, or moved to a certain place believing that its climate would be good for their health. Undoing this and rearranging one’s life is costly.
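Very roughly, and only as a sketch in the notation above, these two additions might enter the objective as follows. The “novelty” term of section 7.5 is the expected information gain about model parameters (e.g., the likelihood mapping $A$); the term $C_{\text{update}}(\pi)$ is my loose shorthand for the neurological, energetic, and economic costs of revising the model and the life arrangements built on it, and is not part of the standard formulation:

$$G_{\text{ext}}(\pi) \;\approx\; G(\pi) \;-\; \underbrace{\mathbb{E}_{q(o, s \mid \pi)}\, D_{\mathrm{KL}}\!\left[\,q(A \mid s, o, \pi)\,\|\,q(A)\,\right]}_{\text{information gain about parameters (“novelty”)}} \;+\; \underbrace{C_{\text{update}}(\pi)}_{\text{cost of updating the model}}$$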
I believe more formal variations on this explanation can be found in “An information-theoretic perspective on the costs of cognition” (Zenon et al. 2019) and “Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources” (Lieder and Griffiths 2019).

There have also been explanations of epistemic irresponsibility in terms of cognitive biases or psychology, e.g., that it is a sort of ego-protecting reaction of people to a question like “Did you consider taking a course in rationality?”, which may be perceived as an attack or an insult because it may imply that the addressed person is irrational. However, I suspect that these explanations are less fundamental than those in terms of the energetic costs of updating one’s mental and embodied models.
Overcoming epistemic irresponsibility takes discipline

Thus, if this explanation is correct, we should expect agents to violate mind- and embodiment-agnostic normative logic: it is sometimes rational to keep acting on outdated models, even though they sometimes yield incorrect predictions, because the energetic cost of updating the model immediately may be higher than the cost entailed by its incorrect predictions, at least temporarily.
For example, if an astrology teacher comes to realise that astrology is a pseudoscience, they may decide that it’s rational to at least complete the astrology workshop they are currently conducting. If they cancelled the workshop immediately, they would need to return all the money already collected from the participants and bear various extra costs, from dealing with legal issues to the psychological discomfort of explaining their behaviour to the participants. Note that we may consider the astrology teacher’s decision (to keep the workshop going) rational but unethical. Including or excluding ethics in the scope of rationality is a matter of terminological convention.
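To make the cost comparison concrete, here is a toy sketch of the break-even point at which the accumulated cost of keeping a known-stale model outweighs the one-off cost of updating it. The numbers and the interpretation are invented for illustration, not taken from any real case or formal model:

```python
# Toy illustration: when does keeping a known-inaccurate model stop "paying off"?
# All quantities are hypothetical stand-ins for the neurological, economic,
# and reputational costs discussed above.

def break_even_periods(update_cost: float, error_cost_per_period: float) -> float:
    """Number of periods after which the accumulated cost of acting on the
    stale model exceeds the one-off cost of updating it right away."""
    return update_cost / error_cost_per_period

# Hypothetical astrology-teacher numbers: cancelling the workshop now costs 5000
# (refunds, legal issues, awkward explanations), while every further week of
# teaching a refuted theory costs 200 in bad decisions and lost credibility.
if __name__ == "__main__":
    weeks = break_even_periods(update_cost=5000, error_cost_per_period=200)
    print(f"Delaying the update stops being rational after ~{weeks:.0f} weeks")
```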
Knowing the root cause of epistemic irresponsibility and reflecting on the cases when one holds “temporarily irrational” beliefs is not enough to overcome this disease in oneself. One should also have the discipline to regularly revisit these problematic places in one’s models and decide when the energetic cost of holding an inaccurate belief starts to outweigh the cost of updating it. Such a discipline is reminiscent of the habit of remembering to ask the four questions (including “What of it?”) and the trained ability to answer them precisely and accurately, which Adler wrote about in How to Read a Book.
This “recipe” for overcoming epistemic irresponsibility doesn’t help to break the circular nature of the problem and thus solve the “rationality outreach challenge”, because the explanation itself requires some understanding of logic, ontology, epistemology, and cognitive science. Epistemic irresponsibility keeps people from learning the requisite bits of these knowledge domains in the first place.
Epistemic irresponsibility of companies, communities, societies, and civilisation
We can think of epistemic irresponsibility as applying not only at the personal level but also at higher system levels. Whether or not companies, communities, societies, and civilisation as a whole should be considered agents, it seems reasonable to at least consider them cognitive entities or “information processing units”, and thus subject to normative logic and rationality.
I would speculate that the societal and political hurdles to addressing climate change (such as climate change denial), which started 40-50 years ago and continue to this day, are an example of epistemic irresponsibility on the societal and civilisational levels.
How to crack the circular nature of epistemic irresponsibility?
I haven’t researched this topic and don’t have any ideas off the top of my head for how to motivate people to study normative logic and rationality, if they respond “I don’t need it, I’m good with where I am” to polite suggestions or advertisements.
I would be grateful if readers shared their experiences or the strategies used at CFAR and similar programs to deal with this challenge.