Argumentation, hypotheses
You can apply the same idea (about the “common pool”) to hypotheses and argumentation:
You can describe a hypothesis in terms of any other hypothesis. You can also simplify it along the way (let’s call it “regularization”). Recursion and circularity are possible in reasoning.
Truth isn’t attached to a specific hypothesis. Instead there’s a common “pool of truth”, and different hypotheses take different parts of the whole truth. The question isn’t “Is the hypothesis true?” but “How true is the hypothesis compared to others?” And if the hypotheses are regularized, none of them can be too wrong.
Alternatively: the “implications” of a specific hypothesis aren’t attached to it. Instead there’s a common “pool of implications”, and different hypotheses take different parts of it.
Conservation of implications: if the implications of a hypothesis are simple enough, they remain true/likely even if the hypothesis is wrong. You can shift the implications to a different hypothesis, but you’re very unlikely to completely dissolve them. (A toy probability sketch of this appears after this list.)
In usual rationality (where hypotheses don’t share truth) you try to get the most accurate opinion about every single thing in the world. You’re “greedy”. But in this approach (where hypotheses do share truth) it doesn’t matter how wrong you are about everything else, as long as you’re right about “the most important thing”. And once you’re proven right about “the most important thing”, you know everything. A billion wrongs can make a right, because any wrong opinion is correlated with the ultimate true opinion, the pool of the entire truth.
You can’t prove a hypothesis to be “too bad”, because doing so would harm all the other hypotheses: all hypotheses are correlated, created by each other. When you keep proving something wrong, the harm to the other hypotheses grows exponentially.
Motivated reasoning is valid: the truth of a hypothesis depends on context, on the range of interests you choose. Your choice affects the truth.
Any theory is the best (or even “the only one possible”) on its level of reality. For example, on a certain level of reality, modern physics doesn’t explain the weather any better than gods of weather do.
In a way this means that specific hypotheses/beliefs just don’t exist; they’re melted into a single landscape. It may sound insane (“everything is true at the same time and never proven wrong”, and relative too!). But human language, emotions, learning, pattern-matching and research programs often work like this. It’s just a consequence of ideas (1) not being atomic statements about the world and (2) not being focused on causal reasoning and causal modeling. And it’s rational not to start with atomic predictions when you don’t have enough evidence to locate atomic hypotheses.
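As a toy illustration of “conservation of implications” (my own sketch with made-up numbers, not something from the original argument): if several rival hypotheses all predict the same simple implication, refuting one of them barely moves the probability of the implication, because the surviving hypotheses pick up its share of the “pool”.

```python
# Toy model: three rival hypotheses all predict the same simple implication.
# Priors and likelihoods are invented numbers, chosen only for illustration.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
p_implication_given = {"H1": 0.9, "H2": 0.8, "H3": 0.7}

def p_implication(priors):
    """Probability of the implication, averaging over the (renormalized) hypotheses."""
    total = sum(priors.values())
    return sum((p / total) * p_implication_given[h] for h, p in priors.items())

print(round(p_implication(priors), 2))  # 0.83 with all three hypotheses alive

# Suppose H1 is decisively refuted: drop it and renormalize the rest.
survivors = {h: p for h, p in priors.items() if h != "H1"}
print(round(p_implication(survivors), 2))  # 0.76 -- the implication mostly survives
```

This is just the law of total probability; the “pool” framing says that the implication’s probability lives in the whole set of hypotheses rather than in any single one of them.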
Causal rationality, Descriptive rationality
You can split rationality into two components. The second component isn’t explored; my idea describes it:
Causal rationality. Focused on atomic, independent hypotheses about the world; on causal explanations and causal models. Answers “WHY does this happen?”. Goal: to describe a specific reality in terms of outcomes.
Descriptive rationality. Focused on fuzzy, correlated hypotheses about the world; on patterns and analogies. Answers “HOW does this happen?”. Goal: to describe all possible (and impossible) realities in terms of each other.
Causal and Descriptive rationality work according to different rules. Causal uses Bayesian updating. Descriptive uses “the common pool of properties + Bayesian updating”, maybe.
“Map is not the territory” is true for Causal rationality. It’s wrong for Descriptive rationality: every map is a layer of reality.
“Uncertainty and confusion are part of the map, not the territory.” True for Causal rationality. Wrong for Descriptive rationality: the possibility of uncertainty/confusion is a property of reality.
“Details make something less likely, not more” (the conjunction fallacy). True for Causal rationality. Wrong for Descriptive rationality: details are not true or false by themselves; they “host” kernels of truth, and more details may accumulate more truth. (A tiny numeric illustration of the conjunction rule appears at the end of this section.)
For Causal rationality, math is the ideal of specificity. For Descriptive rationality, math has nothing to do with specificity: an idea may have different specificity on different layers of reality.
In Causal rationality, hypotheses should constrain outcomes and shouldn’t explain every possible outcome. In Descriptive rationality… constraining depends on context.
Causal rationality often conflicts with the way people think. Descriptive rationality tries to minimize that conflict; I believe it’s closer to how humans actually think.
Causal rationality assumes that describing reality is trivial, and that description should be abandoned as soon as possible: only (new) predictions matter.
In Descriptive rationality, a hypothesis is somewhat equivalent to the explained phenomenon. You can’t destroy a hypothesis too much without destroying your knowledge about the phenomenon itself. It’s like hitting a nail so hard that you destroy the Earth.
Example: Vitalism. It was proven wrong in causal terms. But in descriptive terms it’s almost entirely true. Living matter does behave very differently from non-living matter. Living matter does have a “force” that non-living matter doesn’t have (it’s just not a fundamental force). Many truths of vitalism were simply split between different branches of science: living matter is made out of special components (biology/microbiology), including nanomachines/computers! (DNA, genetics); it can have cognition (psychology/neuroscience), can be a computer (computer science), can evolve (evolutionary biology), and can do something like “decreasing entropy” (an idea by Erwin Schrödinger; see entropy and life). On the other hand, maybe it’s bad that vitalism got split into so many different pieces. Maybe it’s bad that vitalism failed to predict reductionism. However, behaviorism did get overshadowed by cognitive science (living matter did turn out to be more special than it might have been). Our judgement of vitalism depends on our choices, but at worst vitalism is just the second-best idea. Or the third-best idea compared to some other version of itself… The absolute death of vitalism is astronomically unlikely, and it would take most of reductionism and causality down with it, along with most of our knowledge about the world. Vitalism partially just restates our knowledge (“living matter is different from non-living matter”), so it’s strange to simply call it wrong. It’s easier to make vitalism better than to disprove it.
Perhaps you could call the old version of vitalism “too specific given the information about the world”: why should a “life-like force” be beyond the laws of physics? But even this would have been debatable at the time. By the way, the old sentiment “science is too weak to explain living things” can be considered partially confirmed: 19th-century science lacked a bunch of conceptual breakthroughs. And “only organisms can make the components of living things” is partially just a fact of reality: skin and meat don’t randomly appear in nature. This fact was partially weakened, but also partially strengthened, over time; the discovery of DNA strengthened it in some ways. It’s easy to overlook all of those things.
In Descriptive rationality, an idea is like a river. You can split it, but you can’t stop it. And it doesn’t make sense to fight the river with your fists: just let it flow around you. However, if you did manage to split the river into independent atoms, you would get Causal rationality.
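To make the conjunction point above concrete (my own minimal example, with made-up numbers): in the causal/probabilistic reading, adding a detail to a claim can only keep its probability the same or lower it, because the detailed claim is a conjunction.

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A), for any probabilities.
# Invented numbers in the style of the classic "Linda" example.
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.3  # P(Linda is a feminist | she is a bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller
print(p_teller_and_feminist <= p_teller)  # True -- the added detail can't raise the probability
```

Descriptive rationality, as sketched above, reads the extra detail differently: not as another conjunct to multiply in, but as another place where part of the “pool of truth” can sit.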
2 types of rationality should be connected
I think causal rationality has some problems, and those problems show that it has a missing component:
Rationality is criticized for dealing only with atomic hypotheses about the world, and for not saying how to generate new hypotheses and obtain new knowledge. Example: the critique by nostalgebraist; see “8. The problem of new ideas”.
You can’t easily use causal rationality to be critical of causal rationality itself. In theory you should be able to, but in practice people often don’t. And causal rationality doesn’t model argumentation, even for the most important topics such as AI safety. So we end up arguing the way anyone else argues.
The Doomsday argument, Pascal’s mugging. Probability starts to behave weirdly when we add large numbers of (irrelevant) things to our world. (A toy calculation appears after this list.)
The problem of modesty. Should you assume that you’re just an average person?
Weird addition in ethics: the Repugnant Conclusion, “Torture vs. Dust Specks”.
Causal rationality doesn’t give/justify an ethical theory, and doesn’t say how to find one if you want to find it.
Causal rationality doesn’t give/justify a decision theory. There’s a problem with logical uncertainty (uncertainty about implications of beliefs).
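As a toy calculation for the Pascal’s mugging item above (all numbers are invented; the point is only that huge payoffs can swamp tiny probabilities): a naive expected-value calculation can tell you to accept the mugger’s offer no matter how implausible it is.

```python
# Naive expected value of paying a Pascal's mugger. Every number here is made up.
p_claim_true = 1e-20      # prior that the mugger really controls astronomical utilities
promised_utility = 1e30   # the astronomically large promised payoff
cost_of_paying = 10.0     # what you lose by handing over your wallet

expected_value_of_paying = p_claim_true * promised_utility - cost_of_paying
print(expected_value_of_paying)  # ~1e10: "pay up", says the naive calculation
```

Since the mugger can always name a bigger payoff faster than you can shrink your prior, this is one sense in which probability “starts to behave weird” once arbitrarily large quantities are allowed into the model.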
I’m not saying that all of this is impossible to solve with Causal rationality. I’m saying that Causal rationality doesn’t give you any motivation to solve it. When you’re trying to solve something without motivation, you kind of don’t know what you’re doing. It’s like trying to write a program in bytecode without having the high-level concepts even in your mind. Or like trying to ride an alien device in the dark: you don’t know what you’re doing and you don’t know where you’re doing it.
What and where are we doing when we’re trying to fix rationality?