What I’d change about different philosophy fields
[epistemic status: speculative conversation-starter]
My guess at the memetic shifts that would do the most to improve these philosophy fields’ tendency to converge on truth:
metaphysics
1. Make reductive, ‘third-person’ models of the brain central to metaphysics discussion.
If you claim that humans can come to know X, then you should be able to sketch a third-person story of how a physical, deterministic, evolved organism could end up learning X.
You don’t have to go into exact neuroscientific detail, but it should be clear how a mechanistic cause-and-effect chain could result in a toy agent verifying the truth of X within a physical universe. (A minimal toy sketch of what I mean appears at the end of this section.)
2. Care less about human intuitions and concepts. Care more about the actual subject matter of metaphysics — ultimate, objective reality. E.g., only care about the concept ‘truth’ insofar as we have strong reason to think an alien would arrive at the exact same concept, because it’s carving nature closer to its joints.
Conduct more tests to see which concepts look more joint-carving than others.
(I think current analytic metaphysics is actually better at ‘not caring about human intuitions and concepts’ than most other philosophy fields. I just think this is still the field’s biggest area for future improvement, partly because it’s harder to do this right in metaphysics.)
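To make point 1 concrete, here is roughly the level of detail I have in mind: a minimal, made-up sketch (the world, the claim, and the agent are all invented for illustration) of a deterministic toy universe in which an agent comes to believe a claim X through nothing but a mechanistic chain of sensing and recording.

```python
# Toy universe: a 1-D line of cells with a wall at some position.
# Claim X: "there is a wall at position 3."
WALL_POSITION = 3          # the fact of the matter that makes X true or false
WORLD_SIZE = 6

class ToyAgent:
    """A deterministic agent: state + sensor + update rule, nothing else."""

    def __init__(self):
        self.position = 0
        self.believed_wall_at = None   # where the agent thinks the wall is

    def sense_wall_ahead(self):
        # A physical sensor reading: a deterministic function of world state.
        return self.position + 1 == WALL_POSITION

    def step(self):
        if self.sense_wall_ahead():
            # Belief formation as an ordinary causal consequence of the
            # sensor firing; no first-person "direct access" involved.
            self.believed_wall_at = self.position + 1
        else:
            self.position += 1

agent = ToyAgent()
for _ in range(WORLD_SIZE):
    agent.step()

believes_x = (agent.believed_wall_at == 3)
x_is_true = (WALL_POSITION == 3)
print(f"Agent believes X: {believes_x}; X is true: {x_is_true}")
```

The point isn’t that this toy is interesting on its own; it’s that for any metaphysical claim you say humans can know, you should be able to tell some story with this same third-person shape, even if the real story is enormously more complicated.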
decision theory
As in metaphysics, it should be much more standard to think of decision theories in third-person terms. Can we build toy models of a hypothetical alien or robot that actually implements this decision procedure?
In metaphysics, doing this helps us confirm that a claim is coherent and knowable. In decision theory, there’s an even larger benefit: a lot of issues that are central to the field (e.g., logical uncertainty and counterlogicals) are easy to miss if you stay in fuzzy-human-intuitions land.
Much more so than in metaphysics, ‘adopting a mechanistic, psychological perspective’ in decision theory should often involve actual software experiments with different proposed algorithms — not because decision theory is only concerned with algorithms (it’s fine for the field to care more about human decision-making than about AI decision-making), but because the algorithms are the gold standard for clarifying and testing claims.
(There have been lots of cases where decision theorists went awry because they under-specified a problem or procedure. E.g., the smoking lesion problem really needs a detailed unpacking of what step-by-step procedure the agent follows, and how ‘dispositions to smoke’ affect that procedure.)
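To illustrate what ‘software experiments’ might look like in practice, here’s a deliberately crude simulation of the smoking lesion setup (all of the probabilities, payoffs, and function names are invented for the sketch). It compares an evidential-style agent, which treats its own action as evidence about the lesion, with a causal-style agent, which treats the action as an intervention.

```python
import random

random.seed(0)

# Invented parameters for the toy model.
P_LESION = 0.5                  # prior probability of having the lesion
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_NO_LESION = 0.1
U_SMOKE = 10                    # enjoyment of smoking
U_CANCER = -100

def cancer(lesion):
    p = P_CANCER_GIVEN_LESION if lesion else P_CANCER_NO_LESION
    return random.random() < p

def utility(smokes, has_cancer):
    return (U_SMOKE if smokes else 0) + (U_CANCER if has_cancer else 0)

def observational_data(n=100_000):
    # A "population" in which (in this crude version) the lesion fully
    # determines the disposition to smoke, and people just act on it.
    data = []
    for _ in range(n):
        lesion = random.random() < P_LESION
        smokes = lesion
        data.append((smokes, utility(smokes, cancer(lesion))))
    return data

def edt_style_choice(data):
    # Evidential-style: compare average utility conditional on the action,
    # using the observational data as-is.
    def mean_u(action):
        us = [u for a, u in data if a == action]
        return sum(us) / len(us)
    return "smoke" if mean_u(True) > mean_u(False) else "abstain"

def cdt_style_choice(n=100_000):
    # Causal-style: intervene on the action; the lesion (and hence cancer)
    # is sampled independently of what the agent decides.
    def mean_u(action):
        total = 0.0
        for _ in range(n):
            lesion = random.random() < P_LESION
            total += utility(action, cancer(lesion))
        return total / n
    return "smoke" if mean_u(True) > mean_u(False) else "abstain"

data = observational_data()
print("Evidential-style agent:", edt_style_choice(data))   # expect: abstain
print("Causal-style agent:    ", cdt_style_choice())       # expect: smoke
```

Even this crude version forces the under-specification out into the open: here the ‘disposition to smoke’ bypasses the agent’s deliberation entirely (smokes = lesion), and whether that’s the right way to model the problem is exactly the kind of question that stays invisible in fuzzy-human-intuitions land.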
philosophy of mind (+ phenomenology)
1. Be very explicit that ‘we have immediate epistemic access to things we know for certain’ is a contentious, confusing hypothesis. Note the obvious difficulties with making this claim work in any physical reasoning system, and try to make sense of it in third-person models.
Investigate the claim thoroughly, and try to figure out how a hypothetical physical agent could update toward or away from it, if the agent were initially uncertain or mistaken about whether it possesses infallible direct epistemic access to things. (I sketch a toy version of this exercise at the end of this section.)
Be explicit about which other claims rely on the ‘we have infallible immediate epistemic access’ claim.
2. More generally, treat philosophy of mind and epistemology as substantially the same field.
The most important questions in these two fields overlap quite a bit, and it’s hard to make sense of philosophy of mind without spending half (or more) of your time on developing a background account of how we come to know things. Additionally, I’d expect the field of epistemology to be much healthier if it spent less time developing theory, and more time applying theories and reporting back about how they perform in practice.
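Here’s the sort of toy version of point 1 I have in mind (everything below is invented for illustration): a physical agent whose introspective channel may or may not be noisy, and which updates its credence in ‘my introspection is infallible’ by comparing introspective reports against an independent record of its own internal states.

```python
import random

random.seed(1)

ERROR_RATE = 0.05   # how often the introspective channel actually misfires

def internal_state():
    # Some fact about the agent's own machinery (here, just a random bit).
    return random.random() < 0.5

def introspect(state):
    # The agent's introspective report: a physical measurement of its own
    # state which, unknown to the agent, sometimes misfires.
    return (not state) if random.random() < ERROR_RATE else state

def external_check(state):
    # An independent record of the same internal fact (e.g., a debug log).
    return state

# Two hypotheses the agent weighs about itself:
#   H_inf: introspection is infallible (reports never diverge from the facts)
#   H_fal: introspection misfires at rate ERROR_RATE
credence_infallible = 0.5

for trial in range(200):
    state = internal_state()
    match = (introspect(state) == external_check(state))

    # Likelihood of this observation under each hypothesis.
    p_match_given_inf = 1.0 if match else 0.0
    p_match_given_fal = (1 - ERROR_RATE) if match else ERROR_RATE

    # Ordinary Bayesian update, implemented as plain arithmetic.
    numerator = p_match_given_inf * credence_infallible
    denominator = numerator + p_match_given_fal * (1 - credence_infallible)
    credence_infallible = numerator / denominator

print("Credence that introspection is infallible:", round(credence_infallible, 4))
```

A single observed mismatch drives the credence to zero, while an unbroken run of matches pushes it up only gradually; either way, the ‘infallible access’ claim gets treated as one more hypothesis a mechanistic agent can gather evidence about, which is the framing I’m suggesting the field adopt.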
philosophy of religion
1. Shift the model from ‘scholasticism’ to ‘missionary work’. The key thing isn’t to converge with people who already 99% agree with you. Almost all effort should instead go into debating people with wildly different religious views (e.g., Christianity vs. Buddhism) and debating with the nonreligious. Optimize for departments’ intellectual diversity and interdisciplinary bridge-building.
Divide philosophy of religion into ‘universal consensus-seeking’ (which is about debating the most important foundational assumptions of various religions with people of other faiths, with a large focus on adversarial collaborations and 101-level arguments) and ‘non-universal-consensus studies’ (which includes everything else, and is mostly marginalized and not given focus in the field).
2. Discourage talking about ‘religions’ or ‘faiths’; instead, talk about specific claims/hypotheses. Rename the field ‘philosophy of religious claims’, if that helps.
When we say ‘religion’, (a) it creates the false impression that claims must be a package deal, so we can’t incrementally update toward one specific claim without swallowing the entire package; and (b) it encourages people to think of claims like theism in community-ish or institution-ish terms, rather than in hypothesis-ish terms.
Christianity is not a default it’s fine to assume; it is a controversial hypothesis which most religious and secular authorities in the world reject. Christian philosophers need to move fast, as if their hair’s on fire. The rival camps need to fight it out now and converge on which hypothesis is right, exactly as if there were a massive scientific controversy about which of twenty competing models of photosynthesis were true.
Consider popularizing this thought experiment:
“Imagine that we’d all suddenly been plopped on Earth with no memories, and had all these holy texts to evaluate. We only have three months to figure out which, if any, is correct. What would you spend the next three months doing?”
This creates some urgency, and also discourages complacency of the ‘well, this has been debated for millennia, surely little old me can’t possibly resolve all of this overnight’ variety.
Eternal souls are at stake! People are dying every day! Until very recently, religious scholarship was almost uniformly shit! Assuming you can’t possibly crack this open is lunacy.
ethics + value theory
1. Accept as a foundational conclusion of the field: ‘human values seem incredibly complicated and messy; they’re a giant evolved stew of competing preferences, attitudes, and feelings, not the kind of thing that can be captured in any short simple ruleset (though different rulesets can certainly perform better or worse as simplified idealizations).’ (A toy illustration of the ‘better or worse as idealizations’ point appears at the end of this post.)
2. Stop thinking of the project of ethics as ‘figure out which simple theory is True’.
Start instead thinking of ethics as a project of trying to piece together psychological models of this insanely complicated and messy thing, ‘human morality’.
Binding exceptionless commitments matter to understanding this complicated thing; folk concepts like courage and honesty and generosity matter; taboo tradeoffs and difficult attempts to quantify, aggregate, and weigh relative well-being matter.
Stop picking a ‘side’ and then losing all interest in the parts of human morality that aren’t associated with your ‘side’: these are all just parts of the stew, and we need to work hard to understand them and reconcile them just right, not sort ourselves into Team Virtue vs. Team Utility vs. Team Duty.
(At least, stop picking a side at that level of granularity! Biologists have long-standing controversies, but they don’t look like ‘Which of these three kinds of animal exists: birds, amphibians, or mammals?’)
3. Once again, apply the reductive third-person lens to everything. ‘If it’s true that X is moral, how could a mechanistic robot learn that truth? What would “X is moral” have to mean in order for a cause-and-effect process to result in the robot discovering that this claim is true?’
4. Care less about the distinction between ‘moral values’ and other human values. There are certainly some distinguishing features, but these mostly aren’t incredibly important or deep or joint-carving. In practice, it works better to freely bring in insights from the study of beauty, humor, self-interest, etc. rather than lopping off one slightly-arbitrary chunk of a larger natural phenomenon.
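Finally, a toy illustration of the ‘better or worse as simplified idealizations’ idea from point 1 (the ‘value function’, the rulesets, and all the numbers are made up; this is about the shape of the exercise, not a model of actual human values): treat human morality as a messy many-termed function, and score candidate simple rulesets by how often their verdicts diverge from it.

```python
import random

random.seed(2)

N_FEATURES = 20

# A deliberately messy stand-in for the "evolved stew": many weights plus
# many pairwise interaction terms, with no short description.
WEIGHTS = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
INTERACTIONS = [
    (random.randrange(N_FEATURES), random.randrange(N_FEATURES),
     random.uniform(-0.5, 0.5))
    for _ in range(30)
]

def messy_value(situation):
    # situation: N_FEATURES numbers describing an act in context.
    score = sum(w * x for w, x in zip(WEIGHTS, situation))
    score += sum(c * situation[i] * situation[j] for i, j, c in INTERACTIONS)
    return score

# Two candidate "simple rulesets", each attending to only a few features.
def ruleset_a(situation):
    return 2.0 * situation[0] - situation[1]

def ruleset_b(situation):
    return sum(situation[:5])

def disagreement_rate(ruleset, n_pairs=5_000):
    # How often does the ruleset rank a random pair of situations
    # differently from the messy value function?
    disagreements = 0
    for _ in range(n_pairs):
        s1 = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
        s2 = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
        if (messy_value(s1) > messy_value(s2)) != (ruleset(s1) > ruleset(s2)):
            disagreements += 1
    return disagreements / n_pairs

print("Ruleset A disagreement rate:", disagreement_rate(ruleset_a))
print("Ruleset B disagreement rate:", disagreement_rate(ruleset_b))
```

On this framing the interesting questions are which idealizations fail where and why, not which of them is the one true theory.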