This is SO different from my experiences and observations about people’s decision-making that I’m not sure how to adjust my models.
> When someone encounters a (moral) decision in life, their choice often depends on their personal philosophy.
Disagree. Their choice is based on a very complex learning system, and their personal philosophy evolves to explain their choices. There probably _IS_ a feedback loop where repeated use of the justification encodes the decision weights more deeply, so the philosophy does appear to guide future choices. But causality definitely ain’t that simple.
> But if your dominant belief is nihilism, it will tell you that it doesn’t matter what you choose, because there is no right or wrong answer.
No. It will tell you that it doesn’t _objectively_ matter. There’s no right or wrong _outside of yourself_. That doesn’t mean there aren’t preferable and dis-preferable choices, or that you can’t have personal judgements about the likely results. Nihilism doesn’t deny consequentialism (nor support it; you can make choices based on your evaluation of yourself, rather than your evaluation of the consequences).
The uncertainty argument doesn’t work either: Pascal’s Wager fails because one can imagine a pull in every direction. There is an infinity of gods and moral theories, each of which has infinitesimal probability. Without evidence on which one(s) are more likely than others, they cancel out. If you don’t believe in any specific other moral stance, you do _not_ have to act like you do just because you might be wrong.
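The “pulls cancel” claim can be put in toy expected-value terms. A minimal sketch (my own illustration; the symmetric god-pairs, probabilities, and payoffs are assumptions for demonstration, not anything from the thread):

```python
# Toy model of the symmetric-Pascal's-Wager argument: for every hypothetical
# god that rewards an action there is a mirror god that punishes it equally.
# Under a symmetric prior, the action's net expected payoff is exactly zero,
# no matter how huge the payoffs or how many god-pairs you imagine.

def net_expected_payoff(n_pairs, p_each, payoff):
    """Expected payoff of the action, summed over n_pairs of mirror gods."""
    total = 0.0
    for _ in range(n_pairs):
        total += p_each * payoff     # god who rewards the action
        total -= p_each * payoff     # mirror god who punishes it
    return total

print(net_expected_payoff(n_pairs=1000, p_each=1e-6, payoff=1e9))  # → 0.0
```

With no evidence to break the symmetry, the enormous payoffs never favour any particular action, which is the sense in which the wagers “cancel out.”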
> Disagree. Their choice is based on a very complex learning system, and their personal philosophy evolves to justify their choices. There probably _IS_ a feedback loop where repeated use of the justification encodes the decision weights more deeply. But causality ain’t that simple.
This is probably true, but that doesn’t mean you can’t say that they _are_ caused by their personal philosophy. Yes, the learning system came first and caused the personal philosophy, but the big bang came even more first and caused the learning system. If I make a decision, the outcome depends on the big bang, my learning system, and my personal philosophy. Saying that it is caused by my personal philosophy does not invalidate saying that it is caused by my learning system, any more than saying that the last domino’s fall was caused by the second-to-last domino invalidates the argument that it was caused by the first.
> That doesn’t mean there aren’t preferable and dis-preferable choices, or that you can’t have personal judgements about the likely results.
This is active nihilism, aka existentialism, which is distinctly different from classic moral nihilism and thus requires different arguments (see my comments with jessicata in this thread).
> There is an infinity of gods and moral theories, each of which has infinitesimal probability. Without evidence on which one(s) are more likely than others, they cancel out.
This seems like an argument against Bayes’ theorem in general: there are infinite theories, so they cancel each other out. The answer, of course, is that some theories are better than others. Some theories are contradicted by the evidence, some rest on fallacies, some are absorbed by other theories. The same is true for moral theories, so in practice they won’t be evenly weighted. Plus, you are only human: you’ll never learn every theory, and must therefore try to figure out which theories work best within the small subset you can hold in your brain.
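The “some theories are better than others” point is just ordinary Bayesian updating. A minimal sketch (my own illustration; the prior and likelihood numbers are made up for demonstration):

```python
# Start with an even prior over four candidate theories (the "they cancel
# out" picture), then update on evidence that fits each theory differently.
# The posterior is no longer even, and the contradicted theory drops to zero.

def posterior(priors, likelihoods):
    """Bayes' rule over a finite set of mutually exclusive hypotheses."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

priors = [0.25, 0.25, 0.25, 0.25]      # uniform: no theory favoured a priori
likelihoods = [0.9, 0.5, 0.1, 0.0]     # how well each theory fits the evidence
print(posterior(priors, likelihoods))  # → [0.6, 0.333..., 0.066..., 0.0]
```

One observation that bears differently on the candidates is enough to break the symmetry the cancellation argument relies on.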
I’ll definitely argue against Bayes’ theorem being used to update on made-up scenarios that aren’t evidence.