Philosophy has long held out hope that eventually, somehow, it would find a set of elegant axioms from which the rest would regrow, like what happened in math. Several branches of philosophy think they did collapse it to a set of elegant axioms (though upon inspection, they actually let the complexity leak back in elsewhere). I think there's a fear, not entirely unjustified, that if you let probabilistic reasoning into too many places, then this closes off the possibility of reaching an axiomatization, or of ever reaching firm conclusions about interesting questions. Today, it's been long enough to know that the quest for axiomatization was doomed from the start (or at least, the quest for an axiomatization that wasn't itself a probabilistic thing). So allowing probabilistic reasoning shouldn't seem like a big scary concession anymore; but on the other hand, it's still difficult, and most philosophers aren't dual-classed into maths.
Using personal preference or personal intuitions as priors instead of some objective measure along the lines of Solomonoff Induction
Unfortunately, Solomonoff Induction falls off the table as soon as the questions get interesting. As a next-best thing, intuition is not all that bad. I'd criticize a lot of philosophy not for grounding ideas in intuition, but for treating intuition as a black box rather than as something which can be studied, debugged, and improved. Most LW-style philosophy does bottom out at intuition somewhere; it just does a better-than-usual job of patching over intuition's weaknesses.
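To make the "objective measure" being gestured at concrete: real Solomonoff Induction sums over all programs on a universal Turing machine and is uncomputable, but the core idea of a simplicity-weighted prior can be sketched in a few lines. This is a toy illustration only; the hypothesis names and bit-string "descriptions" below are made up for the example.

```python
def simplicity_prior(hypotheses):
    """Weight each hypothesis by 2^-(description length), then normalize.

    This mimics the Solomonoff-style idea that a hypothesis with a
    k-bit shortest description gets prior mass proportional to 2^-k.
    """
    weights = {h: 2.0 ** -len(desc) for h, desc in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical hypotheses with made-up minimal descriptions:
hypotheses = {
    "simple": "10",         # 2-bit description
    "complex": "10110100",  # 8-bit description
}
prior = simplicity_prior(hypotheses)
# The shorter description gets exponentially more prior mass.
```

The gap between this sketch and the real thing is exactly the problem the text points at: for interesting philosophical questions, nobody can exhibit the programs or measure their lengths, so intuition ends up standing in for the description-length term.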
Moral realism
When you're getting started on learning game theory, there is a point where it looks like it might be building towards an elegant theory of morality, something that would reproduce our moral intuitions, be a great Schelling point, and ground morality really well. Then it runs into roadblocks and doesn't get there, so we're stuck with a hodgepodge metaethics where morality depends on an aggregation of many people's preferences, but there are different ways to aggregate one person's preferences and different ways to aggregate groups' preferences, and some preferences don't count, and it's all very unsatisfying. But if you haven't hit that wall yet, or you're very optimistic, or you're limiting yourself to sufficiently simple trolley problems, then moral realism seems like a thing.
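The claim that "there are different ways to aggregate" isn't just hand-wraving: the same ranked preferences can produce different winners under different aggregation rules. A minimal sketch, with ballots invented purely for illustration:

```python
from collections import Counter

# Seven hypothetical ballots, each ranking options best-to-worst.
ballots = (
    [["A", "B", "C"]] * 3 +  # 3 voters: A > B > C
    [["B", "C", "A"]] * 2 +  # 2 voters: B > C > A
    [["C", "B", "A"]] * 2    # 2 voters: C > B > A
)

def plurality(ballots):
    """Winner is whoever gets the most first-place votes."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Each rank awards points: n-1 for first place down to 0 for last."""
    scores = Counter()
    for b in ballots:
        n = len(b)
        for rank, option in enumerate(b):
            scores[option] += n - 1 - rank
    return scores.most_common(1)[0][0]

# Plurality picks A (3 first-place votes), but Borda picks B,
# because B is nobody's last choice. Same preferences, different "moral" answer.
```

Arrow's impossibility theorem generalizes this: no aggregation rule satisfies all the properties you'd naively want, which is one of the roadblocks the paragraph above alludes to.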
Mathematical Platonism
This is a trap door into silly arguments about subtleties of the word “exist” which are cleanly and completely separated from all predictions. But if you want to engage with ideas like a mathematical multiverse, you do end up needing to think about subtleties of the word “exist”, and math ends up looking more fundamental than physics.
Libertarian free will (I’m looking for arguments other than those from religion)
I'm not sure where libertarian free will stands in relation to the other ideas about free will, but I find thinking about free will gets a lot easier if you first acknowledge that our intuitions are guided by the idea of ordinary freedom (i.e., whether there's a human around with a whip), and then go a step further and just think about ordinary freedom instead.
The view that there actually exist abstract “tables” and “chairs” and not just particles arranged into those forms
These ideas come back in slightly different forms when you start considering mathematical multiverses and low-fidelity simulations of the universe. For example, if you accept the simulation argument, and further suppose that the simulation would not be full-fidelity but would be designed to make this fact hard to notice, then you get the conclusion that certain abstract objects exist and their constituent particles don’t.
The existence of non-physical minds (I’m looking for arguments other than the argument from the Hard Problem of Consciousness)
The idea of minds as cognitive algorithms leads to something sort of like this; in that framing, minds are physical objects with a dual existence in the platonic math-realm, one that diverges if physics causes a deviation from the algorithm.