In what sense could philosophers have “better” philosophical intuition? The only way I can think of for theirs to be “better” is if they’ve seen a larger part of the landscape of philosophical questions, and are therefore better equipped to build consistent philosophical models (example).
The problem with this is that the kind of people likely to become philosophers have systematically different intuitions to begin with.
I’m not sure that randomness from evolution and enculturation should be treated differently from random factors in the intuition-squaring process. It’s randomness all the way through either way, right?
I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn’t mean you shouldn’t consider your goals superior to those of the pebble sorters or babyeaters, only that if they ran the same process you did to arrive at this conclusion, they would likely get a different result. This probably wouldn’t happen with the process used to establish, say, the value of the gravitational constant or the charge of an electron.
This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than like the speed of light. It isn’t a fact about the universe; it’s a fact about particular agents or pseudo-agents.
Of course the pebble sorters, babyeaters, or paperclip-maximizing AIs can figure out that we have an aversion to snakes and crave salty and sugary food. But learning this would not lead them to share our normative judgements, except for instrumental purposes in some very constrained scenarios where those judgements happen to be optimal for a wide range of goals.
I fear many readers will confuse this argument for the moral anti-realist argument. The moral anti-realist argument doesn’t mean you shouldn’t consider your goals superior to those of the pebble sorters or babyeaters, only that if they ran the same process you did to arrive at this conclusion, they would likely get a different result.
What is this “moral anti-realist argument”? Every argument against moral realism I’ve seen boils down to: “there are no universally compelling moral arguments, therefore morality is not objective.” Well, as the linked article points out, there are no universally compelling physical arguments either.
This suggests that morality is more like your particular taste in yummy foods and aversion to snakes than like the speed of light.
The difference between morality and taste in food is that I’m ok with you believing that chocolate is tasty even if I don’t, but I’m not ok with you believing that it’s moral to eat babies.
Interesting point, but where’s the problem?
Yep, I kind of wandered around.
I think I agree with the rest of your comment.
Reading philosophy as an aid in moral judgement.