Like: what happens if you read a book, or watch a documentary, or fall in love, or get some kind of indigestion – and then your heart is never exactly the same ever again, and not because of Reason, and then the only possible vector of non-trivial long-term value in this bleak and godless lightcone has been snuffed out?!
I’m finding it hard to parse this; perhaps someone can clarify for me. At first I assumed this was a problem inherent in the ‘naturalist’ view Scott Alexander gives:
“This is only a problem for ethical subjectivists like myself, who think that we’re doing something that has to do with what our conception of morality is. If you’re an ethical naturalist, by all means, just do the thing that’s actually ethical.”
E.g. Mr. Negative Utilitarian eats a taco and realises he ought to change his ethical views to something else, say classical utilitarianism. If this later version were foomed, presumably that would be a disaster from the perspective of the earlier version.
But Joe gives broader examples: small, possibly imperceptible changes resulting from random events. These might be fully unconscious and non-rational; the indigestion example sticks out to me.
It feels like a person whose actions followed from these changes would, if foomed, produce quite unpredictable, random futures, not necessarily tied to any particular ethical theory. This seems closer to Scott’s ethical subjectivist worries: no matter how your messy-spaghetti morality is extrapolated, it will be unsatisfying to you (and everyone?), regardless of whether you were in the ‘right state’ at foom-time. I think Joe covers something similar in ‘On the Limits of Idealised Values.’
Perhaps to summarise the difference: extrapolating the latter, ‘subjectivist’ position is like being inside a projectile whose path is fixed by its starting conditions (‘please don’t fire!’); the naturalist view, by contrast, is like choosing a line on Scott’s subway map (just make sure you’re on the right line!).
Is this a useful framing?
My personal view, in the terminology of ethical philosophy, is basically a moral anti-realist version of ethical naturalism. I think the constraints of things like evolutionary psychology and sociology give us a lot of guidelines in the context of a specific species (humans), society, and set of technological capabilities (which makes me a moral relativist), and that designing an ethical system that fits those guidelines well is exacting work. I just don’t think the guidelines are sufficient to uniquely define a single answer, so I don’t agree with the common moral-realist formulation of ethical naturalism. Perhaps I should start calling myself a moral semi-realist, and see how many philosophers I can confuse?