Other: This is a not-very-interesting definitional question as to exactly which kinds of mental states should be counted as “sincerely making a moral judgement”.
General defense of the above type of reply: Voting “Other” on questions that seem to you confused or seem to turn on irrelevant matters of small definitions, rather than making up a definition and running with it, etcetera, is probably a good barometer of LW-vs.-philosophy opinion.
The subject matter of humanity::morality is a mathematical object which Clippy could calculate, if it ever had any reason to do so (which it wouldn’t, but it could), without being at all motivated to do anything about it. However, if “morality” is being given an agent-relative definition, then no: something you’re not motivated to do anything about, even in the slightest, doesn’t seem like it should be called Alejandro::morality.
Voting “Other” on questions that seem to you confused or seem to turn on irrelevant matters of small definitions, rather than making up a definition and running with it, etcetera, is probably a good barometer of LW-vs.-philosophy opinion.
I doubt it. In my experience, if you allow a “Please specify” answer, philosophers will pick that for practically any distinction.
Voting “Other” on questions that seem to you confused or seem to turn on irrelevant matters of small definitions, rather than making up a definition and running with it, etcetera, is probably a good barometer of LW-vs.-philosophy opinion.
It is, at any rate, if you have some evidence that philosophers (professional? historical? what?) make up definitions and run with them when they don’t understand a question.
Otherwise, it’s probably a very, very bad barometer.
I agree with this, even though I voted “externalism”.