Survey complete. Had to answer “there’s no such thing as morality” because I can’t imagine a configuration of quarks that would make any of the other choices true. What would it even mean, at a low level, for one normative theory to be “correct”?
That’s a fully-general argument against the existence of anything that isn’t a quark.
A quark, or a configuration of quarks, or definable in terms of configurations of quarks. Presumably occlude really meant (or perhaps would have meant, given more knowledge of physics) “elementary particles”, since not all elementary particles are quarks; or something more complicated involving quantum fields. With such fixes in place, it doesn’t seem to me like a fully-general argument against (for instance) computers or people or minds or symphonies, but it still has some force against moral realism.
That particular turn of phrase (configuration of quarks) was borrowed from Eliezer’s description of reductionism in Luke’s “Pale Blue Dot” podcast #88. It left an impression.
Calling them correct/incorrect is just a convention for saying you agree with them.
Yeah, but to be flip, what does “agree” mean? The position you find most intellectually coherent? The one you use to regulate your own behavior? The one you use to form social judgments of behavior? I put down “consequentialism,” but I could have put down “virtue ethics” or “there’s no such thing as morality” if I were using a different frame.
I favored “no such thing as morality” in the sense that I don’t think I can tell somebody else what to do on the basis of it being wrong or right.
But since I am willing to kill people who act in a way sufficiently contrary to my own preferences, and my own preferences are consequentialist, I chose consequentialism on the survey.
Actually, you can tell someone what to do on the basis of it being “wrong” or “right”; the only requirement is that their morality/preferences are similar to your own. If you can convince them that their actions are contrary to their own moral preferences, you may be able to convince them to do what you both consider “right”.
But if you meant that it is impossible to determine what someone should do by means of a universal set of moral rules, then yeah, clearly not. But the absence of a universal morality does not imply an absence of all morality.
That’s not the question. The question is which ideology you most identify with. So what you answered is “The philosophy I most identify with is that there is no such thing as morality.” This seems like a nonsensical position, since it would imply that concepts don’t exist simply because they aren’t physical. Morality is a very real part of the universe: it can be observed in the functioning of the human brain.
Admittedly, I did find the question somewhat odd, since what it asks is what I most identify with, and it’s a very bad habit to make ideologies part of your identity. I interpreted the question as “which form of morality do you approve of the most”, which for me was consequentialism, since out of those three I believe it to be the most effective tool for improving human welfare.
You also judged the alternatives on consequentialist grounds. I interpreted the question as “which form of morality do you use to decide what to do (or wish you used to decide what to do)?”
Good catch! I should have added “and improving human welfare is more important to me than any other consideration”.
Anyway, I think morality is more than just “how do you decide what to do”, it’s about what you feel people in general should do. And in that case I would prefer everyone to use consequentialism, even though that isn’t strictly how I make my own decisions.
I try, of late, not to create sections of map that don’t correspond to any territory. What if we taboo the word morality? Is there brain function that corresponds to morality and that is distinct from preferences, beliefs, emotions, and goals? It seems that positing the existence of something called morality creates something additional and unnecessary.
It does correspond to territory: that specific functioning of the human brain. Human preferences are not part of the map, they’re part of the territory. Admittedly, you can describe the same thing using different words, but that’s true for everything. Morality is a subset of preferences in that it only covers those preferences that describe how intelligent agents should act. It is still a useful term for that reason.
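To make the “subset of preferences” point concrete, here is a minimal sketch in Python of what tabooing the word might look like. Everything in it (the Preference record, the tagging field, and the sample entries) is a hypothetical illustration of the claim, not anything proposed in this thread:

```python
# Toy sketch: taboo "morality" by treating it as the subset of an agent's
# preferences that are about how agents should act. The Preference type,
# its field names, and the sample entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Preference:
    description: str
    about_how_agents_should_act: bool  # the (assumed) distinguishing property

preferences = [
    Preference("prefer chocolate to vanilla", False),
    Preference("agents should keep their promises", True),
    Preference("prefer quiet mornings", False),
    Preference("agents should not inflict gratuitous suffering", True),
]

# "Moral preferences" is then just a filter over ordinary preferences,
# with no additional ontology posited:
moral_preferences = [p for p in preferences if p.about_how_agents_should_act]

for p in moral_preferences:
    print(p.description)
```

On this picture, the word “morality” names a useful filter rather than an extra kind of thing, which is consistent with treating it as a convenient label for part of the territory.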
I have found, however, that talk of morality leads to enormous amounts of confusion (fake agreements, fake disagreements, etc.), and so I agree that tabooing the word and substituting the intended meaning has a great deal of merit.
I agree with your argument in the sense that you meant it, though I interpreted the question differently.