“Should” is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics211412312312.
Of course one should not confuse this with “would”. There’s no reason to expect an arbitrary mind to be compelled by ethics.
No, it’s much wider than that. There are rational and instrumental “should”s.
ETA:
There’s no reason to expect an arbitrary mind to be compelled by ethics.
Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments: that ethical-should can be built out of rational-should.
There’s no reason to expect an arbitrary mind to be compelled by ethics.
Just as one should not expect an arbitrary mind with its own notions of “right” or “wrong” to yield to any human’s proselytizing about objectively correct ethics (“murder is bad”), or to adopt whatever “correct” solution that human tries to provide.
The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours. There is no axiom-free metaethical framework that would provide the “should” in “you should choose ethics211412312312”; that was my point. Calling some church’s (or other group’s) ethical doctrine objectively correct for all minds doesn’t make a whit of difference, and doesn’t go beyond “my ethics are right! no, mine are!”
Just as one should not expect an arbitrary mind with its own notions of “right” or “wrong” to yield to any human’s proselytizing about objectively correct ethics (“murder is bad”), or to adopt whatever “correct” solution that human tries to provide.
But humans can proselytise each other, despite their different notions of right and wrong. You seem to be assuming that morally-right and -wrong are fundamentals. But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning and previously unknown facts. As happens when one person morally exhorts another. I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.
But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning (...) I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality
Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayes), we wouldn’t need to worry, since we could persuade it to act morally correctly?
I’m not sure that is necessarily true, or even highly likely. But it is a possibility that is extensively discussed in non-LW philosophy, yet standardly ignored or bypassed on LW for some reason. As per my original comment: is moral relativism really just obviously true?
Depends on how you define “moral relativism”. Kawoomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.
I don’t think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.
The ethics as defined by China, or an arbitrary mind, have as much claim to be correct as ours.
If someone defines ethics differently, then WHAT are the common characteristics that make you call them both “ethics”? You surely don’t mean that they just happened to use the same sound or the same letters, and that they might mean basketball instead? So there must already exist some common elements you are thinking of that make both versions logically categorizable as “ethics”.
What are those common elements?
What would it mean for an alien to define, e.g., “tetration” differently than we do? Either they define it in the same way, or they haven’t defined it at all. To define it differently means that they’re not describing what we mean by tetration at all.
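To make the tetration example concrete: the operation has a single standard recursive definition, a^^1 = a and a^^(n+1) = a ** (a^^n), so any “different definition” computes something other than tetration. Here is a minimal Python sketch of that definition (the function name and the input check are illustrative, not from the thread):

```python
def tetration(a: int, n: int) -> int:
    """Iterated exponentiation: a^^n is a tower of n copies of a.

    Standard recursive definition: a^^1 = a, a^^(n+1) = a ** (a^^n).
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    result = a
    for _ in range(n - 1):
        result = a ** result  # each step adds one level: a ** (previous tower)
    return result

# 2^^3 = 2 ** (2 ** 2) = 16, and 2^^4 = 2 ** (2^^3) = 2 ** 16 = 65536
assert tetration(2, 3) == 16
assert tetration(2, 4) == 65536
```

An alien “tetration” that failed these equalities would not be a different definition of tetration; it would be a definition of some other operation, which is the point being made above.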