When people say that a morality is “objectively correct”, they generally don’t mean to imply that it is supported by “universally compelling arguments”. What they do mean might be a little hard to parse, and I’m not a moral realist and don’t claim to be able to pass their ITT, but in any case it seems to me that the burden of proof is on the one who claims that their position does imply heterogonality.
I think they do mean that quite a lot of the time, for non-strawman versions of “universally compelling”. I suppose what you are getting at is objectively correct morality existing, in some sense, but being undiscoverable, or cognitively inaccessible.
Also, it’s not clear that AI would reject the proposition that if there are objectively correct values, then it should update its value system to them, since humans don’t always reject it.
Because?
Sure, probably some of them mean that, but you can’t assume that they all do.
But then that would be covered by “internalism”.
That wouldn’t be covered by “internalism”. Whether any possible agent who holds a moral judgment is motivated to act on that judgment is orthogonal (no pun intended) to whether moral judgments are undiscoverable or cognitively inaccessible.
Arguably, AIs don’t have Omohundroan incentives to discover morality.
Whether it would believe it, and whether it would discover it are rather separate questions.
It can’t believe it if it doesn’t discover it.
It is possible to be told something.
Yes, this is my problem with this theory, but there are much stupider opinions held by some percentage of philosophers.
If only everyone could agree on what they are.