The entire question is whether the same faculties that allow it to reason about intellectual tasks will also generalize to figuring out which norms are the right ones. If so, and if we accept that recognizing something as irrational can be motivating (which I argue for), then it will also act on the right norms.
I can see you’re taking a realist stance here. Let me see if I can take a different route that makes sense in terms of realism.
Let’s suppose there are moral facts and some norms are true while others are false. An intelligent AI can then determine which norms are true. Great!
Now we still have a problem, though: our AI hasn’t been programmed to follow true norms, only to discover them. Someone forgot to program that bit in. So now it knows what’s true, but it’s still going around doing bad things because no one made it care about following true norms.
This is the same situation as human psychopaths in a realist world: they may know which norms are true; they just don't care and choose not to follow them. If you want to argue that an AI will necessarily follow the true norms once it discovers them, you have to explain why, by the same token, a human psychopath would start following the true norms upon learning them, even though, almost by definition, the point is that they can know the true norms and ignore them anyway.
You need to somehow bind the AI to care about and follow the true norms. I don't see you making a case for this other than waving your hands and saying it will do it because they're true, but we have an existence proof that you can know the true norms and simply ignore them if you want.
IOW, moral norms being intrinsically motivating is a premise beyond them being objectively true.
Agreed, though I argue for it in the linked post.