Well, you say below you don’t believe that (in my words)
a FOOM’d self-modifying AI that cares about humanity’s CEV would likely do what you consider ‘right’.
Specifically, you say
The AI would not do so, because it would not be programmed with correct beliefs about morality, in a way that evidence and logic could not fix.
You also say, in a different comment, you nevertheless believe this process
would produce an AI that gives very good answers.
Do you think humans can do better when it comes to AI? Do you think we can do better in philosophy? If you answer yes to the latter, would this involve stating clearly how we physical humans define ‘ought’?
Did I misread you? I meant to say:
a FOOM’d self-modifying AI would not likely do what I consider ‘right’.
a FOOM’d self-modifying AI that cares about humanity’s CEV would likely do what I consider ‘right’.
I probably misread you.