You can design a utility function that tries to minimize collateral damage, but you can’t make one that turns out rosy for everyone.
Yes, but this current world without an AI isn’t turning out rosy for everyone either.
That would not be the full extent of its actions, nor the end of the story. If you give it absolute power and a utility function that lets it use that power, it will eventually use that power in some way that someone, somewhere considers abusive.
Sure, but there’s lots of abuse in the world without an AI also.
Replace “AI” with “omni-powerful tyrannical dictator” and tell me if you still agree with the outcome.
If you need to specify the AI as bad (“tyrannical”) in advance, that’s begging the question. We’re debating why you feel that any omni-powerful algorithm will necessarily be bad.
Look up the origin of the word tyrant; that is the sense in which I meant it, as a historical parallel (the first Athenian tyrants were actually well liked).