If your preference is that you should have as much decision-making ability for yourself as possible, why do you think that this preference wouldn’t be supported and even enhanced by an AI that was properly programmed to respect said preference?
Because it can’t do two things when only one choice is possible (e.g. save my child and the 1000 other children in this artificial scenario). You can design a utility function that tries to do a minimal amount of collateral damage, but you can’t make one which turns out rosy for everyone.
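To make the trade-off concrete, here is a toy sketch (purely illustrative; the numbers and the simple saved-minus-lost scoring are assumptions, not anything from the thread): with mutually exclusive actions, whichever one a utility maximizer picks, the other outcome is foreclosed.

```python
# Toy model of the forced-choice scenario above (numbers invented for
# illustration): two mutually exclusive rescues, so any utility maximizer
# must leave one side's outcome bad, however it weighs collateral damage.

# Each mutually exclusive action -> (children saved, children lost).
actions = {
    "save_my_child":     {"saved": 1,    "lost": 1000},
    "save_the_thousand": {"saved": 1000, "lost": 1},
}

def utility(outcome):
    """A minimal-collateral-damage utility: reward saves, penalize losses."""
    return outcome["saved"] - outcome["lost"]

best = max(actions, key=lambda name: utility(actions[name]))
print(best)                   # -> save_the_thousand
print(actions[best]["lost"])  # -> 1: someone still loses, whichever is picked
```

Swapping in a more elaborate utility function changes which action wins, not the fact that one side loses.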
E.g., would you be okay with an AI that defends your decision-making ability by defending humanity against those species of mind-enslaving extraterrestrials that are about to invade us? Or by curing Alzheimer’s? Or by stopping the tsunami that, by drowning you, would have ended any further say you had in your future?
That would not be the full extent of its actions and the end of the story. Give it absolute power and a utility function that lets it use that power, and it will eventually use that power in some way that someone, somewhere, considers abusive.
You can design a utility function that tries to do a minimal amount of collateral damage, but you can’t make one which turns out rosy for everyone.
Yes, but this current world without an AI isn’t turning out rosy for everyone either.
That would not be the full extent of its actions and the end of the story. Give it absolute power and a utility function that lets it use that power, and it will eventually use that power in some way that someone, somewhere, considers abusive.
Sure, but there’s lots of abuse in the world without an AI also.
Replace “AI” with “omni-powerful tyrannical dictator” and tell me if you still agree with the outcome.

If you need to specify the AI as bad (“tyrannical”) in advance, that’s begging the question. We’re debating why you feel that any omni-powerful algorithm would necessarily be bad.

Look up the origin of the word tyrant; that is the sense in which I meant it, as a historical parallel (the first Athenian tyrants were actually well liked).