I emphatically do not want anything controlling the future of humanity, friendly or otherwise.
I find this concept of ‘controlling the future of humanity’ to be too vaguely defined. Let’s forget AIs for the moment and just talk about people, namely a hypothetical version of me. Let’s say I stumble across a vial of a bio-engineered virus that would destroy the whole of humanity if I release it into the air.
Am I controlling the future of humanity if I release the virus?
Am I controlling the future of humanity if I destroy the virus in a safe manner?
Am I controlling the future of humanity if I have the above decided by a coin-toss (heads I release, tails I destroy)?
Am I controlling the future of humanity if I create an online poll and let the majority decide the above?
Am I controlling the future of humanity if I just leave the vial where I found it, and let the next random person who encounters it face the same decision I faced?
Yeah, this old post makes the same point.
I want a say in my future and the part of the world I occupy. I do not want anything else making these decisions for me, even if it claims to know my preferences, and even if it really does.
To answer your questions, yes, no, yes, yes, perhaps.
If your preference is that you should have as much decision-making ability for yourself as possible, why do you think that this preference wouldn’t be supported and even enhanced by an AI that was properly programmed to respect said preference?
E.g., would you be okay with an AI that defends your decision-making ability by defending humanity against a species of mind-enslaving extraterrestrials about to invade us? Or by curing Alzheimer’s? Or by stopping the tsunami that, by drowning you, would have ended any further say you had in your future?
Because it can’t do two things when only one choice is possible (e.g., save both my child and the 1000 other children in this artificial scenario). You can design a utility function that tries to do a minimal amount of collateral damage, but you can’t make one that turns out rosy for everyone.
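To make that trade-off concrete, here is a minimal sketch (the parties, welfare numbers, and the maximin rule are all illustrative assumptions, not anything proposed in this thread): any rule forced to pick one of two mutually exclusive actions leaves some party below their preferred outcome.

```python
# Toy model of a forced either/or choice. All names and numbers are
# hypothetical, chosen only to illustrate the argument above.

# Welfare of each party under two mutually exclusive actions.
outcomes = {
    "save_my_child": {"me": 1, "other_parents": -1000},
    "save_the_1000": {"me": -1, "other_parents": 1000},
}

def maximin_utility(welfare):
    """Score an action by its worst-off party's welfare -- one way to
    formalize 'minimal collateral damage'."""
    return min(welfare.values())

# The rule picks the action whose worst outcome is least bad.
chosen = max(outcomes, key=lambda action: maximin_utility(outcomes[action]))
print("chosen action:", chosen)  # save_the_1000

# Whichever action is chosen, some party scores below the best they
# could have gotten under the alternative -- it isn't rosy for everyone.
for party, welfare in outcomes[chosen].items():
    best_possible = max(w[party] for w in outcomes.values())
    if welfare < best_possible:
        print(party, "does worse than under their preferred action")
```

Swapping maximin for total welfare or any other aggregation changes which action gets chosen, not the fact that somebody loses.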
That would not be the full extent of its action, nor the end of the story. If you give it absolute power and a utility function that lets it use that power, it will eventually use that power in some way that someone, somewhere, considers abusive.
Yes, but this current world without an AI isn’t turning out rosy for everyone either.
Sure, but there’s lots of abuse in the world without an AI also.
Replace “AI” with “omni-powerful tyrannical dictator” and tell me if you still agree with the outcome.
If you need to specify the AI as bad (“tyrannical”) in advance, that’s begging the question. We’re debating why you feel that any omni-powerful algorithm will necessarily be bad.
Look up the origin of the word “tyrant”; that is the sense in which I meant it, as a historical parallel (the first Athenian tyrants were actually well liked).