What if, as we approach the Singularity, it is provably or near-provably necessary to do unethical things like killing a few people or letting them die to avoid the worst of Singularity outcomes?
(I am not referring here to whether we may create non-Friendly AGI. I am referring to scenarios even before the AGI “takes over.”)
Such scenarios do not seem impossible, and they create ethical dilemmas along the lines of what Yudkowsky mentions here.