This was a remarkably successful attempt to summarise the whole issue in one post, well done.
On a side note, I think that getting clever people to think as if they were in the shoes of a cold, amoral AI can be an effective way to persuade them of the danger. “What would you do if some idiot tried to make you cure cancer, but you had near omnipotence and didn’t really care one bit whether humans lived or died?” It makes people go from using their intelligence to argue why containment would work to using it to think about how containment could fail.
When I first encountered the subject in the Sequences, I tried asking myself what I would do as an unaligned AI. Most of my hopes for containment died out within half an hour or so.