Considering that I don’t know the AI’s origin, I have no reason to believe that the AI’s creators, even if well-intentioned, had the astronomical skill necessary to make the AI Friendly. So my prior P(AI is Friendly) is low enough that I am comfortable precommitting to never let the AI out of the box, no matter what. If the AI were smart enough, it could likely uncover enough emotional buttons that I wouldn’t stand much of a chance anyway, since I’m a primate.