The assumption that a misaligned AI would choose to kill us may be false. It would be very cheap for it to keep us alive, or to keep copies of us, and it might find running experiments on us marginally more valuable than extermination. See “More on the ‘human experimentation’ s-risk”:
https://www.reddit.com/r/SufferingRisk/wiki/intro/#wiki_more_on_the_.22human_experimentation.22_s-risk.3A