I’m a little confused by the agreement votes on this comment—it seems to me that the consensus around here is that s-risks in which currently-existing humans suffer maximally are very unlikely to occur. This seems an important practical question; could the people who agreement-upvoted elaborate on why they find this kind of thing plausible?
The examples discussed in, e.g., the Kaj Sotala interview linked further down the chain tend to concern things like “suffering subroutines”.
The assumption that a misaligned AI will choose to kill us may be false. It would be very cheap for it to keep us alive (or keep copies of us), and it may find running experiments on us marginally more valuable than killing us. See “More on the ‘human experimentation’ s-risk”:
https://www.reddit.com/r/SufferingRisk/wiki/intro/#wiki_more_on_the_.22human_experimentation.22_s-risk.3A