I’m a big supporter of life extension, but never being able to choose to die is a literal hell. As dark as it is, if these scenarios are likely, it seems the rational thing to do is to die before AGI arrives.
Killing all of humanity is bad enough, but how concerned should we be about even worse scenarios?
If you really expect unfriendly superintelligent AI, you should also consider that it will be able to resurrect the dead (perhaps by running simulations of the past in very large numbers), so suicide will not help.
Moreover, such an AI may deliberately target people who tried to escape, in order to acausally deter them from suicide.
However, I am not afraid of this, as I assume that Friendly AIs could “save” minds from the hells of bad AIs by creating them in even larger numbers in simulations.