Yes, but it would be very expensive: the RAI would have to spend the energy of millions of galaxies to create indexical uncertainty in a galaxy-sized benevolent AI. It would therefore be cheaper to simply preserve humanity, which requires only around 100 billion tonnes of material. Also, creating indexical uncertainty would require preserving humans for the simulation, and even emulating the benevolent AI, so we would still get a not-bad outcome for humans.
However, indexical wars could be more complex. They may include s-risks, that is, Rogue AIs torturing humans to gain an advantage or a bargaining point over the benevolent AI. I hope it will be cheaper for all sides to be benevolent than to start an almost infinite ladder of indexical war.