Would you not agree that, assuming there's an easy way of doing it, separating the system from hyperexistential risk is a good thing for psychological reasons? Even if you think it's extremely unlikely, I'm not at all comfortable with the thought that our seed AI could screw up and design a successor that implements the opposite of our values; and I suspect at least some others share that anxiety.
For the record, I think that this is also a risk worth worrying about for non-psychological reasons.