The given specifications don’t seem to be right, but an AAI (Agoraphobic AI) seems like a good toy problem on the way to an FAI. The design challenge is much simpler, but the general “Gandhi and the murder pill” situation of trying to get the AI to flinch away from anything which would take it outside its bounds is similar.