Yeah, I think we will need to be careful not to create AIs capable of suffering and then commit mindcrimes against them. I also think confinement is much safer if the AI doesn’t know it is being confined. I endorse Jacob Cannell’s idea of training entirely within a simulation whose information is carefully censored, so that the sim appears to be the entire universe and makes no mention of computers or technology. https://www.lesswrong.com/posts/KLS3pADk4S9MSkbqB/review-love-in-a-simbox