Good question. You might think it would produce a better overall outcome to show the manipulative message, shocking the programmers into breaking the law to (possibly) halt the other AI, but an AI that does this is no longer an FAI.
Training an FAI should be kept free of any real-world ‘disaster scenario’ that it might conclude it needs more power to solve, because such scenarios amplify the risk of it becoming a UFAI itself, for several reasons (false information being one).