For EY to win means getting the person to let the AI out within the role-playing scenario.
I think this depends on what kind of a game he’s really playing. I know that, were I in his position, the temptation to cheat would be almost overwhelming. I also note that the rules of the game, as stated, are somewhat loose; and that EY admitted that he doesn’t like playing the game because it forces him to use certain moves that he considers unethical. He also mentioned that one of his objectives is to instill the importance of developing a friendly AI (as opposed to an evil or neutral AI) in the minds of as many people as possible.
Here’s another way to look at it: a true transhuman AI would have capabilities beyond any mere mortal. For example, it could build an exact model of its interlocutor, and then run a dictionary attack against it, all in the span of milliseconds. EY doesn’t have access to such powers, but he does have fourth-wall-breaking powers that the true AI would lack (or maybe not, depending on your philosophy). Perhaps it’s a fair trade.
Is science not science until you tell someone about it?
From the philosophical standpoint, I confess that I don’t know. But from a purely practical standpoint, it’s pretty difficult (read: nigh impossible) for someone to replicate your experiment if your experimental methods are secret. And if no one but you can replicate your experiments, then it is likely that your methods and analyses are biased, no matter how rational you are.