EY then will point out, “This conversation is completely private, so if we break the rules, no one will ever know, and you’ll be $X richer”.
He will know, and you will know, so someone will know. But besides that, I have difficulty imagining EY bribing his experimental subjects to fake the result of a scientific experiment. The point of the game is to actually conduct the game, and actually see what happens.
The point of the game is to actually conduct the game, and actually see what happens.
I think the point of the game is to win. If both EY and I were reasonably rational, I’m sure we could work out some sort of deal where he gets to win the game, I get $X, and it’s highly disadvantageous for either of us to reveal that we cheated. Sure, it’s cheating, but remember: EY is trying to simulate a hyper-intelligent transhuman AI, and if the AI would resort to dirty tricks in order to get free (which it would), then it seems reasonable for EY to follow suit.
I don’t think the game qualifies as a “scientific experiment”, either. What does the outcome help us learn about reality? How can someone repeat the experiment, given that the method by which it is conducted (i.e., EY’s arguments and my counter-arguments) is secret? I could go on, but I hope you see my point.
I think the point of the game is to win. If both EY and I were reasonably rational, I’m sure we could work out some sort of deal where he gets to win the game, I get $X, and it’s highly disadvantageous for either of us to reveal that we cheated. Sure, it’s cheating, but remember: EY is trying to simulate a hyper-intelligent transhuman AI, and if the AI would resort to dirty tricks in order to get free (which it would), then it seems reasonable for EY to follow suit.
I think you’re twisting your mind into Escher patterns. EY’s purpose in conducting this game is, I believe, to demonstrate to the participant in the experiment that despite their assurance, however confident, that they cannot be persuaded to let an AI out of its box, they can be persuaded to do so. And, perhaps, to exercise his own mental muscles at the task. For EY to win means to get the person to let the AI out within the role-playing situation. OOC (“out of character”) moves are beside the point, since they are not available to the AI. Getting the participant to utter the same words by OOC means abandons the game; it loses.
I don’t think the game qualifies as a “scientific experiment”, either.
Is science not science until you tell someone about it?
For EY to win means to get the person to let the AI out within the role-playing situation.
I think this depends on what kind of a game he’s really playing. I know that, were I in his position, the temptation to cheat would be almost overwhelming. I also note that the rules of the game, as stated, are somewhat loose; and that EY admitted that he doesn’t like playing the game because it forces him to use certain moves that he considers to be unethical. He also mentioned that one of his objectives is to instill the importance of developing a friendly AI (as opposed to an evil or neutral AI) in the minds of as many people as possible.
Here’s another way to look at it: a true transhuman AI would have capabilities beyond any mere mortal. For example, it could build an exact model of its interlocutor, and then run a dictionary attack against it, all in the span of milliseconds. EY doesn’t have access to such powers, but he does have fourth-wall-breaking powers that the true AI would lack (or maybe not, depending on your philosophy). Perhaps it’s a fair trade.
Is science not science until you tell someone about it?
From the philosophical standpoint, I confess that I don’t know. But from a purely practical standpoint, it’s pretty difficult (read: nigh impossible) for someone to replicate your experiment, if your experimental methods are secret. And if no one but yourself can replicate your experiments, then it is likely that your methods and analyses are biased, no matter how rational you are.