Again, the AI box experiment is a response to the claim “superintelligences are easy to box, because no level of competence at social engineering would suffice to let an agent talk its way out of a box”. If you have some other reason to think that superintelligences are hard to box—one that depends on a relevant difference between the experiment and a realistic AI scenario—then feel free to bring that idea up. But this constitutes a change of topic, not an objection to the experiment.
Since we are not permitted to see the session transcripts for ourselves, we can’t tell whether it is a good experiment.
I mean, the experiment’s been replicated multiple times. And you already know the reasons the transcripts were left private. I understand assigning a bit less weight to the evidence because you can’t examine it in detail, but the hypothesis that there’s a conspiracy to fake all of these experiments isn’t likely.
“If you have some other reason to think that superintelligences are hard to box—one that depends on a relevant difference between the experiment and a realistic AI scenario—then feel free to bring that idea up.”
Not all relevant differences between an experiment and an actual AI scenario can be accurately characterized as a “reason to think that superintelligences are hard to box”. For instance, imagine an experiment with no gatekeeper or AI party at all, where the result is decided by flipping a coin to determine whether the AI gets out. That experiment is very different from a realistic AI scenario, but one need not have any reason to believe that superintelligences are hard to box—or even hold any opinion at all on the question—to object to the experimental design.
For the AI box experiment as stated, one of the biggest flaws is that the gatekeeper is required to stay engaged with the AI and can’t ignore it. This allows the AI to win either by verbally abusing the gatekeeper until he no longer wants to stay in the conversation, or by overwhelming him with lengthy arguments that take time or outside assistance to analyze. Neither situation would be a win for an actual AI in a box.
“I mean, the experiment’s been replicated multiple times. And you already know the reasons the transcripts were left private. I understand assigning a bit less weight to the evidence because you can’t examine it in detail, but the hypothesis that there’s a conspiracy to fake all of these experiments isn’t likely.”
Refusing to release the transcripts causes problems beyond just hiding fakery. If the experiment is flawed in some way, for instance, withholding the transcripts could hide that—and it would be foolish to demand that everyone name possible flaws one by one and ask you “does this have flaw A?”, “does this have flaw B?”, and so on, in order to determine whether the experiment has any flaws at all. There are also cases where whether something counts as a flaw is a matter of opinion that can be argued, and someone else might consider something a flaw that the experimenter does not.
Besides, in a real boxed-AI situation, gatekeepers would likely be tested with AI-box experiments and given transcripts of experiment sessions to better prepare them for the real AI. An experiment that simulates AI boxing should likewise let participants read transcripts of other sessions.