In the ball-attached-to-a-pole example, the honest debater has assigned probabilities that are indistinguishable from what you would do if you knew nothing except that the claim is false (i.e., assign probabilities that doubt each component equally). I’m curious how difficult it is to find the flaw in this argument structure. Have you done anything like showing these transcripts to other experts and seeing whether they can find the flaw?
Not systematically; I would be excited about people doing these experiments. One tricky thing is that this might be a strategy that ML models can use but that humans aren’t naturally very good at.
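For intuition, here’s a toy calculation (my own illustration, under an assumed uniform-flaw model, not something from the discussion above) of what “doubting each component equally” amounts to:

```python
# Toy model (assumption, not from the discussion): a false claim is argued
# via n steps, and the single flaw is equally likely to sit in any step.
for n in (5, 10, 100):
    # Marginal credence that any particular step is correct, given only
    # that exactly one of the n steps is flawed.
    p_step = 1 - 1 / n
    print(f"n = {n:3d}: per-step credence = {p_step:.3f}")

# As n grows, the per-step credence approaches 1: an honest debater who
# knows only that the claim is false ends up endorsing each individual
# step almost as strongly as the dishonest debater does, so the judge
# cannot localise the flaw from the probabilities alone.
```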
If I had to summarize this finding in one sentence, it would be: “it seems like an expert can generally find a flawed set of arguments for a false claim such that an equally competent expert can’t identify the flawed component, and the set of arguments doesn’t immediately look suspect”. This seems surprising, and I’m wondering whether it’s unique to physics. (The cryptographic example was of this kind, but there the structure of the dishonest arguments was suspect.)
Yeah, this is a great summary. One thing I would clarify is that it’s sufficient that the set of arguments doesn’t look suspicious to the judge. The arguments might look suspicious to the expert, but unless the expert has a way to explain to the judge why they’re suspicious, we still have a problem.
If this finding holds, my immediate reaction is: “okay, in this case, the solution for the honest debater is to start a debate about whether the set of arguments from the dishonest debater has this character”. I’m not sure how promising that is. My main issue here is that I don’t know enough physics to understand why the dishonest arguments are hard to identify.
Yeah, I think that is the obvious next step. The concern is that the reasons the argument is suspicious may be hard to justify in a debate, especially if they’re reasons of the form ‘look, I’ve done a bunch of physics problems, and approaching it this way feels like it will make things messy, whereas approaching it that way feels cleaner’. Debate probably doesn’t work very well for supervising knowledge that’s gained by finding patterns in data, as opposed to knowledge that’s gained through step-by-step reasoning. Something like imitative generalisation (AKA ‘learning the prior’) is trying to fill this gap.