The flip side of this is the “Failure to Convince” argument: if experts being unable to knock down an argument is evidence for that argument, then the argument failing to convince the experts is evidence against it.
Assuming, of course, that there is evidence of the experts having actually considered the argument properly, rather than just giving the first or second answer that came to mind in response to it. (Or, to be exact, the first-thought-that-came-to-mind dismissal is evidence too, but only very weakly so.)
This is unfortunately reminiscent of the standard LW objection “well, they’re only smart in the lab” to any scientist who doesn’t support local tropes.
I don’t believe that is a standard LW objection.
It’s only strong evidence if there’s a good chance that a good argument will convince them. Would you expect a good argument to convince an expert creationist that he’s wrong?
Yes, this is correct (on conservation of expected evidence, if nothing else: if experts were all convinced, then we’d believe the argument more).
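To spell out the conservation-of-expected-evidence point: writing $H$ for “the argument is sound” and $E$ for “the experts are convinced” (labels chosen here purely for illustration), the law of total probability gives

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E).$$

If learning $E$ would raise our credence in $H$, i.e. $P(H \mid E) > P(H)$, then this identity forces $P(H \mid \lnot E) < P(H)$: both conditionals cannot sit on the same side of the prior. So the experts’ failure to be convinced really is evidence against the argument, however weak.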
Now, there’s a whole host of reasons to suspect that the experts are not being rational with the AI risk argument (and if we didn’t have those reasons, we’d be even more worried!), but the failure to convince experts is a (small) worry. Especially so with experts who have really grappled with the arguments and still reject them (though I can only think of one person in that category: Robin Hanson).