There’s a kind of “conservation of expected evidence” here: if the critiques had succeeded, you’d have reduced the probability of AI risk, so their failure must push you in the opposite direction.
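To make that identity concrete, here is a minimal sketch in Python with invented numbers (the specific probabilities are assumptions chosen purely for illustration): if a successful critique would have moved your credence down, the identity forces a failed critique to move it up.

```python
# Conservation of expected evidence, toy version (all numbers invented).
# H = "AI risk is real", E = "this critique succeeds".  The prior must
# equal the expected posterior:
#     P(H) = P(H|E) * P(E) + P(H|~E) * (1 - P(E))
# so if learning E would lower P(H), learning ~E has to raise it.

p_h = 0.50          # prior on H
p_e = 0.40          # how likely you thought the critique was to succeed
p_h_given_e = 0.30  # where a successful critique would have left you (below the prior)

# Solve the identity above for the posterior after the critique fails.
p_h_given_not_e = (p_h - p_h_given_e * p_e) / (1 - p_e)

print(round(p_h_given_not_e, 3))  # 0.633 -- necessarily above the 0.50 prior
assert p_h_given_not_e > p_h
```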
This doesn’t sound quite right. I think what matters is the amount of cogent, non-redundant thought about the subject demonstrated by the failed critiques. If the critique is just generally bad, it shouldn’t have an impact either way (I could randomly computer-generate millions of incoherent critiques of creationism, but that wouldn’t affect my credence).
Similarly, if a hundred experts make the same flawed counterargument, there would be rapidly diminishing returns after the first few.
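Roughly, the redundancy point can be sketched with made-up numbers (the likelihoods below are assumptions, not estimates): treating a hundred copies of the same failed argument as a hundred independent failures would wildly overstate the update toward “no sound counterargument exists.”

```python
# Toy sketch of diminishing returns from redundant critiques (numbers invented).
# H = "no sound counterargument exists".  A single failed critique has
# likelihood ratio L = P(fail | H) / P(fail | not H).

prior_odds = 1.0           # 50/50 prior odds on H
p_fail_given_h = 0.95      # a critique almost always fails if H is true
p_fail_given_not_h = 0.60  # a given expert may miss the sound counterargument anyway
L = p_fail_given_h / p_fail_given_not_h

independent_odds = prior_odds * L ** 100  # 100 genuinely independent failures
redundant_odds = prior_odds * L ** 1      # 100 copies of one argument ~ one datum

print(f"independent failures: posterior odds ~ {independent_odds:.3g}")
print(f"redundant failures:   posterior odds ~ {redundant_odds:.3g}")
```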
Actually, no: because this would be a sign that they aren’t capable of finding a better counterargument!
I thought of that. If they all think of the same thing, they could be going for something obvious but not quite right: something that pops out at this class of expert, so they don’t feel the need to go any further. But even if you have an answer to that, it doesn’t mean there isn’t a remaining problem that isn’t visible to surface inspection.
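One way to cash that out (again, the numbers are pure assumptions): if experts would offer only the obvious objection whether or not a deeper flaw exists, then seeing them do exactly that barely moves the odds on there being a deeper flaw.

```python
# Toy sketch of the "obvious objection" worry (numbers invented).
# D = "a deeper, non-obvious flaw exists"; O = "every expert offers only the
# same obvious, flawed counterargument".  If experts stop at the obvious
# objection either way, observing O hardly discriminates between D and not-D.

p_o_given_d = 0.80      # experts stop at the obvious objection anyway
p_o_given_not_d = 0.85  # ...and behave almost the same way if there is no deeper flaw

likelihood_ratio = p_o_given_d / p_o_given_not_d
print(round(likelihood_ratio, 3))  # ~0.941: close to 1, so the update is small
```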