I was referring to your original statements:
...if by the end they can propose an argument capability researchers can’t shoot down
[...]
I don’t think I could complete this challenge, yet I also predict that I would not then say that alignment is not a real unsolved challenge.
I think you might be construing my statement
if your arguments don’t survive the scrutiny of skeptics, you should probably update away from them.
as saying that you should take AI risk less seriously if you can't convince the skeptics, as opposed to if the skeptics can't convince you.
Ah, I see, that makes more sense, sorry for the misunderstanding.
(Fwiw I and others have in fact talked with skeptics about alignment.)