For the record I think this would also be valuable! If as an alignment researcher your arguments don’t survive the scrutiny of skeptics, you should probably update away from them. I think maybe what you’re highlighting here is the operationalization of “shoot down”, which I wholeheartedly agree is the actual problem.
Re: the quantities of funding, I know you’re being facetious, but just to point it out, the economic values of “capabilities researchers being accidentally too optimistic about alignment” and of “alignment researchers being too pessimistic about alignment” are asymmetric.
If as an alignment researcher your arguments don’t survive the scrutiny of skeptics, you should probably update away from them.
If that’s your actual belief, you should probably update away now. People have tried to do this for years, and in fact in most cases the skeptics were not convinced.
Personally, I’m much more willing to say that the skeptics are wrong, and so I don’t update very much.
Re: the quantities of funding, I know you’re being facetious, but just to point it out, the economic values of “capabilities researchers being accidentally too optimistic about alignment” and of “alignment researchers being too pessimistic about alignment” are asymmetric.
Yeah, that was indeed just humor, and I agree with the point.
I was referring to your original statements.
I think you might be construing my statement as saying that you should take AI risk less seriously if you can’t convince the skeptics, as opposed to if the skeptics can’t convince you.
Ah, I see, that makes more sense, sorry for the misunderstanding.
(Fwiw I and others have in fact talked with skeptics about alignment.)