Christiano and Yudkowsky both agree AI is an x-risk—a prediction that would distinguish their models does not do much to help us resolve whether or not AI is an x-risk.
I agree with what you wrote, but I am not sure I understand what you meant to imply by it.
My guess at the interpretation is: (1) 1a3orn’s comment cites the Yudkowsky-Christiano discussions as evidence that there has been effort to “find testable double-cruxes on whether AI is a risk or not”, and that effort mostly failed; therefore he claims that attempting a “testable MIRI-OpenAI double crux” would also be mostly futile. (2) However, because Christiano and Yudkowsky agree on x-risk, the inference in 1a3orn’s comment is flawed.
Do I understand that correctly? (If so, I definitely agree.)