My impression is that Rohin has a similar model, although he might place more importance on the last step than I do at this point in the research.
I agree with this summary.
I suspect Daniel Kokotajlo is in a similar position as me; my impression was that he was asking that the output be that-which-makes-AI-risk-arguments-work, and wasn’t making any claims about how the research should be organized.
Good to know that my internal model of you is correct at least on this point.
For Daniel, given his comment on this post, I think we actually agree, but that he puts more explicit emphasis on the that-which-makes-AI-risk-arguments-work, as you wrote.