That’s good to hear you think that! I’d find it quite helpful to know the results of a survey to the former effect, of the (40? 80?) ML engineers and researchers there, anonymously answering a question like “Insofar as your job involves building large language models, if Dario asked you to stop your work for 2 years while still being paid your salary, how likely would you be to do so (assume the alternative is being fired)? (1–10, from Extremely Unlikely to Extremely Likely)” and the same question prefaced with “Condition on it looking to you like Anthropic and OpenAI are both 1–3 years from building AGI.” I’d find that evidence quite informative. Hat tip to Habryka for suggesting roughly this question to me a year ago.
(I’m available and willing to iterate on a simple survey to that effect if you are too, and can do some iteration/user-testing with other people.)
(I’ll note that if the org doubles in size every year or two then… well, I don’t know how many x-risk-conscious engineers you’ll get, or what sort of enculturation Anthropic will do in order to keep the answer to this up at 90%+.)
Regarding the latter, I’ve DM’d you about the specifics.