I thought about how I could most efficiently update my and Rohin’s views on this question.
My best ideas are:
1. Get information directly on this question. What can we learn from surveys of AI researchers, or from their public statements?
2. Get information on the question's reference class. What can we learn from how researchers working on other emerging technologies with potentially huge risks thought about those risks?
I did a bit of research and thinking along both lines, which updated me slightly towards expecting that AGI researchers will evaluate AGI risks appropriately.
I think a bunch more research here would be helpful. In particular, does anyone know of surveys of AI researchers on their views on safety?