Research Agenda Base Rates and Forecasting
An uninvestigated crux of the AI doom debate seems to be pessimism regarding current AI research agendas. For instance, I feel rather positive about ELK's prospects, but in trying to put some numbers on this feeling, I realized I have no sense of base rates for research programs' success, nor of their average time horizons. I can't think of any relevant Metaculus questions either.
What could be some relevant reference classes for the success odds of AI safety research programs? The field seems most similar to disciplines with both engineering and mathematical aspects, driven by applications; perhaps research agendas in protein engineering, materials science, etc. It would be especially interesting to see how many such agendas ended up not panning out, i.e. to catalogue events like 'a search for a material with X tensile strength and Y lightness, begun in year Z, was eventually abandoned in year Z+i'.
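If such a catalogue existed, turning it into a usable base rate is straightforward. As a minimal sketch (the outcome counts below are made up for illustration, and Laplace's rule of succession is just one simple choice of estimator):

```python
def laplace_base_rate(successes: int, failures: int) -> float:
    """Posterior mean success probability under a uniform prior
    (Laplace's rule of succession): (s + 1) / (s + f + 2)."""
    return (successes + 1) / (successes + failures + 2)

# Hypothetical catalogue: 7 materials-search agendas, 2 succeeded, 5 abandoned.
rate = laplace_base_rate(successes=2, failures=5)
print(round(rate, 3))  # → 0.333
```

The same catalogue would also give an empirical distribution over the `i` in 'abandoned in year Z+i', which is what you'd want for the time-horizon question.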