Thanks for this!
Slightly orthogonal comment here. One crux in some AI timelines might be how good we should expect AI to get at research taste, and how soon we should expect this to happen.
E.g. some fairly load-bearing claims in AI 2027:
“Human-only years from automated median researcher to superhuman coder: ~0-3 [years] assuming SC has 25th percentile research taste, more uncertainty if not.”
“The research taste gap alone is about 3x between their company’s median and best researchers.”
I’d be interested in [a future post noting a few of] your thoughts as to:
- To what extent using these categories (e.g. ‘25th percentile’, ‘3x median’) to refer to research taste makes sense,
- To what extent you think these are reasonable estimates (if that is a sensible question to ask), or what you think reasonable estimates should be,
- What you think this might mean for ‘AI progress’, if anything,
- Why all of the above is actually really hard to think about, and why we should be more uncertain than we are about how helpful AI will be here.
I think it would be reasonable not to prioritise this, but if it did strike you as an important question, you might be well placed to comment.
Needless to say, I found this a clear and useful post regardless :)