“Even if actively trying to push the field forward full-time I’d be a small part of that effort”
I think conditioning on something like 'we're broadly correct about AI safety' implies 'we're right about some important things about how AI development will go that the rest of the ML community is surprisingly wrong about'. In that world, we may be able to contribute as much as a much larger fraction of the field would, precisely because we're correct about things everyone else is wrong about.
I think your overall point still stands, but it does seem like you sometimes overestimate how obvious these things are to the rest of the ML community.