I think it’s mostly right, in the sense that any given novel research artifact produced by Visionary A is unlikely to be useful for whatever research Visionary B is currently pursuing. But I think there’s a more diffuse speed-up effect from scale, based on the following already happening:
“the intuitions that lead one to a solution might be the sort of thing that you can only see if you’ve been raised with the memes generated by the partial-successes and failures of failed research pathways”
The one thing all the different visionaries pushing in different directions do accomplish is mapping out the problem domain. If you’re just prompted with the string “ML research is an existential threat”, and you know nothing else about the topic, there’s a plethora of obvious-at-first-glance lines of inquiry you can go down. Would prosaic alignment somehow not work, and if so, why? How difficult would it be to interpret an ML model’s internals? Can we prevent an ML model from becoming an agent? Is there some really easy hack to sidestep the problem? Would intelligence scale so sharply that the first AGI failure kills us all? If all you have to start with is just “ML research is an existential threat”, all of these look… maybe not equally plausible, but not like something you can dismiss without at least glancing in that direction. And each glance takes up time.
On the other hand, if you’re entering the field late, after other people have looked in these directions already, surveying the problem landscape is as easy as consuming their research artifacts. Maybe you disagree with some of them, but you can at least see the general shape of the thing, and every additional bit of research clarifies that shape even further. That clarity lets you evaluate the problem better, and even if you end up disagreeing with everyone else’s priorities, the clearer the vision, the better you should be able to triangulate your own path.
So every bit of research probabilistically decreases the “distance” between the solution and the point at which a new visionary starts. Or maybe it doesn’t so much decrease the distance as allow a new visionary to plot a path that looks less like a random walk and more like a straight line.
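As a purely illustrative toy model of that intuition (with entirely made-up numbers, not a claim about the actual field): treat the domain as N candidate lines of inquiry, exactly one of which pans out, and treat each “glance” as one unit of time. Prior research artifacts let a newcomer rule out some of the dead ends before they start looking. A quick Monte Carlo sketch in Python:

```python
import random

def expected_glances(num_directions: int, num_ruled_out: int, trials: int = 100_000) -> float:
    """Estimate how many lines of inquiry a newcomer glances at before
    hitting the fruitful one, when prior research has already ruled out
    `num_ruled_out` of the dead ends. Toy model, hypothetical numbers."""
    total = 0
    for _ in range(trials):
        # One direction is fruitful; the rest are dead ends, some already mapped out.
        remaining_dead_ends = (num_directions - 1) - num_ruled_out
        directions = ["fruitful"] + ["dead end"] * remaining_dead_ends
        random.shuffle(directions)  # the newcomer can't tell which is which up front
        total += directions.index("fruitful") + 1  # glances spent before finding it
    return total / trials

# Ten obvious-at-first-glance directions, one of them fruitful:
print(expected_glances(10, num_ruled_out=0))  # ~5.5 glances starting from scratch
print(expected_glances(10, num_ruled_out=6))  # ~2.5 glances after reading prior work
```

The pruning doesn’t hand you the answer, but it cuts the expected time spent glancing down blind alleys, which is the “less like a random walk, more like a straight line” effect in miniature.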