Does it? ML progress is famously achieved by atheoretical empirical tinkering, i.e. by having very well-developed intuitive research taste: the exact opposite of the well-posed math problems on which o1-3 shine. Something similar seems to be the case with programming: AIs seem bad at architecture/system design.
So it only speeds up the "drudge work", not the actual load-bearing theoretical work. That is a nonzero speedup, since it lets you test intuitive-theoretical ideas faster, but it's more or less isomorphic to having a team of competent-ish intern underlings.
Important questions! Thanks for the thoughts. More discussion about this here: https://www.lesswrong.com/posts/oC4wv4nTrs2yrP5hz/what-are-the-strongest-arguments-for-very-short-timelines?commentId=nZsFCqbC943hTeiRC