Hmm, while I share your view about the timelines getting shorter and apparent capabilities growing by leaps and bounds almost daily, I still wonder whether the “recursively self-improving” part is anywhere on the horizon. Or maybe it isn’t necessary before everything goes boom? I would be more concerned if there were a feedback loop of improvement, potentially with “brainwashed” humans in the loop. Maybe it’s coming. I would also be concerned if/once there is a scientific or technological breakthrough thanks to an AI (not just protein folding, or brute-forcing more cases in a mathematical proof than a human ever could). And, yeah, physical-world navigation is kind of lagging, too. It all might change one day soon, of course. Someone trains an LLM on fundamental physics papers, and, bam! quantum gravity pops out. Or the proof of the Riemann hypothesis. Or maybe some open problem in computational complexity theory (not necessarily P != NP). Seems unlikely at this point, though.