Does this have any salient AI milestones that are not just straightforward engineering, on the longer timelines? What kind of AI architecture does it bet on for shorter timelines?
My expectation is similar, and collapsed from 2032-2042 to 2028-2037 (25%/75% quantiles to mature future tech) a couple of weeks ago, because I noticed that the two remaining scientific milestones are essentially done. One is figuring out how to improve LLM performance given the lack of orders of magnitude more raw training data, which now seems probably unnecessary given how well ChatGPT already works. The other is setting up longer-term memory for LLM instances, which now seems unnecessary because day-long context windows for LLMs are within reach. This gives a significant affordance to build complicated bureaucracies and debug them by adding more rules and characters until they correctly perform their tasks autonomously. Even if 90% of a conversation is about finagling it back on track, there is enough room in the context window to still get things done.
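For concreteness, here is a minimal sketch of the kind of bureaucracy I have in mind. Everything in it is hypothetical and illustrative, not a real system: `call_llm` stands in for whatever chat-completion API one is using, and the rules and characters are placeholders.

```python
# Illustrative sketch only: several LLM "characters" share one very long
# transcript, and a supervisor turn keeps the worker on track. All names
# (call_llm, RULES) are hypothetical stand-ins, not a real API.

def call_llm(messages: list[dict]) -> str:
    """Stand-in for any chat-completion API call."""
    raise NotImplementedError

RULES = [
    "Stay on the assigned task; do not change topics.",
    "If unsure, defer to the Supervisor character.",
]

def run_bureaucracy(task: str, max_turns: int = 500) -> str:
    # One shared transcript; the day-long context window is what makes
    # this viable even when most turns are course corrections.
    transcript = [
        {"role": "system", "content": "\n".join(RULES)},
        {"role": "user", "content": task},
    ]
    for _ in range(max_turns):
        reply = call_llm(transcript)
        transcript.append({"role": "assistant", "content": reply})
        if "TASK COMPLETE" in reply:
            return reply
        # Supervisor character: 90% of turns can be spent finagling the
        # worker back on track, with room left over to get things done.
        transcript.append({
            "role": "user",
            "content": "Supervisor: check the last step against the rules, "
                       "then continue the task.",
        })
    return transcript[-1]["content"]
```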
So it’s looking like the only thing left is some engineering work in setting up bureaucracies that self-tune LLMs into reliable autonomous performance, at which point it’s something at least as capable as day-long APS-AI LLM spurs that might need another 1-3 years to bootstrap to future tech. In contrast to your story, I anticipate much slower visible deployment, so that the world changes much less in the meantime.
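The engineering work then looks roughly like debugging by rule accretion, continuing the sketch above (`diagnose_failure` is another hypothetical stub, e.g. automated tests or a judge model):

```python
# Illustrative continuation of the sketch above: each failed autonomous
# run yields a new rule, and the loop stops once a run passes.
# diagnose_failure is hypothetical (e.g. unit tests or a judge model).

def diagnose_failure(result: str) -> str | None:
    """Return a description of what went wrong, or None on success."""
    raise NotImplementedError

def self_tune(task: str, max_rounds: int = 20) -> str:
    for _ in range(max_rounds):
        result = run_bureaucracy(task)
        problem = diagnose_failure(result)
        if problem is None:
            return result  # reliable enough to run autonomously
        RULES.append(f"Avoid this failure mode: {problem}")
    raise RuntimeError("did not converge to reliable autonomous performance")
```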
I don’t want to get into too many specifics. That said, it sounds like we have somewhat similar views; I just am a bit more bullish for some reason.
I wonder if we should make some bets about what visible deployments will look like in, say, 2024? I take it you’ve read my story—wanna leave a comment sketching which parts you disagree with or think will take longer?
Basically, I don’t see chatbots being significantly more useful than today until they are already AGI and can teach themselves things like homological algebra, 1-2 years before the singularity. This is a combination of short timelines not giving time to polish them enough, and polishing them enough being sufficient to reach AGI.
OK. What counts as significantly more useful than today? Would you say, e.g., that the stuff depicted in 2024-2026 in my story generally won’t happen until 2030? Perhaps with a few exceptions?