The predictions about AI-adjacent things seem weird when we condition on AGI not taking off by 2040. Conditional on that, it seems like the most likely world is one where the current scaling trends play out on the current problems, but current methods turned out to not generalize very well to most real-world problems (especially problems without readily-available giant data sets, or problems in non-controlled environments). In other words, this turns out pretty similar to previous AI/ML booms: a new class of problems is solved, but that class is limited, and we go into another AI winter afterwards.
In that world, I’d expect deep learning to be used commercially for things which we’re already close to: procedural generation of graphics for games and maybe some movies, auto-generation of low-quality written works (for use-cases which don’t involve readers paying close attention) or derivative works (like translations or summaries), that sort of thing. In most cases, it probably won’t be end-to-end ML, just tools for particular steps. Prompt programming mostly turns out to be a dead end, other than a handful of narrow use-cases. Automated cars will probably still be right-around-the-corner, with companies producing cool demos regularly but nobody really able to handle the long tail. People will stop spending large amounts on large models and datasets, though models will still grow slowly as compute & data get cheaper.
I was trying hard to do exactly what you recommend doing here, and focus only on the AI-related stuff that seems basically “locked in” at this point and will happen even if no AGI etc. I think +5 OOMs of compute to train AIs by 2040 makes sense in this framework because +2 will come from reduced cost, and it’s hard for me to imagine no one spending a billion dollars on an AI training run by 2040. I guess that could happen if there’s an AI winter, but that would be a trend-busting event… Anyhow, it seems like spending & self-driving cars are the two cases where we disagree? You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don’t happen either? Would you then agree that if e.g. in 2025 we have self-driving cars, or billion-dollar models, you’d be like “well fuck, AGI is near”? (Or maybe you already have short timelines?)
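A quick back-of-the-envelope sketch of that OOM arithmetic, for concreteness. The +5 total and +2-from-cheaper-compute split are taken from the comment above; the $1M present-day baseline is an illustrative assumption chosen to match that split, not a figure from the discussion.

```python
# Back-of-the-envelope for the "+5 OOMs of training compute by 2040" claim.
# The +2 OOMs from cheaper compute is from the comment above; the $1M
# baseline training-run cost is an assumption for illustration only.
import math

baseline_spend_usd = 1e6         # assumed cost of a large training run today
future_spend_usd = 1e9           # a billion-dollar training run by 2040
ooms_from_cheaper_compute = 2.0  # ~100x better FLOP-per-dollar by 2040

ooms_from_spending = math.log10(future_spend_usd / baseline_spend_usd)
total_ooms = ooms_from_spending + ooms_from_cheaper_compute

print(f"OOMs from increased spending:   {ooms_from_spending:.0f}")      # 3
print(f"OOMs from cheaper compute:      {ooms_from_cheaper_compute:.0f}")  # 2
print(f"Total OOMs of training compute: {total_ooms:.0f}")              # 5
```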
You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don’t happen either? Would you then agree that if e.g. in 2025 we have self-driving cars, or billion-dollar models, you’d be like “well fuck, AGI is near”?
Self-driving cars would definitely update me significantly toward shorter timelines. Billion-dollar models are more a downstream thing—i.e. people spending billions on training models is more a measure of how close AGI is widely perceived to be than a measure of how close it actually is. So upon seeing billion-dollar models, I don’t think I’d update much, because I’d already have updated on the things which made someone spend a billion dollars on a model (which may or may not actually be strong evidence for AGI being close).
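A minimal numerical sketch of that screening-off point, using a toy chain where big spending depends on AGI-closeness only through perceived closeness. All the specific probabilities are made-up assumptions for illustration.

```python
# Toy Bayes-net sketch: AGI-closeness (A) drives perceived closeness (H),
# and a billion-dollar run (S) depends on A only through H.
# All probabilities are invented for illustration.
from itertools import product

p_a = 0.2                               # prior P(AGI is actually close)
p_h_given_a = {True: 0.9, False: 0.3}   # P(strong "AGI is near" signals | A)
p_s_given_h = {True: 0.8, False: 0.05}  # P($1B training run | those signals)

def joint(a, h, s):
    """P(A=a, H=h, S=s) under the chain A -> H -> S."""
    pa = p_a if a else 1 - p_a
    ph = p_h_given_a[a] if h else 1 - p_h_given_a[a]
    ps = p_s_given_h[h] if s else 1 - p_s_given_h[h]
    return pa * ph * ps

def posterior_a(**observed):
    """P(A=True | observed variables), by brute-force enumeration."""
    num = den = 0.0
    for a, h, s in product([True, False], repeat=3):
        world = {"a": a, "h": h, "s": s}
        if any(world[k] != v for k, v in observed.items()):
            continue
        p = joint(a, h, s)
        den += p
        if a:
            num += p
    return num / den

print("P(A | hype signals)             =", round(posterior_a(h=True), 3))
print("P(A | hype signals, $1B spend)  =", round(posterior_a(h=True, s=True), 3))
# The two posteriors are equal: once you have updated on the signals that
# caused the spending, seeing the spending itself adds no further evidence.
```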
In this world, I’d also expect that models are not a dramatic energy consumer (contra your #6), mainly because nobody wants to spend that much on them. I’d also expect chatbots to not have dramatically more usage than today (contra your #7) - it will still mostly be obvious when you’re talking to a chatbot, and this will mostly be considered a low-status/low-quality substitute for talking to a human, and still only usable commercially for interactions in a very controlled environment (so e.g. no interactions where complicated or free-form data collection is needed). In other words, chatbot use-cases will generally be pretty similar to today’s, though bot quality will be higher. Similar story with predictive tools—use-cases similar to today, limitations similar to today, but generally somewhat better.
I would expect a lot of chatbot use-cases to be a mix of humans and bots. The bot can auto-generate text and then a human can check whether it’s correct, which takes less time than the human writing everything themselves.
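A toy cost comparison of that draft-then-check workflow. The time estimates and the pass rate are made-up assumptions, just to show when reviewing a bot draft beats writing from scratch.

```python
# Toy model of the "bot drafts, human checks" workflow described above.
# All numbers are illustrative assumptions.

minutes_to_write_from_scratch = 10.0  # human writes the reply themselves
minutes_to_review_bot_draft = 2.0     # human reads and verifies a bot draft
p_draft_usable = 0.7                  # fraction of bot drafts that pass review

# If the draft fails review, assume the human falls back to writing it anyway.
expected_minutes_with_bot = (
    minutes_to_review_bot_draft
    + (1 - p_draft_usable) * minutes_to_write_from_scratch
)

print(f"Human only:     {minutes_to_write_from_scratch:.1f} min per reply")
print(f"Bot + human QA: {expected_minutes_with_bot:.1f} min per reply")  # 5.0
```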
Interesting. I think what you are saying is pretty plausible… it’s hard for me to reason about this stuff since I’m conditionalizing on something I don’t expect to happen (no singularity by 2040).