I was trying hard to do exactly what you recommend doing here, and focus only on the AI-related stuff that seems basically “locked in” at this point and will happen even if no AGI etc. I think +5 OOMs of compute to train AIs by 2040 makes sense in this framework because +2 will come from reduced cost, and it’s hard for me to imagine no one spending a billion dollars on an AI training run by 2040. I guess that could happen if there’s an AI winter, but that would be a trend-busting event… Anyhow, it seems like spending & self-driving cars are the two cases where we disagree? You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don’t happen either? Would you then agree that if, e.g., in 2025 we have self-driving cars or billion-dollar models, you’d be like “well, fuck, AGI is near”? (Or maybe you already have short timelines?)
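For concreteness, here’s a back-of-envelope sketch of how the +5 OOMs could decompose. The dollar figures are illustrative assumptions on my part (today’s largest runs costing roughly $1M, a $1B run by 2040, +2 OOMs of compute-per-dollar improvement), not numbers from the discussion itself:

```python
import math

# Illustrative back-of-envelope for the "+5 OOMs of training compute by 2040" claim.
# All dollar figures below are assumptions for the sketch, not claims from the thread.

spend_today = 1e6   # assumed cost of a large training run today (~$1M, illustrative)
spend_2040  = 1e9   # "hard to imagine no one spending a billion dollars" by 2040
cost_ooms   = 2     # assumed OOMs of compute-per-dollar improvement by 2040

spend_ooms = math.log10(spend_2040 / spend_today)  # OOMs from increased spending
total_ooms = spend_ooms + cost_ooms                # total OOMs of training compute

print(f"+{spend_ooms:.0f} OOMs from spending, +{cost_ooms} from cheaper compute "
      f"=> ~+{total_ooms:.0f} OOMs total")
# If today's largest runs are closer to $10M, the spending term drops to +2
# and the total to ~+4 OOMs, so the headline number is sensitive to the baseline.
```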
You think they are more closely connected to AGI than I did, such that conditionalizing on AGI not happening means those things don’t happen either? Would you then agree that if, e.g., in 2025 we have self-driving cars or billion-dollar models, you’d be like “well, fuck, AGI is near”?
Self-driving cars would definitely update me significantly toward shorter timelines. Billion-dollar models are more a downstream thing—i.e. people spending billions on training models is more a measure of how close AGI is widely perceived to be than a measure of how close it actually is. So upon seeing billion-dollar models, I don’t think I’d update much, because I’d already have updated on the things which made someone spend a billion dollars on a model (which may or may not actually be strong evidence for AGI being close).
In this world, I’d also expect that models are not a dramatic energy consumer (contra your #6), mainly because nobody wants to spend that much on them. I’d also expect chatbots to not have dramatically more usage than today (contra your #7): it will still mostly be obvious when you’re talking to a chatbot, and this will mostly be considered a low-status/low-quality substitute for talking to a human, and still only usable commercially for interactions in a very controlled environment (so e.g. no interactions where complicated or free-form data collection is needed). In other words, chatbot use-cases will generally be pretty similar to today’s, though bot quality will be higher. Similar story with predictive tools—use-cases similar to today, limitations similar to today, but generally somewhat better.
I would expect a lot of chatbot use-cases to be a mix of humans and bots. The bot can auto-generate text and then a human can check whether it’s correct, which takes less time than the human writing everything themselves.
Interesting. I think what you are saying is pretty plausible… it’s hard for me to reason about this stuff since I’m conditionalizing on something I don’t expect to happen (no singularity by 2040).