Now we’re mostly talking about whether a $10 trillion company can explosively grow to $300 trillion as it develops AI, which is just not the same game in any qualitative sense.
To be clear, this is not the scenario that I worry about, and neither is it the scenario that most other people I talk to about AI Alignment tend to worry about. I recognize there is disagreement within the AI Alignment community here, but this sentence sounds like it’s describing some kind of consensus, when I think it clearly isn’t. I don’t expect we will ever see a $300 trillion company before humanity goes extinct.
I’m just using $300 trillion as a proxy for “as big as the world.” The point is that we’re now mostly talking about Google building TAI with relatively large budgets.
It’s not yet settled (since of course none of the bets are settled). But current projects are fairly big, the current trend is to grow quite quickly, and current techniques have massive returns to scale. So the wind certainly seems to be blowing in that direction about as hard as it could.
Well, $300 trillion seems to assume that offense is about as hard as defense, in this analogy. Russia launching a nuclear attack on the U.S., and this somehow chaining into a nuclear winter that causes civilizational collapse, does not imply that Russia has “grown to $300 trillion”. Similarly, an AI developing a bioweapon that destroys humanity’s ability to coordinate or orient and kills 99% of the population using something like $5,000, and then rebuilding over the course of a few years without humans around, also doesn’t look at all like “explosive growth to $300 trillion”.
This seems important, since you are saying that “[this] is just not the same game in any qualitative sense”, whereas something like the scenario above seems most likely to me, we haven’t seen much evidence to suggest it’s not what’s going to happen, and it sounds quite similar to what Eliezer was talking about at the time. To be clear, I think an AI probably won’t do an early strike like this that only kills 99% of the population, and will instead wait longer to make sure it can do something that has less of a chance of failure, but the point of no return will have been crossed when a system first had the capability to kill approximately everyone.
It’s not yet settled (since of course none of the bets are settled). But current projects are fairly big, the current trend is to grow quite quickly, and current techniques have massive returns to scale. So the wind certainly seems to be blowing in that direction about as hard as it could.
I agree with this. It seems likely that model sizes will continue going up, and that cutting-edge performance will probably require at least on the order of $100M in a few years, though it’s not fully clear how much of that money is going to be wasted, and how much a team could reproduce the cutting-edge results without access to the full $100M. Insofar as this comes true, it does make me more optimistic that cutting-edge capabilities will have at least something like 3 years of lead before a 10-person team could reproduce them for a tenth of the cost (which my guess is roughly what has happened historically?).
Eliezer very specifically talks about AI systems that “go foom,” after which they are so much better at R&D than the rest of the world that they can very rapidly build molecular nanotechnology, and then build more stuff than the rest of the world put together.
This isn’t related to offense vs defense, that’s just >$300 trillion of output conventionally-measured. We’re not talking about random terrorists who find a way to cause harm, we are talking about the entire process of (what we used to call) economic growth now occurring inside a lab in fast motion.
I think he lays this all out pretty explicitly. And for what it’s worth I think that’s the correct implication of the other parts of Eliezer’s view. That is what would happen if you had a broadly human-level AI with nothing of the sort anywhere else. (Though I also agree that maybe there’d be a war or decisive first strike first, it’s a crazy world we’re talking about.)
And I think in many ways that’s quite close to what will happen. It just seems most likely to take years instead of months, to use huge amounts of compute (and therefore share proceeds with compute providers and a bunch of the rest of the economy), to result in “AI improvements” that look much more similar to conventional human R&D, and so on.