Sorry it took me so long to reply; this comment slipped off my radar.
The latter scenario is more what I have in mind: powerful AI systems deciding that now’s the time to defect, joining together into a new coalition in which AIs call the shots instead of humans. It sounds silly, but it’s most accurately described in classic political terms: powerful AI systems launch a coup/revolution to overturn the old order and create a new one that is better by their lights.
I agree with your argument that the likelihood of a DSA (decisive strategic advantage) is higher than in previous accelerations, because society won’t be able to speed up as fast as the technology. This is sorta what I had in mind with my original argument for DSA; I was thinking that leaks/spying/etc. would not speed up nearly as fast as the relevant AI tech does.
Now I think this will definitely be a factor but it’s unclear whether it’s enough to overcome the automatic slowdown. I do at least feel comfortable predicting that DSA is more likely this time around than it was in the past… probably.
Your post on ‘against GDP as a metric’ argues more forcefully for the same thing that I was arguing for: that ‘the economic doubling time’ stops being so meaningful, since technological progress speeds up abruptly but the other kinds of progress that adapt to it lag behind before the faster technological progress affects them too?
So we’re on the same page that ‘the economic doubling time’ probably won’t capture everything that’s going on all that well. That leads to another problem: how do we predict what level of capability is necessary for a transformative AI to obtain a DSA (or to reach the point of no return, PONR, for one)?
I notice that in your post you don’t propose an alternative metric to GDP, which is fair enough, since most of your arguments seem to lead to the conclusion that it’s almost impossibly difficult to predict in advance what level of advantage over the rest of the world, and in which areas, is actually needed to conquer the world, given that we can analogize the AGI situation to persuasion tools or to conquistador-analogues who had relatively small tech advantages.
I think that there is still a useful role for raw economic power measurements, in that they provide a sort of upper bound on how much capability difference is needed to conquer the world. If an AGI acquires resources equivalent to controlling >50% of the world’s entire GDP, it can probably take over the world if it goes for the maximally brute-force approach of just using direct military force. Presumably the PONR for that situation would come a while before then, but at least we know that an advantage of that size would be big enough without making any assumptions about the effectiveness of unproven technologies of persuasion or manipulation, or about specific vulnerabilities in human civilization.
So we can take our estimate of how the economic doubling time may change, anchor on that gap, and estimate downward based on how soon we think the PONR comes, or on how many ‘cheat’ pathways there are that don’t involve economic growth.
The whole idea of using brute economic advantage as an upper-limit ‘anchor’ I got from Ajeya’s post about using biological anchors to forecast what’s required for TAI. If we could find a reasonable lower bound for the amount of advantage needed to attain a DSA, we could do the same kind of estimated distribution between the two anchors. We would just need that lower limit; maybe there’s a way of estimating it from the upper limit of human ability, since we know that no actually existing human has used persuasion to take over the world, though as you point out some have come relatively close.
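To make that concrete, here’s a minimal sketch of the kind of calculation I have in mind. Every number in it (the 50%-of-GWP upper anchor, the 0.5% lower anchor, the log-uniform spread between them) is a made-up placeholder for illustration, not an actual estimate:

```python
import numpy as np

# Hypothetical anchors, expressed as the share of gross world product (GWP)
# an AI faction effectively controls. Placeholder values, not real estimates.
UPPER_ANCHOR = 0.50   # brute-force bound: >50% of GWP assumed sufficient for a DSA
LOWER_ANCHOR = 0.005  # guessed ceiling on what human-level persuasion alone has leveraged

def p_sufficient_for_dsa(resource_share: float, n_samples: int = 100_000) -> float:
    """Probability that a given GWP share suffices for a DSA, assuming the
    unknown 'true threshold' is log-uniformly distributed between the anchors."""
    rng = np.random.default_rng(0)
    log_thresholds = rng.uniform(np.log(LOWER_ANCHOR), np.log(UPPER_ANCHOR), n_samples)
    return float(np.mean(resource_share >= np.exp(log_thresholds)))

for share in (0.01, 0.05, 0.20, 0.50):
    print(f"{share:.0%} of GWP -> P(enough for a DSA) ~= {p_sufficient_for_dsa(share):.2f}")
```

Swapping the log-uniform spread for something more opinionated (e.g. weighting toward the lower anchor if you take persuasion tools seriously) is exactly the kind of judgment call the anchoring framing is supposed to expose.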
I realize that’s not a great method, but is there any better alternative for trying to predict what level of capability is necessary for a DSA, given that this is a situation we’ve never encountered before? Or perhaps you just think that anchoring your prior estimate to an economic-power advantage as an upper bound is so misleading that it’s worse than a completely ignorant prior. In that case, we might have to say that there are just so many unprecedented ways a transformative AI could obtain a DSA that we can have no idea in advance what capability is needed, which doesn’t feel quite right to me.
I wouldn’t go as far as ‘almost impossibly difficult.’ The reason I didn’t propose an alternative metric to GDP was that I didn’t have a great one in mind, and the post was plenty long enough already. I agree that it’s not obvious a good metric exists, but I’m optimistic that we can at least make progress by thinking more. For example, we could start by enumerating the different kinds of skills (and combos of skills) that could potentially lead to a PONR if some faction, or AIs generally, had enough of them relative to everyone else. (I sorta start such a list in the post.) Then we consider each skill separately and come up with a metric for it.
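Just to illustrate the shape of that exercise, here’s a toy skills-to-metrics mapping; both the skills and the candidate metrics are hypothetical placeholders, not the list from the post:

```python
# Toy illustration of the "enumerate skills, then pick a metric for each" step.
# Both columns are hypothetical placeholders, not proposals from the post.
candidate_skill_metrics = {
    "persuasion":         "win rate against humans in controlled debate/negotiation trials",
    "cyber offense":      "fraction of hardened benchmark systems compromised undetected",
    "R&D acceleration":   "speed-up factor on held-out engineering tasks vs. expert teams",
    "strategic planning": "performance vs. humans in long-horizon wargame simulations",
}

for skill, metric in candidate_skill_metrics.items():
    print(f"{skill:<20} -> {metric}")
```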
I’m not sure I understand your proposed methodology fully. Are you proposing that we use something like Roodman’s model to forecast TAI and then adjust downwards based on how much sooner we think the PONR could come? Unfortunately, I don’t think GWP growth can be forecast that accurately, since it depends on how AI capabilities increase.
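For concreteness, here is a toy, deterministic stand-in for the extrapolate-then-adjust procedure I’m asking about. This is not Roodman’s actual stochastic model, and every parameter below is a made-up placeholder, so the printed dates mean nothing; it just shows the mechanics of reading an explosive-growth date off a hyperbolic fit and shifting it earlier for the PONR:

```python
# Toy stand-in for "fit a hyperbolic growth model to GWP, read off the
# finite-time singularity, then shift earlier by a guessed PONR lead time".
# NOT Roodman's actual stochastic model; all parameters are placeholders.
def singularity_year(y0: float, a: float, b: float, t0: float) -> float:
    """Blow-up date of the solution to dY/dt = a * Y**(1 + b), with b > 0."""
    return t0 + 1.0 / (a * b * y0**b)

Y0 = 130.0   # rough GWP today, in trillions of dollars (placeholder)
A = 0.003    # placeholder growth coefficient
B = 0.55     # placeholder superexponential exponent
T0 = 2024.0  # reference year

t_explosion = singularity_year(Y0, A, B, T0)
ponr_lead_years = 10.0  # pure guess at how much earlier the PONR comes

print(f"Extrapolated explosive-growth date: {t_explosion:.0f}")
print(f"Adjusted PONR estimate:             {t_explosion - ponr_lead_years:.0f}")
```

The real difficulty is the one flagged above: the growth parameters themselves depend on how AI capabilities develop, so the fit is doing most of the work.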