So, how does the update to the AI and compute trend factor in?
It is irrelevant to this post, because this post is about what our probability distribution over orders of magnitude (OOMs) of compute should look like. Once we have that distribution, we can then ask: how quickly (in clock time) will we progress through it, i.e. explore more OOMs of compute? That is where the AI and Compute trend, and the update to it, become relevant.
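To make that two-step structure concrete, here is a minimal sketch of how a probability distribution over "how many extra OOMs would be enough" combines with a separate schedule of "how many OOMs we actually get by each year" to give a timeline. Every number below is a made-up placeholder for illustration, not an estimate from this post or from Ajeya's report.

```python
# Minimal sketch of the two-step structure described above. All numbers
# are made-up placeholders, not estimates from the post or from Ajeya's report.

# Step 1: a distribution over how many extra OOMs of compute (on top of
# 2020 levels, with 2020's ideas) would be enough, expressed as a CDF.
required_ooms_cdf = {2: 0.10, 4: 0.30, 6: 0.55, 8: 0.75, 12: 0.90}  # P(needed <= k OOMs)

# Step 2: a separate guess about how many cumulative OOMs we actually
# get by each calendar year.
ooms_available_by_year = {2025: 2, 2030: 4, 2040: 6, 2055: 8, 2090: 12}

# Combining them: P(enough compute exists by year Y)
#   = P(required OOMs <= OOMs available by Y).
for year, ooms in ooms_available_by_year.items():
    print(f"{year}: P(enough compute) ~= {required_ooms_cdf[ooms]:.2f}")
```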
But not super relevant, IMO. The AI and Compute trend was way too fast to be sustained; people said so at the time. The recent halt in the trend is not surprising. What matters is what the trend will look like going forward, e.g. over the next 10 years, the next 20 years, etc.
That forward trend can be broken down into two components: cost reduction (how much compute a dollar buys) and spending increase (how many dollars are spent on the largest training runs).
Ajeya separately estimates each component for the near term (5 years) and for the long-term trend beyond.
I mostly defer to her judgment on this, with large uncertainty. (Ajeya thinks costs will halve every 2.5 years, which is slower than the historical average of a halving roughly every 1.5 years, but justifiable given that Moore's Law is said to be dying. As for spending increases, she thinks it will take decades to ramp up to trillion-dollar expenditures, whereas I am more uncertain and think it could maybe happen as soon as 2030.)
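As a rough illustration of how those two components could stack up by 2030, here is a back-of-envelope sketch. The 2.5-year cost-halving figure and the trillion-dollar spending level are the ones mentioned above; the ~$10M assumed cost of a 2020-frontier training run is my own placeholder, not a number from the discussion.

```python
import math

# Back-of-envelope: OOMs of extra training compute by 2030 from the two
# components discussed above. The 2.5-year cost halving and the $1T
# spending level come from the discussion; the $10M 2020 baseline run
# cost is an assumed placeholder.

def ooms_from_cost_reduction(years: float, halving_time_years: float = 2.5) -> float:
    """OOMs of extra compute per dollar after `years` of cost halvings."""
    return (years / halving_time_years) * math.log10(2)

def ooms_from_spending(start_dollars: float, end_dollars: float) -> float:
    """OOMs of extra compute from simply spending more at fixed cost."""
    return math.log10(end_dollars / start_dollars)

hardware = ooms_from_cost_reduction(10)       # 2020 -> 2030: ~1.2 OOMs
spending = ooms_from_spending(1e7, 1e12)      # $10M run -> $1T run: 5.0 OOMs
print(f"cost reduction: {hardware:.1f} OOMs, spending: {spending:.1f} OOMs, "
      f"total: {hardware + spending:.1f} OOMs")   # ~6.2 OOMs total
```

On those (partly assumed) numbers, the +6 OOMs scenario is roughly what you get if spending really does reach the trillion-dollar scale by 2030.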
I feel quite confident in the following claim: conditional on +6 OOMs being enough with 2020's ideas, it'll happen by 2030. Indeed, conditional on +8 OOMs being enough with 2020's ideas, I think it'll probably happen by 2030. If you are interested in more of my arguments for this, I have some slides I could share, and I'd love to chat with you about it! :)
If the AI and compute trend is just a blip, then doesn't that return us to the previous trend line in the graph you show at the beginning, where we progress about 2 OOMs per decade? (More accurately, 1 OOM every 6-7 years, or 8 OOMs in 5 decades.)
Ignoring AI and Compute, then: if we believe that +12 OOMs in 2016 means great danger in 2020, we should believe that roughly 75 years after 2016 (12 OOMs at ~6.25 years per OOM) we are at most four years from the danger zone.
Whereas if we extrapolate the AI-and-Compute trend, +12 OOMs is like jumping 12 years into the future, so the idea of risk by 2030 makes sense.
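To spell out the arithmetic behind those two extrapolations, here is a rough sketch. The ~6.25-years-per-OOM figure is the "8 OOMs in 5 decades" rate from above; the 3.4-month doubling time is the figure reported in OpenAI's AI and Compute post; both are used here only as stylized constants.

```python
import math

# Rough comparison of the two growth rates being contrasted above.

def years_to_gain_ooms(ooms: float, years_per_oom: float) -> float:
    """Years needed to gain `ooms` orders of magnitude at a constant rate."""
    return ooms * years_per_oom

historical_years_per_oom = 50 / 8                      # "8 OOMs in 5 decades": ~6.25 years/OOM
ai_compute_years_per_oom = (3.4 / 12) / math.log10(2)  # doubling every 3.4 months: ~0.94 years/OOM

print(f"historical rate:     ~{years_to_gain_ooms(12, historical_years_per_oom):.0f} years")  # ~75
print(f"AI-and-Compute rate: ~{years_to_gain_ooms(12, ai_compute_years_per_oom):.0f} years")  # ~11
```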
So I don’t get how your conclusion can be so independent of AI-and-compute.
Sorry, somehow I missed this. Basically, the answer is that we definitely shouldn't just extrapolate the AI and Compute trend into the future, and Ajeya's predictions and mine are not doing that. Instead we are assuming something more like the historic 2-OOMs-per-decade trend, combined with some amount of increased spending conditional on us being close to AGI/TAI/etc. Hence my conditional claim above:

Conditional on +6 OOMs being enough with 2020's ideas, it'll happen by 2030. Indeed, conditional on +8 OOMs being enough with 2020's ideas, I think it'll probably happen by 2030.
If you want to discuss this more with me, I'd love to. How about we book a call?