I’d say 70% for TAI in 5 years if you gave +12 OOM.
I think the single biggest uncertainty is whether we will be able to adapt sufficiently quickly to the new, larger compute budgets (i.e. how much do we need to change algorithms to scale reasonably? It’s a very unusual situation; it’s hard to scale up fast, and a lot depends on exactly how far that scaling has to go). Maybe I think there’s a 90% chance that TAI is in some sense possible (say: if you’d gotten to that much compute while remaining as well-adapted to it as we now are to our current levels of compute), and conditional on that an 80% chance that we’ll actually do it rather than running into problems. (0.9 × 0.8 = 0.72, which rounds to the 70% above.)
(I didn’t think about this too much, so don’t hold me to it. Also, I’m not exactly sure what your counterfactual is and didn’t read the original post in detail; I was just assuming that all existing and future hardware got 12 OOMs faster. If I gave numbers somewhere else that imply a much lower probability given +12 OOMs, then you should be skeptical of both.)
My counterfactual attempts to get at the question “Holding ideas constant, how much would we need to increase compute until we’d have enough to build TAI/AGI/etc. in a few years?” This is (I think) what Ajeya is getting at with her timelines framework, and her median is +12 OOMs. I think +12 OOMs is much more than 50% likely to be enough; I’d put it more like 80%, and that’s after having talked to a bunch of skeptics, attempted to account for unknown unknowns, etc. She mentioned to me that 80% seems plausible to her too, but that she’s trying to adjust downwards to account for biases, unknown unknowns, etc.
Given that, am I right in thinking that your answer is really close to 90%, since failure-to-achieve-TAI/AGI/etc-due-to-being-unable-to-adapt-quickly-to-magically-increased-compute “shouldn’t count” for purposes of this thought experiment?