--I agree that for the brute-force evolution approach, we should have a Gaussian around where we’d expect that to land. My “let’s just spread it evenly across all the OOMs between now and evolution” is only a reasonable first-pass approach to what our all-things-considered distribution should look like, taking into account evolution but also various other strategies. (Even better would be having a taxonomy of the various strategies and a Gaussian for each; this is sort of what Ajeya does. The problem is that insofar as you don’t trust your taxonomy to be exhaustive, the resulting distribution is untrustworthy as well.) I think it’s reasonable to extend the probability mass down to where we are now, because we are currently pretty much at the HBHL milestone, which seems like a relevant milestone to say the least.
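For concreteness, here is a minimal sketch of the two first-pass priors being discussed, in log10(FLOP) space. The anchor values, strategy list, and mixture weights below are purely illustrative assumptions, not numbers from this exchange or from Ajeya's report.

```python
# Rough sketch of two first-pass priors over training compute, in log10(FLOP).
# All numbers (HBHL anchor, evolution anchor, strategies, weights) are
# illustrative placeholders, not estimates from the discussion.
from scipy import stats

log_hbhl = 24.0        # assumed: roughly "where we are now" (HBHL milestone)
log_evolution = 41.0   # assumed: brute-force re-run of evolution

# (a) "Spread it evenly across all the OOMs between now and evolution":
#     a uniform density over log10(compute) from the HBHL anchor to evolution.
uniform_prior = stats.uniform(loc=log_hbhl, scale=log_evolution - log_hbhl)

# (b) A taxonomy of strategies with a Gaussian for each, mixed by weight
#     (the taxonomy-and-weights approach; the entries are hypothetical).
strategies = [
    # (mean log10(FLOP), std dev in OOMs, mixture weight)
    (27.0, 2.0, 0.4),   # e.g. scale up current methods plus modest search
    (33.0, 3.0, 0.4),   # e.g. heavier outer-loop / meta-learning search
    (41.0, 3.0, 0.2),   # e.g. brute-force evolution
]

def mixture_cdf(x):
    """P(required compute <= 10**x FLOP) under the strategy mixture."""
    return sum(w * stats.norm(mu, sd).cdf(x) for mu, sd, w in strategies)

for x in (27.0, 30.0, 35.0):
    print(f"P(<= 1e{x:.0f} FLOP): uniform {uniform_prior.cdf(x):.2f}, "
          f"mixture {mixture_cdf(x):.2f}")
```

The point of the contrast: the uniform prior puts mass at the low end by construction, while the mixture only does so insofar as the taxonomy includes a low-compute strategy, which is why trust in the taxonomy matters.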
If you’re in the position where searching 1000 HBHL hypotheses finds TAI, then the implicit assumption is that model scaling has already substituted for the majority of AI special sauce, and the remaining search is just an enabler for pinning down the few remaining details. That, or there wasn’t much special sauce in the first place.
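A back-of-envelope way to see this (my illustration, with made-up per-try hit rates): if each sampled hypothesis independently hits the remaining special sauce with probability p, a budget of N tries only helps once p is already on the order of 1/N.

```python
# If each sampled hypothesis hits the remaining special sauce with probability p,
# a budget of N independent tries succeeds with probability 1 - (1 - p)**N.
# The p values here are made up purely to illustrate the point.
N = 1000
for p in (1e-2, 1e-3, 1e-4, 1e-6):
    print(f"p = {p:.0e}: P(success within {N} tries) = {1 - (1 - p) ** N:.3f}")
```

So the “1000 tries finds TAI” framing only goes through if the per-try hit rate is already roughly 1/1000 or better, i.e., if scaling has already done most of the work.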
This seems right to me.
To maybe make my framing a bit more transparent, consider the example of a company trying to build useful, self-replicating nanoscale robots using an atomically precise 3D printer, under the conditions that 1) nobody there has a good idea of how to go about doing this, and 2) you have 1000 tries.
I like this analogy. I think our intuitions about how hard it would be might differ though. Also, our intuitions about the extent to which nobody has a good idea of how to make TAI might differ too.
To be clear I’m not saying nobody has a good idea of how to make TAI. I expect pretty short timelines, because I expect the remaining fundamental challenges aren’t very big.
What I don’t expect is that the remaining fundamental challenges go away through small-N search over large architectures, if the special sauce does turn out to be significant.
Sorry I didn’t see this until now!