I don’t quite follow the whole thing (too many Big Os and exponents for me to track), but wouldn’t it be quite relevant given your observations about S-curves in the development of microbes?
What’s to stop us from saying that science has hit its S-curve’s peak of how much it could extract from the data, and that an AI would be similarly hobbled, especially if we bring in statistical studies like Charles Murray’s _Human Accomplishment_, which argues that up to 1950 (long enough ago that recency effects ought to be gone) major scientific discoveries show a decline from their peaks in the 1800s or thereabouts? (Or that mammalian intelligences have largely exhausted the gains?)
Eliezer may talk about how awesome a Solomonoff-inducting intelligence would be and write stories about how much weak superintelligences could learn, but that’s still extrapolation which could easily fail (e.g. we know the limits on maximum velocity and have relatively good ideas about how one could get near the speed of light, but we’re not very far from where we began, even with awesome machines).
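For what it’s worth, the S-curve-peak argument above can be made concrete with a toy logistic model. This is only an illustrative sketch: the carrying capacity, growth rate, inflection year, and the “discoveries per decade” framing are all invented for the example, not taken from Murray’s data.

```python
# Toy illustration of the S-curve (logistic) argument: cumulative "discoveries"
# saturate at a ceiling K, so the per-decade rate peaks (here around the 1800s)
# and then declines even though the cumulative total keeps slowly rising.
# All parameters are hypothetical, chosen only to show the shape of the curve.
import math

K = 1000.0   # hypothetical ceiling on extractable discoveries
r = 0.08     # hypothetical growth rate per year
t0 = 1850    # hypothetical inflection year (peak rate of discovery)

def cumulative(t):
    """Logistic cumulative discoveries by year t."""
    return K / (1.0 + math.exp(-r * (t - t0)))

for year in range(1700, 2001, 50):
    # discoveries made in the decade following `year`
    rate = cumulative(year + 10) - cumulative(year)
    print(f"{year}: ~{rate:6.1f}/decade, cumulative ~{cumulative(year):6.1f}")
```

The toy numbers only show that a declining per-decade rate after an 1800s peak is exactly what a logistic curve predicts as it nears its ceiling, which is the shape being gestured at; whether real scientific output actually follows such a curve is the contested empirical question.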
I see what you’re saying. That would lead to a more complicated analysis, which I’m not going to do, since people here don’t find this approach interesting.
If an idea is important and interesting to you, then I think that’s enough justification. The post isn’t negative, after all.