The idea that the pace of discovery slowed down in the 20th century is a parenthetical digression and has no bearing on the analysis in this post.
It seemed vaguely related to your exps and logs.
What measures are less silly?
There are many locally valid measures, but all become ridiculous when applied to the wrong times. It seems to me that GDP/capita is the least bad measure at the moment, but it very likely won't hold up too far in the past or too far in the future.
I don't quite follow the whole thing (too many Big Os and exponents for me to track), but wouldn't that slowdown be quite relevant, given your observations about S-curves in the development of microbes?
What's to stop us from saying that science has hit the peak of its S-curve in how much it could extract from the data, and that an AI would be similarly hobbled, especially if we bring in statistical studies like Charles Murray's _Human Accomplishment_, which argues that up to 1950 (long enough ago that recency effects ought to be gone) major scientific discoveries show a decline from peaks in the 1800s or whenever? (Or that mammalian intelligences have largely exhausted the gains?)
Eliezer may talk about how awesome a Solomonoff-inducting intelligence would be and write stories about how much weak superintelligences could learn, but that's still extrapolation, which could easily fail (e.g., we know the limits on maximum velocity and have relatively good ideas about how one could get near the speed of light, but we're not very far from where we began, even with awesome machines).
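To make the S-curve point concrete, here is a minimal sketch using the generic logistic form (the symbols K, r, and t_0 are illustrative parameters, not taken from the post): well before the midpoint the curve is approximately exponential, and well after it the curve flattens toward its ceiling, so exponential-looking progress so far cannot, by itself, tell you which side of the midpoint you are on.

```latex
% Generic logistic (S-curve) with ceiling K, growth rate r, and midpoint t_0.
% For t well below t_0:  S(t) is approximately K e^{r(t - t_0)}  (looks exponential)
% For t well above t_0:  S(t) approaches K                        (further gains shrink toward zero)
\[
  S(t) \;=\; \frac{K}{1 + e^{-r\,(t - t_0)}}
\]
```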
I see what you’re saying. That would lead to a more complicated analysis, which I’m not going to do, since people here don’t find this approach interesting.
> It seemed vaguely related to your exps and logs.

> There are many locally valid measures, but all become ridiculous when applied to the wrong times. It seems to me that GDP/capita is the least bad measure at the moment, but it very likely won't hold up too far in the past or too far in the future.
I have no idea what Kurzweil is doing.
It is related, which is why I mentioned it. But it isn’t a link in the chain of reasoning.
> I don't quite follow the whole thing (too many Big Os and exponents for me to track), but wouldn't that slowdown be quite relevant, given your observations about S-curves in the development of microbes?

> What's to stop us from saying that science has hit the peak of its S-curve in how much it could extract from the data, and that an AI would be similarly hobbled, especially if we bring in statistical studies like Charles Murray's _Human Accomplishment_, which argues that up to 1950 (long enough ago that recency effects ought to be gone) major scientific discoveries show a decline from peaks in the 1800s or whenever? (Or that mammalian intelligences have largely exhausted the gains?)

> Eliezer may talk about how awesome a Solomonoff-inducting intelligence would be and write stories about how much weak superintelligences could learn, but that's still extrapolation, which could easily fail (e.g., we know the limits on maximum velocity and have relatively good ideas about how one could get near the speed of light, but we're not very far from where we began, even with awesome machines).

> I see what you're saying. That would lead to a more complicated analysis, which I'm not going to do, since people here don't find this approach interesting.
If an idea is important and interesting to you, then I think that’s enough justification. The post isn’t negative, after all.