The idea that the pace of discovery has slowed down is an extremely common and really obvious fallacy.
We only know that a discovery was important after it gets widely implemented, which happens decades after invention. Yet we count the discovery as happening not at implementation time but at invention time. So recent discoveries that will only be implemented in the future are not counted at all, artificially lowering our counts of important discoveries.
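This selection effect can be sketched with a toy simulation (purely illustrative; the one-discovery-per-year rate and the fixed 30-year lag are made-up assumptions, not data):

```python
# Toy model of the recognition-lag bias described above: discoveries
# happen at a constant rate, but we only recognize one as important
# after a 30-year implementation lag.
LAG = 30            # assumed years between invention and wide implementation
CURRENT_YEAR = 2020  # assumed vantage point of the observer

def counted_discoveries(start, end, lag=LAG, now=CURRENT_YEAR):
    """Count discoveries invented in [start, end) that we already
    recognize as important, i.e. whose implementation year <= now."""
    return sum(1 for year in range(start, end) if year + lag <= now)

# The true rate is identical in both half-centuries (one per year),
# but the counted rate drops for the recent period:
print(counted_discoveries(1900, 1950))  # 50 -- all recognized by now
print(counted_discoveries(1950, 2000))  # 41 -- post-1990 ones not yet counted
```

The apparent "decline" in the second half-century is entirely an artifact of counting by invention date while recognizing by implementation date.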
Also, if you use silly measures like railroad tracks per person, or max land mph, you will obviously not see much progress, as a large part of the progress is exploring new kinds of activities, not just making old activities more efficient. Any constant criterion like that will underestimate progress.
The idea that the pace of discovery has slowed down is an extremely common and really obvious fallacy.
The idea can’t be a fallacy. What you mean is that the usual argument for this idea contains an obvious fallacy.
It is an important distinction because reversed stupidity is not intelligence. Identifying the fallacy doesn’t prove that the pace of discovery has not slowed.
The idea that pace of discovery slowed down in the 20th century is a parenthetical digression, and has no bearing on the analysis in this post.
Also, if you use silly measures like railroad tracks per person, or max land mph, you will obviously not see much progress, as a large part of the progress is exploring new kinds of activities, not just making old activities more efficient. Any constant criterion like that will underestimate progress.
But it’s okay when Ray Kurzweil does it? He is underestimating progress by doing so? What measures are less silly?
The idea that pace of discovery slowed down in the 20th century is a parenthetical digression, and has no bearing on the analysis in this post.
It seemed vaguely related to your exps and logs.
What measures are less silly?
There are many locally valid measures, but all become ridiculous when applied to the wrong times. It seems to me that GDP/capita is the least bad measure at the moment, but it’s very likely it won’t work too far in the past or too far in the future.
I don’t quite follow the whole thing (too many Big Os and exponents for me to track), but wouldn’t it be quite relevant given your observations about S-curves in the development of microbes?
What’s to stop us from saying that science has hit its S-curve’s peak of how much it could extract from the data, and that an AI would be similarly hobbled, especially if we bring in statistical studies like Charles Murray’s _Human Accomplishment_, which argues that up to 1950, long enough ago that recency effects ought to be gone, major scientific discoveries show a decline from peaks in the 1800s or whenever? (Or that mammalian intelligences have largely exhausted the gains?)
Eliezer may talk about how awesome a Solomonoff-inducting intelligence would be and write stories about how much weak superintelligences could learn, but that’s still extrapolation, which could easily fail (e.g. we know the limits on maximum velocity and have relatively good ideas about how one could get near the speed of light, but we’re not very far from where we began, even with awesome machines).
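The S-curve worry can be made concrete with a toy logistic curve (an illustrative sketch; the parameters here are arbitrary, not taken from the post under discussion):

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """S-curve: near-exponential growth early on, saturation near `ceiling` late."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# The same curve looks like runaway progress or stagnation depending on
# where along it you happen to be sampling:
early_gain = logistic(-4) / logistic(-5)  # roughly 2.7x per step, looks exponential
late_gain = logistic(5) / logistic(4)     # roughly 1.01x per step, looks stalled
```

An observer fitting only the early points would extrapolate an exponential; one fitting only the late points would conclude progress has stopped — which is exactly why extrapolation from either regime can easily fail.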
I see what you’re saying. That would lead to a more complicated analysis, which I’m not going to do, since people here don’t find this approach interesting.
I don’t think there is any consensus on how to measure innovation. So, before dealing with the question, one must first be clear about what form of measurement you are using—otherwise nobody will know what you are talking about.
But it’s okay when Ray Kurzweil does it? He is underestimating progress by doing so?

I have no idea what Kurzweil is doing.

It seemed vaguely related to your exps and logs.

It is related, which is why I mentioned it. But it isn’t a link in the chain of reasoning.
I see what you’re saying. That would lead to a more complicated analysis, which I’m not going to do, since people here don’t find this approach interesting.
If an idea is important and interesting to you, then I think that’s enough justification. The post isn’t negative, after all.