Eliezer’s stated reason, as I understand it, is that evolution’s work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee-level brains to current human brains. In fact, the increase in human intelligence per unit of evolutionary time was probably slightly greater than linear.
If we also assume that evolution did not apply increasing optimization pressure that could account for this greater-than-linear trend (an assumption worth questioning; I believe Tim Tyler would deny it), then this suggests that the slope of ‘intelligence gained per unit of optimization pressure applied’ is steep around the level of human intelligence, from the perspective of a process improving an intelligent entity. I am not sure this translates perfectly into your formulation using x’s and y’s, but I have sketched my best attempt below, and I think it is a sufficiently illustrative answer to your question. It is not a very concrete reason to believe Eliezer’s conclusion, but it is suggestive.
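To put it in those terms anyway (this is my own rough sketch, and the variable names are mine, not Eliezer’s or yours): let x be the cumulative optimization pressure evolution has applied and y(x) the resulting intelligence, and suppose pressure was applied at a roughly constant rate per unit of evolutionary time. Then the chimp-to-human observation above amounts to

\[
\frac{d^2 y}{dx^2} \;\gtrsim\; 0 \quad \text{over the chimp-to-human range,} \qquad \text{so} \qquad \left.\frac{dy}{dx}\right|_{y \,\approx\, \text{human}} \;\geq\; \left.\frac{dy}{dx}\right|_{y \,\approx\, \text{chimp}},
\]

i.e. returns to optimization pressure were not diminishing over that range, and the slope is at least as steep near human-level intelligence as it was earlier.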
Two objections to this: First, you have to extrapolate from the chimp-to-human range into the superintelligence range, and the gradient may not be the same in the two. Second, it seems to me that the more intelligent humans are, the more “the other humans in my tribe” becomes the dominant part of one’s environment; this leads to increased returns to intelligence, and consequently you do get an increasing optimisation pressure.
To your first objection, I agree that “the gradient may not be the same in the two” when comparing chimp-to-human growth with human-to-superintelligence growth. But, as I said, Eliezer’s stated reason mostly applies to the region near human intelligence. There is no consensus on how far the “steep” region extends, so I think your doubt is justified.
Your second objection also sounds reasonable to me, but I don’t know enough about evolution to confidently endorse or dispute it. It sounds similar to a point that Tim Tyler makes repeatedly in this sequence, though I haven’t investigated his views thoroughly. I believe his stance is as follows: since humans select mates using their brains, intelligence is so necessary for human survival, and sexual organisms are selected to pick fit mates, there has been a nontrivial feedback loop in which humans use their intelligence to get better at selecting intelligent mates. Do you endorse this? (I am not sure, myself.) A toy way of writing down that kind of feedback is below.
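If I try to formalize the feedback loop (this is my own toy model, with my own symbols, not something you or Tim Tyler has stated): let \(\bar{y}(t)\) be the mean intelligence of the population at time \(t\) and \(p(\bar{y}) > 0\) the selection pressure on intelligence. If smarter populations select more strongly on intelligence, so that \(p'(\bar{y}) > 0\), then

\[
\frac{d\bar{y}}{dt} \;=\; \alpha\, p(\bar{y}), \qquad \alpha > 0
\quad\Longrightarrow\quad
\frac{d^2\bar{y}}{dt^2} \;=\; \alpha\, p'(\bar{y})\,\frac{d\bar{y}}{dt} \;>\; 0,
\]

so intelligence grows faster than linearly in time even if ‘intelligence per unit of optimization pressure’ is not steep, which is exactly the alternative explanation of the greater-than-linear trend that the assumption above rules out.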