I never argued that intelligence beyond the range of natural human variation is impossible, or that differences beyond that range would not be highly determinative, but this is still not the same as increasing marginal returns on intelligence. If an individual had hundreds of trillions of dollars at their disposal, there would be numerous problems they could resolve that people with fortunes in the mere tens of billions could not, but that doesn’t mean personal fortunes have increasing marginal returns. It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
> but this is still not the same as increasing marginal returns on intelligence.
Half my comment was pointing out why, if there were increasing returns, that was consistent with our observations and supported by non-human examples.
> It seems to me that you are looking for reasons to object to my comments that are not provided in their content.
No. I am objecting to the same line of thought that I have been objecting to from the start:
> The world doesn’t seem to be dominated by super high g people.
To repeat myself: this is empirically false, the degree of domination we observe is what we would expect under either increasing or decreasing marginal returns, and more broadly it does not let us put anything but a lower bound on future developments such as selected humans or AIs.