We might call this assumption a proportionality thesis: it holds that increases in intelligence
(or increases of a certain sort) always lead to proportionate increases in the capacity to design
intelligent systems. Perhaps the most promising way for an opponent to resist is to suggest that this
thesis may fail. It might fail because there are upper limits in intelligence space, as with resistance
to the last premise. It might fail because there are points of diminishing returns: perhaps beyond a
certain point, a 10% increase in intelligence yields only a 5% increase at the next generation, which
yields only a 2.5% increase at the next generation, and so on. It might fail because intelligence
does not correlate well with design capacity: systems that are more intelligent need not be better
designers. I will return to resistance of these sorts in section 4, under “structural obstacles”.
What is the proportionality thesis in the context of Intelligence Explosion?
The one I googled says something about the worst punishments for the worst crimes.
From David Chalmers’ paper:
Also note that Chalmers (2010) says that perhaps “the most promising way to resist” the argument for intelligence explosion is to suggest that the proportionality thesis may fail. Given this, Chalmers (2012) expresses “a mild disappointment” that of the 27 authors who commented on Chalmers (2010) for a special issue of Journal of Consciousness Studies, none focused on the proportionality thesis.
Thank you, Kaj and Luke! I am reading the singularity reply essay by Chalmers right now.