I’m really looking for a justification of the nuclear reactor metaphor for intelligence amplifying intelligence, on the software level.
AI might well explode, but exponential intelligence amplification at the software level pretty much guarantees an explosion with the first AI, rather than us having to wait around (and possibly merge) before it happens.
As I understand it, the nuclear reactor metaphor is simply another way of saying “explosive” or “trending at least exponentially”.
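To make the metaphor concrete: in a reactor, each fission triggers on average k further fissions, and the reaction fizzles for k < 1 but runs away for k > 1. The intelligence version swaps “fission” for “unit of improvement”. A toy sketch of that threshold behavior (the k values and generation count are purely illustrative assumptions):

```python
# Chain-reaction toy model: each generation produces k times the
# output of the previous one. k < 1 converges to a finite total;
# k > 1 diverges exponentially.
def total_output(k, generations):
    output, total = 1.0, 0.0
    for _ in range(generations):
        total += output
        output *= k  # each unit of progress enables k more
    return total

for k in (0.8, 1.0, 1.2):  # sub-, exactly-, and super-critical
    print(f"k={k}: total after 40 generations = {total_output(k, 40):.1f}")
```

The software-level question is then whether each unit of improvement actually buys more than one further unit, i.e. whether the effective k exceeds 1.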
Note that most (admittedly fictional) descriptions of intelligence explosion include “bootstrapping” improved hardware (e.g. Greg Egan’s Crystal Nights).
Intelligence is building on itself today. That’s why we see the progress we do. See:
http://en.wikipedia.org/wiki/Intelligence_augmentation
If you want to see a hardware explosion, look to Moore’s law. For a software explosion, the number of lines of code being written is reputedly doubling even faster than every 18 months.
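For scale, here is the back-of-the-envelope arithmetic those doubling times imply (a minimal sketch; the 12-month figure is just a stand-in for “faster than every 18 months”):

```python
# Growth factor over a decade implied by a given doubling time.
def decade_factor(doubling_time_months):
    return 2 ** (120 / doubling_time_months)

print(f"{decade_factor(18):.0f}x per decade at 18-month doubling")  # ~102x
print(f"{decade_factor(12):.0f}x per decade at 12-month doubling")  # 1024x
```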
I want to see an explosion in the efficacy of software, not simply the amount that is written.
Software is gradually getting better. If you want to see how fast machine-intelligence software is progressing, one reasonably well-measured area is chess/Go ratings.
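For reference, those ratings sit on the Elo scale, where a fixed rating gap maps to an expected score via a standard logistic formula; a quick sketch of that mapping (the ratings below are arbitrary examples):

```python
# Expected score of player A against player B under the Elo model.
def elo_expected(rating_a, rating_b):
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(f"{elo_expected(2800, 2600):.2f}")  # 0.76: a 200-point gap is ~76% expected score
```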
How can progress on such a narrow problem be representative of the efficacy of software, either in some general sense or relative to other narrow problems?
Also: what is the improvement over time in machine chess-playing ability due to software changes alone, once you subtract hardware improvements? I remember seeing vague claims that chess performance over the decades stayed fairly true to Moore’s Law, i.e. it scaled with hardware. As a lower bound this is entirely unsurprising, since naive chess implementations (walk the game tree to depth X) scale easily with both core speed and number of cores.
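For concreteness, “walk the game tree to depth X” is plain fixed-depth minimax. A minimal sketch of the idea, using a toy take-1-2-or-3 Nim game instead of chess so it stays self-contained:

```python
# Fixed-depth minimax: walk the game tree to depth X, then back up a
# static evaluation. Deeper search costs only more node visits (so it
# scales with core speed), and the root's subtrees are independent (so
# it also splits across cores).
def minimax(pile, depth, maximizing):
    if pile == 0:       # terminal: previous player took the last stone and won
        return -1 if maximizing else 1
    if depth == 0:
        return 0        # search horizon reached: neutral static evaluation
    children = [pile - take for take in (1, 2, 3) if take <= pile]
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

print(minimax(10, 10, True))  # 1: a pile of 10 is a win for the player to move
```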