Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I’ve been pretty disappointed with Egan’s take on AI and the existential-risk crowd.
A recurring theme in Egan’s fiction is that “all minds face the same fundamental computing bottlenecks”, serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story—the kind that need only be plausible (e.g., “an asteroid is on course to hit us”), and didn’t think much more about it.
But from what I recall of Egan’s public comments on the issue of foom (I lack links, sorry), he appears to have a firm intuition that it’s impossible, grounded in hand-wavy “the halting problem is unsolvable”-style arguments. That intuition in turn seemingly forms the basis of his estimate that uFAI scenarios are “immensely unlikely”. With no defense on offer for his initial “cognitive universality” assumption, he takes the only remaining course of argumentation...
but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable
...derision.
This spring...
Egan, musing: Some people apparently find this irresistible
Greg Egan is...
Egan, screaming: The probabilities are approaching epsilon!!!
Above the Argument.
A recurring theme in Egan’s fiction is that “all minds face the same fundamental computing bottlenecks”, serving to establish the non-existence of large-scale intrinsic cognitive disparities.
This still allows for AIs to be millions of times faster than humans, to undergo rapid population explosions and reduced training/experimentation times through digital copying, to be superhumanly coordinated, to bring the average ability in each field up to peak levels (the best seen in any existing animal or machine, with obvious flaws repaired), etc. We know that human science can produce decisive tech and capacity gaps, and that growth rates can change enormously even on the same cognitive hardware (the Industrial Revolution).
I just don’t see how even extreme confidence in the impossibility of qualitative superintelligence rules out an explosion of AI capabilities.
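To put rough numbers on that (every figure below is invented purely for illustration, not a prediction anyone here is committed to), here’s a back-of-envelope sketch of how far speed and copying alone go, with zero qualitative improvement over human-level cognition:

```python
# Rough sketch: speed-up and copying alone, no "qualitatively smarter" minds.
# All numbers are illustrative placeholders, not predictions.

speedup = 1_000_000   # assumed subjective years per wall-clock year for one AI instance
copies = 10_000       # assumed number of independent researcher instances

# Subjective researcher-years delivered per calendar year:
researcher_years = speedup * copies
print(f"{researcher_years:.1e} researcher-years per calendar year")
# -> 1.0e+10, i.e. the equivalent of hundreds of millions of 40-year
#    human research careers compressed into every single calendar year,
#    before any qualitative superintelligence enters the picture.
```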
Agreed, thanks for bringing this up—I threw away what I had on the subject because I was having trouble expressing it clearly. Strangely, Egan occasionally depicts civilizations rendered inaccessible by sheer difference of computing speed, so he’s clearly aware of how much room is available at the bottom.