It was interesting to see the really negative comment from (presumably the real) Greg Egan:
The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceed the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable, and it’s a shame you’ve given it so much air time.
Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I’ve been pretty disappointed with his take on AI and the existential risks crowd.
A recurring theme in Egan’s fiction is that “all minds face the same fundamental computing bottlenecks”, serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story, the kind that need only be plausible (e.g., “an asteroid is on course to hit us”), and didn’t think much more about it.
But from what I recall of Egan’s public comments on the issue of foom (I lack links, sorry), he appears to have a firm intuition that it’s impossible, grounded in handwaving “halting problem is unsolvable”-style arguments. This in turn seemingly forms the basis of his estimation of uFAI scenarios as “immensely unlikely”. With no defense on offer for his initial “cognitive universality” assumption, he takes the only remaining course of argumentation...
but frankly, their attempt to prescribe what rational altruists should be doing with their time and money is just laughable
...derision.
This spring...
Egan, musing: Some people apparently find this irresistible
Greg Egan is...
Egan, screaming: The probabilities are approaching epsilon!!!
Above the Argument.
A recurring theme in Egan’s fiction is that “all minds face the same fundamental computing bottlenecks”, serving to establish the non-existence of large-scale intrinsic cognitive disparities.
This still allows for AIs to be millions of times faster than humans, undergo rapid population explosion and reduced training/experimentation times through digital copying, be superhumanly coordinated, bring up the average ability in each field to peak levels (as seen in any existing animal or machine, with obvious flaws repaired), etc. We know that human science can produce decisive tech and capacity gaps, and growth rates can change enormously even using the same cognitive hardware (Industrial Revolution).
I just don’t see how even extreme confidence in the impossibility of qualitative superintelligence rules out an explosion of AI capabilities.
Agreed, thanks for bringing this up—I threw away what I had on the subject because I was having trouble expressing it clearly. Strangely, Egan occasionally depicts civilizations rendered inaccessible by sheer difference of computing speed, so he’s clearly aware of how much room is available at the bottom.
Previous arguments by Egan:
http://metamagician3000.blogspot.com/2009/09/interview-with-greg-egan.html
Sept. 2009, from an interview in Aurealis.
http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html
From April 2008. Only in the last few comments does Egan actually express an argument for the key intuition that has been driving the entire rest of his reasoning.
(To my eyes, this intuition of Egan’s refers to a completely irrelevant hypothetical, in which humans somehow, magically and reliably, are always able to acquire and make appropriate use of whatever insentient software tools are required, at any given moment, in order to maintain hypothetical strategic parity with any contemporary AIs.)
You may want to quote a bit from it. That page has 185 comments, and the last ones by Egan seemed fairly innocuous to me.
Greg Egan’s view was discussed here a few months ago.
I think Greg Egan makes an important point there that I have mentioned before and John Baez seems to agree:
I agree that multiplying a very large cost or benefit by a very small probability to calculate the expected utility of some action is a highly unstable way to make decisions.
Actually, this was what I had in mind when I voiced my first attempt at criticizing the whole endeavour of friendly AI; I just didn’t know what exactly was causing my uneasiness.
I am still confused about it, but I think it isn’t much of a problem as long as friendly AI research is not funded at the expense of work on other risks whose case rests more thoroughly on empirical evidence than on merely logically valid arguments.
To be clear, as I wrote in the post above, I think that there are very strong arguments in support of friendly AI research. I believe that it is currently the most important cause one could support, but I also think that there is a limit to what one should do in the name of mere logical implications. Therefore I partly agree with Greg Egan.
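A minimal sketch of the instability Baez describes, using purely invented numbers: once the payoff is set astronomically high, the expected value is dominated by a probability estimate nobody can pin down, so shifting that guess by a few orders of magnitude quietly changes which cause looks most important.

    # Purely illustrative numbers, not anyone's actual estimates.
    payoff = 1e15  # assumed astronomical benefit, in arbitrary units
    for p in (1e-6, 1e-9, 1e-12):
        # expected value = probability * payoff
        print(f"p = {p:.0e}  ->  expected value = {payoff * p:,.0f}")
    # Three probabilities that all feel "tiny" yield expected values of
    # 1,000,000,000 vs 1,000,000 vs 1,000: the ranking of causes turns
    # entirely on which unmeasurable guess you happen to write down.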
ETA
There’s now another comment by Greg Egan:
All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments. So I wouldn’t hold your breath waiting for that to be settled. If he wants to live his own life based on his own hunches, that’s fine, but I see no reason for anyone else to take his land-grabs on terms like “rationality” and “altruism” at all seriously, merely because it’s not currently possible to provide mathematically rigorous proofs that his assignments of probabilities to various scenarios are incorrect. There’s an almost limitless supply of people who believe that their ideas are of Earth-shattering importance, and that it’s incumbent on the rest of the world to either follow them or spend their life proving them wrong.
But clearly you’re showing no signs of throwing in productive work to devote your life to “Friendly AI” — or of selling a kidney in order to fund other people’s research in that area — so I should probably just breathe a sigh of relief, shut up and go back to my day job, until I have enough free time myself to contribute something useful to the Azimuth Project, get involved in refugee support again, or do any of the other “Rare Disease for Cute Kitten” activities on which the fate of all sentient life in the universe conspicuously does not hinge.
Surely not … Does Greg Egan understand how “a small chance every year” can build into “almost certain by this date”? Because that was convincing for me:
I can easily see humans building work-arounds or stop-gaps for most major problems, and continuing business mostly as usual. We run out of fossil fuels, so we get over our distrust of nuclear energy because it’s the only way. We don’t slow environmental damage enough, so agriculture suffers, so we get over our distrust of genetically modified plants because it’s the only way. And so on.
Then some article somewhere reminded me that business as usual includes repeated attempts at artificial intelligence. And runaway AI is not something we can build a work-around for; given a long enough timespan and faith in human ingenuity, we’ll push through all the other non-instant-game-over events until we finally succeed at making the game end instantly.
If independent.
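A minimal sketch of how the compounding works, and of why the “If independent.” caveat matters, assuming a purely illustrative 1% chance per year and treating the years as independent trials:

    # Hypothetical 1% chance of the event in any given year, with years
    # treated as independent trials (the "If independent." caveat).
    p_per_year = 0.01
    for years in (10, 50, 100, 200):
        p_by_then = 1 - (1 - p_per_year) ** years
        print(f"within {years:>3} years: {p_by_then:.0%}")
    # Roughly 10%, 39%, 63%, 87%: a small annual chance does build up,
    # but only toward "almost certain" on long horizons, and only if the
    # yearly chances don't shrink as workarounds or precautions improve.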
Yep, Egan created Yudkowsky and Overcoming Bias/LessWrong stand-ins for mockery in his most recent novel, Zendegi. There was a Less Wrong discussion at the time.