Stross’s views are simply crazy. See his “21st Century FAQ” and others’ critiques of it.
I do wonder why Ray Kurzweil isn’t more concerned about the risk of a bad Singularity. I’m guessing he must have heard SIAI’s claims, since he co-founded the Singularity Summit along with SIAI. Has anyone put the question to him?
Re: “I do wonder why Ray Kurzweil isn’t more concerned about the risk of a bad Singularity”
http://www.cio.com/article/29790/Ray_Kurzweil_on_the_Promise_and_Peril_of_Technology_in_the_21st_Century
I think “simply crazy” is overstating it, but it’s striking that he makes the same mistake that Wright and other critics make: SIAI’s work is focussed on AI risks, while the critics focus on AI benefits. This, I assume, is because rather than addressing what SIAI actually says, they’re addressing their somewhat religion-like picture of it.
I got the sense that he is very pessimistic about the chance of controlling things if they do go FOOM. If he is that pessimistic and also believes that the advance of AI will be virtually impossible to stop, then forgetting about it will be as purposeful as worrying about it.
I think this is an accurate picture of Stross’ point.
Well, I also try to focus on AI benefits. The critics fail because of broken models, not because of the choice of claims they try to address.
Crazy in which respect? It seemed to me that those critiques were narrow and mostly talking past Stross. The basic point that space colonization is going to remain much more expensive and less pleasant than expansion on Earth for quite some time, conditional on no major advances in AI, nanotechnology, biotechnology, etc., is perfectly reasonable. And Stross does so condition.
He has a few lines about it in The Singularity is Near, basically saying that FAI seems very hard (no foolproof solutions available, he says), but that AI will probably be well integrated. I don’t think he means “uploads come first, and manage AI after that,” since he predicts Turing-Test-passing AIs well before uploads, but he has said things suggesting that those Turing Tests will be incomplete, with the AIs not capable of doing original AI research. Or he may mean that the ramp-up in AI ability will be slow, and that IA will improve our ability to monitor and control AI systems institutionally, aided by non-FAI engineering of AI motivational systems and the like.
Look at his answer for The Singularity:

The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we’re very unlucky. If it happens and it’s interested in us, all our plans go out the window. If it doesn’t happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal’s Wager — in reverse — and plan on the assumption that it ain’t going to happen, much less save us from ourselves.
He doesn’t even consider the possibility of trying to nudge it in a good direction. It’s either “plan on the assumption that it ain’t going to happen”, or sit around waiting for AIs to save us.
ETA: The “He” in your second paragraph is Kurzweil, I presume?
That quote could also be interpreted as saying that UFAI is far more likely than FAI.
Thinking that FAI is extremely difficult or unlikely isn’t obviously crazy, but Stross isn’t just saying “don’t bother trying FAI” but rather “don’t bother trying anything with the aim of making a good Singularity more likely”. The first sentence of his answer, which I neglected to quote, is “Forget it.”
Pretty much how I read it. His answer should acknowledge the attempts to make a FAI, but it seems like a reasonable, if pessimistic, opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general.
Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn’t a similarly strong reason to assume a FAI can be built, and the argument for one seems to be more along the lines of things being likely to go pretty weird and bad for humans if one can’t be built but an AGI can.