Do you have any reason to suppose that Charlie Stross has even considered SIAI’s claims?
Let’s all try not to confuse SF writers with futurists, and neither with researchers or engineers. Stories follow the rules of awesome, or they don’t sell well. There is a wonderful letter from Heinlein to a fan who asked why he wrote, and the top answer was: ‘to put food on the table’. It is probably online, but I could not find it at the moment. Comparing the work of the SIAI to any particular writer is like comparing the British navy with Jack London.
Heinlein also described himself as competing for his readers’ beer money.
This is kind of off-topic, but I think the prospects being depicted on LW etc. are more awesome than a lot of SF stories.
Stross’s views are simply crazy. See his “21st Century FAQ” and others’ critiques of it.
I do wonder why Ray Kurzweil isn’t more concerned about the risk of a bad Singularity. I’m guessing he must have heard SIAI’s claims, since he co-founded the Singularity Summit along with SIAI. Has anyone put the question to him?
Re: “I do wonder why Ray Kurzweil isn’t more concerned about the risk of a bad Singularity”
http://www.cio.com/article/29790/Ray_Kurzweil_on_the_Promise_and_Peril_of_Technology_in_the_21st_Century
I think “simply crazy” is overstating it, but it’s striking that he makes the same mistake that Wright and other critics make: SIAI’s work is focused on AI risks, while the critics focus on AI benefits. I assume this is because, rather than addressing what SIAI actually says, they’re addressing their own somewhat religion-like picture of it.
I got the sense that he is very pessimistic about the chance of controlling things if they do go FOOM. If he is that pessimistic and also believes that the advance of AI will be virtually impossible to stop, then forgetting about it will be as purposeful as worrying about it.
I think this is an accurate picture of Stross’ point.
...SIAI’s work is focused on AI risks, while the critics focus on AI benefits.
Well, I also try to focus on AI benefits. The critics fail because of broken models, not because of the choice of claims they try to address.
Crazy in which respect? It seemed to me that those critiques were narrow and mostly talking past Stross. The basic point that space is going to remain much more expensive and less pleasant than expansion on Earth for quite some time, conditioning on no major advances in AI, nanotechnology, biotechnology, etc., is perfectly reasonable. And Stross does so condition.
He has a few lines about it in The Singularity is Near, basically saying that FAI seems very hard (no foolproof solutions available, he says), but that AI will probably be well integrated. I don’t think he means “uploads come first, and manage AI after that,” as he predicts Turing-Test-passing AIs well before uploads, but he has said things suggesting that those Turing Tests will be incomplete, with the AIs not capable of doing original AI research. Or he may mean that the ramp-up in AI ability will be slow, and that IA will improve our ability to monitor and control AI systems institutionally, aided by non-FAI engineering of AI motivational systems and the like.
Look at his answer for The Singularity:
The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we’re very unlucky. If it happens and it’s interested in us, all our plans go out the window. If it doesn’t happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal’s Wager — in reverse — and plan on the assumption that it ain’t going to happen, much less save us from ourselves.
He doesn’t even consider the possibility of trying to nudge it in a good direction. It’s either “plan on the assumption that it ain’t going to happen”, or sit around waiting for AIs to save us.
ETA: The “He” in your second paragraph is Kurzweil, I presume?
That quote could also be interpreted as saying that UFAI is far more likely than FAI.
Thinking that FAI is extremely difficult or unlikely isn’t obviously crazy, but Stross isn’t just saying “don’t bother trying FAI” but rather “don’t bother trying anything with the aim of making a good Singularity more likely”. The first sentence of his answer, which I neglected to quote, is “Forget it.”
Pretty much how I read it. It should acknowledge the attempts to make a FAI, but it seems like a reasonable, pessimistic opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general.
Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn’t a similarly strong reason to assume a FAI can be built, and the argument for one seems to be more along the lines of things being likely to go pretty weird and bad for humans if one can’t be built but an AGI can.
If someone like me, who failed secondary school, can come up with such ideas before coming across the SIAI, I thought that someone who writes SF novels about the idea of a technological singularity might too. And you don’t have to link me to the post about ‘Generalizing From One Example’; I’m aware of it.
And Charles Stross was not the only person that I named, by the way. At least one of those people is a member of this site.
If you’re referring to Gary Drescher, I forwarded him a link to your post and asked him what his views of SIAI actually are. He said that he’s tied up for the next couple of days, but will reply by the weekend.
Great, thank you! I was thinking of asking some people to actually comment here.
I plan on asking Stross about this next time I visit Edinburgh, if he’s in town.
That would be great. I’d be excited to have as many opinions as possible about the SIAI from people who are not associated with it.
I wonder if we could get some experts to actually write an informed critique of the whole matter, not just some SF writers. Although I think Stross is probably as educated as EY.
What is Robin Hanson’s opinion about all this, does anyone know? Is he as worried about the issues in question? Is he donating to the SIAI?
Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don’t know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.
I don’t know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.
Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.
Thanks, this is the kind of informed (I believe, in Hanson’s case) contrarian third-party opinion about the main issues that I perceive to be missing.
Surely I could have found out about this myself. But if I were going to wait until I had first finished my studies of the basics, i.e. caught up on formal education, then read the relevant background information and afterwards all of LW, I might as well not donate to the SIAI at all for the next half decade.
Where is the summary of the kind that is available for other issues like climate change? The Talk.origins of existential risks, especially superhuman AI?
Robin Hanson said that he thought the probability of an AI being able to foom and destroy the world was about 1%. However, note that since this would be a 1% chance of destroying the world, he considers it reasonable to take precautions against this.
That’s AI built by a very small group fooming to take over the world at 1%, going from a millionth or less of the rest of the world economy to much larger very quickly. That doesn’t account for risk from AI built by large corporations or governments, Darwinian AI evolution destroying everything we value, an AI arms race leading to war (and accidental screwups), etc. His overall AI x-risk estimate (80% of which he says is from brain emulations) is higher: he says between 1% and 50%.
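To make the arithmetic behind those figures concrete, here is a minimal sketch (not anything Hanson has published) that splits a total AI existential-risk estimate into an emulation share and a non-emulation share. The 80% emulation share, the assumption that it applies uniformly across the 1%-50% range, and the function name are illustrative assumptions only.

```python
# Illustrative arithmetic only: splitting a total AI x-risk estimate into an
# emulation part and a non-emulation part. The 80% emulation share and the
# uniform application of it across the 1%-50% range are rough assumptions
# taken from the comment above, not Hanson's own calculation.

def split_ai_risk(total_risk: float, em_share: float = 0.8) -> dict:
    """Divide a total AI existential-risk estimate into emulation and other-AI parts."""
    return {
        "total": total_risk,
        "from_emulations": total_risk * em_share,
        "from_other_ai": total_risk * (1.0 - em_share),
    }

# Low and high ends of the reported 1%-50% range.
for bound in (0.01, 0.50):
    parts = split_ai_risk(bound)
    print(f"total {parts['total']:.0%}: "
          f"emulations {parts['from_emulations']:.1%}, "
          f"other AI {parts['from_other_ai']:.1%}")
```

On those (illustrative) assumptions, the non-emulation share of the risk would fall somewhere between roughly 0.2% and 10%.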
OK, this is getting bizarre now. You seem to be trying to recruit an anti-SIAI Legion of Doom… via a comment thread on LW.
Me looking for some form of peer review is deemed bizarre? It is not my desire to crush the SIAI, but to figure out what the right thing to do is.
You know what I would call bizarre? That someone writes in bold and all caps, calling someone an idiot, and afterwards bans his post. All of that based on ideas that themselves result from, and are based on, unsupported claims. That is what EY is doing, and I am trying to assess the credibility of such reactions.
EY is one of the smartest people on the planet, and this has been his life’s work for about 14 years. (He started SIAI in 2000.) By your own admission, you do not have the educational achievements necessary to evaluate his work, so it is not surprising that a small fraction of his public statements will seem bizarre to you: 14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.
Humans are designed (by natural selection) to mistrust statements at large inferential distances from what they already believe. Humans were not designed for a world (like the world of today) where there exists so much accurate knowledge of reality that no one can know it all, and people have to specialize. Part of the process of becoming educated is learning to ignore your natural human incredulity at statements at large inferential distances from what you already believe.
People have a natural human mistrust of attempts by insiders to stifle discussion and to hoard strategic pieces of knowledge, because in the past those things usually led to oppression or domination of outsiders by the insiders. I assure you that there is no danger of anything like that happening here. We cannot operate a society as complex and as filled with dangerous knowledge as ours is on the principle that everyone discusses everything in public. It is not always true that the more people are involved in a decision, the more correct and moral the decision will turn out to be. Some decisions do not work that way. We do not, for example, freely distribute knowledge of how to make nuclear weapons. It is almost a certainty that some group somewhere would make irresponsible and extremely destructive decisions with that knowledge.
About half of the regular readers of Less Wrong saw the deleted post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer’s decision to delete it. Anyone can become a regular reader of Less Wrong: one does not have to be accepted by Eliezer or SIAI or promise to be loyal to Eliezer or SIAI.
Can you even judge that without being as smart yourself? And how many people on the planet do you know? I know you likely just said this for other purposes, but I want to highlight the risk of believing him to be THAT smart and consequently believing what he is saying based on your belief that he is smart.
...you do not have the educational achievements necessary to evaluate his work...
That is right, or might be, as the evidence that I could evaluate seems to be missing.
...14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.
True, but in the case of evolution you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn’t far away; it isn’t buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI, it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.
About half of the regular readers of Less Wrong saw the deleted post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer’s decision to delete it.
And my guess is that not one of them could explain their reasoning in support of censoring ideas well enough to justify such a decision. They will just base their reasoning on arguments of unknown foundation previously made by EY.
I’d be surprised if he hasn’t at least come across the early arguments; he was active on the Extropy-Chat mailing list at the same time as Eliezer. I didn’t follow it closely enough to see if their paths crossed, though.