Great, thank you! I was thinking of asking some people to actually comment here.
I plan on asking Stross about this next time I visit Edinburgh, if he’s in town.
That would be great. I’d be excited to have as many opinions as possible about the SIAI from people who are not associated with it.
I wonder if we could get some experts to actually write an informed critique about the whole matter, not just some SF writers. Although I think Stross is probably as educated as EY.
What is Robin Hanson’s opinion about all this, does anyone know? Is he as worried about the issues in question? Is he donating to the SIAI?
Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don’t know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.
I don’t know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.
Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.
Thanks, this is the kind of informed (I believe, in Hanson’s case) contrarian third-party opinion about the main issues that I perceive to be missing.
Surely I could have found out about this myself. But if I were going to wait until I had first finished my studies of the basics (i.e. caught up on formal education), then read the relevant background information, and afterwards all of LW, I might as well not donate to the SIAI at all for the next half decade.
Where is the summary that is available for other issues like climate change? The Talk.origins of existential risks, especially superhuman AI?
Robin Hanson said that he thought the probability of an AI being able to foom and destroy the world was about 1%. However, note that since this would be a 1% chance of destroying the world, he considers it reasonable to take precautions against this.
That’s AI built by a very small group fooming to take over the world at 1%, going from a millionth or less of the rest of the world economy to much larger very quickly. That doesn’t account for risk from AI built by large corporations or governments, Darwinian AI evolution destroying everything we value, an AI arms race leading to war (and accidental screwups), etc. His overall AI x-risk estimate (80% of which he says comes from brain emulations) is higher: he says between 1% and 50%.
OK, this is getting bizarre now. You seem to be trying to recruit an anti-SIAI Legion of Doom… via a comment thread on LW.
My looking for some form of peer review is deemed to be bizarre? It is not my desire to crush the SIAI but to figure out what the right thing to do is.
You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards bans his post, all based on ideas that themselves rest on unsupported claims. That is what EY is doing, and I am trying to assess the credibility of such reactions.
EY is one of the smartest people on the planet, and this has been his life’s work for about 14 years. (He started SIAI in 2000.) By your own admission, you do not have the educational achievements necessary to evaluate his work, so it is not surprising that a small fraction of his public statements will seem bizarre to you: 14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at a very great inferential distance from any of your beliefs.
Humans are designed (by natural selection) to mistrust statements at large inferential distances from what they already believe. Humans were not designed for a world (like the world of today) where there exists so much accurate knowledge of reality that no one can know it all and people have to specialize. Part of the process of becoming educated is learning to ignore your natural human incredulity at statements at large inferential distances from what you already believe.
People have a natural human mistrust of attempts by insiders to stifle discussion and to hoard strategic pieces of knowledge, because in the past those things usually led to oppression or domination of outsiders by the insiders. I assure you that there is no danger of anything like that happening here. We cannot operate a society as complex and as filled with dangerous knowledge as ours on the principle that everyone discusses everything in public. It is not always true that the more people are involved in a decision, the more correct and moral the decision will turn out to be. Some decisions do not work that way. We do not, for example, freely distribute knowledge of how to make nuclear weapons; it is almost a certainty that some group somewhere would make irresponsible and extremely destructive decisions with that knowledge.
About half of the regular readers of Less Wrong saw the deleted post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer’s decision to delete it. Anyone can become a regular reader of Less Wrong: one does not have to be accepted by Eliezer or SIAI or promise to be loyal to Eliezer or SIAI.
Can you even judge that without being as smart yourself? And how many people on the planet do you know? I know you likely just said this for other purposes, but I want to highlight the risk of believing him to be THAT smart and consequently believing what he is saying based on your belief that he is smart.
...you do not have the educational achievements necessary to evaluate his work...
That is right, or it might be, as the evidence that I could evaluate seems to be missing.
...14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.
True, but in the case of evolution you are more likely to be able to follow the chain of subsequent conclusions. With evolution the evidence isn’t far away; it isn’t buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.
About half of the regular readers of Less Wrong saw the deleted post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer’s decision to delete it.
And my guess is that not one of them could explain the reasoning behind supporting such censorship of ideas well enough to justify that decision. They will just base their reasoning on arguments of unknown foundation previously made by EY.