Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don’t know his current estimate. Also, some might differ from Robin in how they value a Darwinian “burning the cosmic commons” outcome.
I don’t know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.
Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that, after spending some more time thinking, he would wind up giving a probability in that range.
Thanks, this is the kind of informed (I believe, in Hanson’s case) contrarian third-party opinion about the main issues that I perceive to be missing.
Surely I could have found this out myself. But if I were going to wait until I had first finished studying the basics, i.e. caught up on formal education, then read the relevant background material, and afterwards all of LW, I might as well not donate to the SIAI at all for the next half decade.
Where is the kind of summary that is available for other issues like climate change? The Talk.origins of existential risks, especially superhuman AI?