Thanks for the feedback.
Regarding (1): I looked up historical information on the development of the atomic bomb. According to The Manhattan Project: Making the Atomic Bomb:

Not surprisingly, Ernest Rutherford, Albert Einstein, and Niels Bohr regarded particle bombardment as useful in furthering knowledge of nuclear physics but believed it unlikely to meet public expectations of harnessing the power of the atom for practical purposes anytime in the near future. In a 1933 interview Rutherford called such expectations “moonshine.” Einstein compared particle bombardment with shooting in the dark at scarce birds, while Bohr, the Danish Nobel laureate, agreed that the chances of taming atomic energy were remote.
This information has caused me to update my beliefs about how heavily to weight expert opinion on the likelihood that a given hypothetical technology will be developed. I plan to read more about the history of the atomic bomb and think about this matter some more. Note, however:
(a) The point that I make in the above article that there is a selection effect whereby people disproportionately notice those scientific speculations that actually pan out.
(b) XiXiDu’s response below, in which he (correctly) infers my view that SIAI’s research agenda looks to be too broad and general to tackle the Friendly AI problem effectively.
Of these two points, the first doesn’t seem so significant to me (small probability; high expected return); point (b) seems much more significant, because in the absence of a compelling research agenda, Friendly AI research seems to me to have a vanishingly small probability of success. Now, I don’t think that Friendly AI research by humans will inevitably have a vanishingly small probability of success; I could imagine developments suddenly making the problem look tractable, as happened with the atomic bomb. I’m pointing out that the problem seems to require much finer taskification than the SIAI research program sets out in order to be tractable.
Regarding (5): Supposing that the SIAI staff and/or donors decide that Friendly AI research has low utilitarian expected value, I could easily imagine SIAI restructuring or rebranding to work toward activities with higher utilitarian expected value.
Eliezer has done a lot to spread rationality through his upcoming rationality book, his popular Harry Potter fanfiction, and his creation of Less Wrong. I’ve heard that the SIAI visiting fellows program has done a good job of building a community of people of high intellectual caliber devoted to existential risk reduction.
My understanding is that many of the recent papers (whether published or in progress) by Carl Shulman, Anna Salamon, Steve Rayhawk, and Peter de Blanc, as well as the SIAI Uncertain Future software application, fall under the heading of advocacy/forecasting rather than Friendly AI research.
If I make a top-level post about this subject that’s critical of Friendly AI research, I’ll be sure to point out the many positive contributions of SIAI staff that fall outside the domain of Friendly AI research.