This is a separate debate, but I think you overestimate the ability of the general public, and of society at large, to be sane about existential risks, and about AI risks especially.
I don’t think it’s unreasonable to hope that society can eventually get to a point where being an existential risk researcher has status similar to that of a physics researcher.
There’s nothing intrinsically weird about the idea that “there are things that could cause the extinction of the human race, and it’s a good idea to have some people studying them and thinking about how to avoid them.” I think the reason that general artificial intelligence research has such a bad reputation is that it’s associated with a history of false alarms. By adopting a gradualist approach of getting more and more of the intellectual elite to think about existential risk, it should be possible to gradually change attitudes toward artificial intelligence research. I worry that SIAI might sound another “false alarm” or have institutional problems which would further damage the credibility of existential risk research.
My remark is related to the top level post. From your top level post it’s clear that at the moment there are very strong negative pressures against people studying existential risk. I wish there weren’t such pressures, but they’re there. It’s plausible to me that these pressures make it much more difficult for you to do existential risk research than it would be if existential risk research were more mainstream. It’s also plausible to me that there are people who have something in common with you but who are unable to bear these pressures and so are deterred from working with you.
For this reason, I think that the best way to facilitate existential risk research is to:
(a) Raise levels of public interest in making the world a better place. A very large majority of the people so influenced will not work toward or fund existential risk research, but a small percentage will.
(b) Get the educated public (the sorts of people who read semi-scholarly books) interested in existential risk.
(c) Get established scientific experts more interested in existential risk.
In order to accomplish (b) and (c), I think that it’s important for an existential risk organization to avoid any appearance of cultishness.
A fundamental problem is that there seems to be a strong positive correlation between having an interest in existential risk and having high-functioning Asperger’s syndrome, and a strong negative correlation between having high-functioning Asperger’s syndrome and having good marketing skills. I think this issue is the main reason that existential risk research has such low status relative to its importance. I’m not sure what can be done about this.
I think the problem is that the public is like a reinforcement learner, and won’t believe claims that are based on long chains of reasoning. Rather, the public and society at large tend to wait for the thing in question to actually happen, so that they have “proof”.
Physics is OK because it has repeatedly made novel and astounding predictions that were then proved correct, and because those predictions had important practical consequences. Though there are clear exceptions where dreadful public epistemology has impacted physics: the overreaction to the dangers of nuclear power being one.
I think there’s a fundamental point about how public epistemology works that I want to make here: the public operates like a dumb agent that is paranoid about not being tricked, and demands real physical proof of things even when the Bayesian probability with respect to a reasonable prior is already 99.9999…%. Widespread denial of evolution is one case: you can’t show someone an ape evolving into a human.
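To make that contrast concrete, here is a minimal Python sketch (my own toy model, with invented numbers, not anything from this thread) of the two update rules being described: a Bayesian agent that weighs each indirect argument, and a “proof-demanding” agent that, like a reinforcement learner, only updates when it directly observes the outcome.

```python
# Toy model (invented numbers) of the two update rules described above.
# The Bayesian agent updates on every indirect argument; the
# "public-like" agent waits for direct observation of the event itself.

def posterior(prior_odds, likelihood_ratios):
    """Multiply prior odds by each argument's likelihood ratio,
    then convert the resulting odds back to a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior_odds = 1 / 1000      # a skeptical prior: roughly 0.1% credence
arguments = [10.0] * 10    # ten independent arguments, each 10:1 in favor

bayesian = posterior(prior_odds, arguments)   # ~0.9999999
public = posterior(prior_odds, [])            # ~0.0009990: unmoved

print(f"Bayesian agent:        {bayesian:.7f}")
print(f"Proof-demanding agent: {public:.7f}")
# For an existential catastrophe, the direct observation that would
# finally move the proof-demanding agent is the one nobody survives.
```

The point of the toy numbers: a long chain of individually modest arguments can legitimately drive a posterior to 99.9999…%, yet an agent that discounts everything short of physical proof never moves off its prior.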
the public operates like a dumb agent that is paranoid about not being tricked
Good point! Perhaps part of the problem is that the public has been subjected to at least two millennia of warnings of existential risks by the clergy. That’s long enough, and the false alarms have been frequent enough and intense enough, that perhaps we have even genetically evolved some extra skepticism about them.
But do we (i.e. the human race in general) have any more skepticism about such claims than we used to? Most people still do believe in religions that include some form of eschatology.
It might just be that scientific talk about existential risk looks like a competing meme to religious people (you’re not allowed to believe in something that says the world won’t end the way your religion says it will), while non-religious people may tend to file discussion of global catastrophe under the genre of apocalyptic religion.
(Then again, global warming doesn’t seem to have that problem, so maybe it’s just a marketing issue...)
A fundamental problem is that there seems to be a strong positive correlation between having an interest in existential risk and having high-functioning Asperger’s syndrome, and a strong negative correlation between having high-functioning Asperger’s syndrome and having good marketing skills.
Couldn’t this be corrected by hiring a marketing firm? Someone with high-functioning Asperger’s can see that the link from “hiring a marketing firm” to “getting the public to believe nearly anything” is very strong and very reliable. It takes only a few tens of millions of dollars to convince the public to commit to billions of dollars in near-future losses (e.g., the tobacco industry, carbon polluters, election drives).
This may not be desirable, but it is a fact, and if a rational agent wants to win then s/he should accept the fact and design around it.
Another problem I want to mention: getting “established scientific experts” to take existential risk seriously is impeded by the fact that academia has no mechanism for assessing the value of information. Academics are rewarded based on how true the information they generate is, not on a combination of how true it is and how important it is. So we have more papers on dung beetle reproduction than on human extinction.
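As a toy illustration of that incentive gap (mine, with invented numbers), compare rewarding researchers on how likely their findings are to hold up against rewarding them on that likelihood weighted by the stakes, a crude value-of-information proxy:

```python
# Toy illustration (invented numbers) of truth-only versus
# truth-times-importance scoring of research topics.

topics = {
    # topic: (prob. the findings hold up, importance if they do)
    "dung beetle reproduction": (0.95, 1.0),
    "human extinction risk":    (0.30, 1_000_000.0),
}

by_truth = max(topics, key=lambda t: topics[t][0])
by_value = max(topics, key=lambda t: topics[t][0] * topics[t][1])

print("Rewarded on truth alone:       ", by_truth)   # dung beetles win
print("Rewarded on truth x importance:", by_value)   # extinction wins
```

Under the truth-only score, the safe, highly confirmable topic dominates, which is exactly the dung-beetles-over-extinction outcome described above.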
Furthermore, academia is paranoid about losing the trust of that same dumb public, so it has to adhere to the public’s standard of demanding real physical proof for outlandish claims, rather than reasoning probabilistically about them using long, complex, and somewhat subjective arguments.
Lastly, to complicate things even more, academia is chaos. Nobody is in charge. It is inherently conservative and slow to change, even when there is real physical proof that it is mistaken: most bad theories are buried along with their owners, years after they have been shown to have a minuscule Bayesian probability.
Now there are a few academics at Oxford University doing x-risk research. But growing that community to thousands of researchers is going to be either very expensive and quite slow, or free and glacially slow.
From your top level post it’s clear that at the moment there are very strong negative pressures against people studying existential risk. I wish there weren’t such pressures, but they’re there. It’s plausible to me that these pressures make it much more difficult for you to do existential risk research than it would be if existential risk research were more mainstream.
I would phrase this differently. Certain types of existential risks (nuclear war, asteroid impacts) seem to be studied in the mainstream. Perhaps the study of AGI-related existential risks is the key area pushed out of the mainstream?