The Charlie Stross example seems less than ideal. Much of what Stross has written touches upon, or deals intensely with, issues connected to runaway AI. For example, the central premise of “Singularity Sky” involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story “Antibodies” focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story whose central premise is that von Neumann and Turing proved that P=NP and that the entire Cold War was actually a way of keeping lots of weapons online, ready to nuke any rogue AIs.
Note also that you mention Greg Egan, who has also written fiction in which rogue AIs and bad nanotech make things very unpleasant (see, for example, “Crystal Nights”).
As to the other people you mention, and why they aren’t very worried about the possibilities that Eliezer takes seriously: at least one person on your list (Kurzweil) is an incredible optimist and not much of a rationalist, so it seems extremely unlikely that he would ever become convinced that any risk was of high likelihood unless the evidence for it were close to overwhelming.
MWI: I’ve read this sequence, and Eliezer makes one of the strongest cases for Many-Worlds that I’ve seen. However, I know that there are a lot of people who have thought about this issue, have much more physics background, and have not reached this conclusion. I’m therefore extremely uncertain about MWI. So what should one do if one doesn’t know much about this? In this case, the answer is pretty easy, since MWI doesn’t alter actual behavior much (unless you intend to engage in quantum suicide or the like). So figuring out whether Eliezer is correct about MWI should not be a high priority, except insofar as it provides a possible data point for deciding whether Eliezer is correct about other things.
Advanced real-world molecular nanotechnology—Of the points you bring up, this one seems to me the most unlikely to be correct. There are a lot of technical barriers to grey goo, and most of the people actually working with nanotech don’t seem to see that sort of scenario as very likely. But that doesn’t mean there aren’t many other things molecular nanotech could do that would make life very unpleasant for us. Here, Eliezer is far from the only person worried: see, for example, this article, which is a few years out of date but does show that there is serious worry in this regard among academics and governments.
Runaway AI/AI going FOOM—This is potentially the most interesting of your points, simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out whether this is correct? One thing to do is examine the arguments and claims being made in detail, and see what other experts think on the subject. In this context, most AI people seem to consider this an unlikely problem, so maybe look at what they have to say? Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago, but I can’t find the link right now; if someone else can track it down I’d appreciate it). One thing to note is that estimates for nanotech can substantially impact the chance of an AI going FOOM. If cheap, easy nanotech exists, then an AI may be able to improve its hardware at a very fast rate. If, however, such nanotech does not exist, then an AI will be limited to self-improvement primarily through improving its software, which might be much more constrained. See this subthread, where I bring up some of the possible barriers to software improvement and, by the end of it, become substantially more convinced by cousin_it that the barriers to escalating software improvement may be small.
What about the other Bayesians out there? Are they simply not as literate in the maths as Eliezer Yudkowsky, or do they somehow teach their own methods of reasoning and decision making but not use them?
Note that even practiced Bayesians are far from perfect rationalists. If one hasn’t thought about an issue, or even considered that something is possible, there’s not much one can do about it. Moreover, a fair number of people who self-identify as Bayesian rationalists aren’t very rational, and the set of people who do self-identify as such is pretty small.
Maybe after a few years of study I’ll know more. But right now, if I were forced to choose between the future over the present, the SIAI, or having some fun, I’d have some fun.
Given your data set, this seems reasonable to me. Frankly, if I were to give money to or otherwise support the SIAI, I would do so primarily because I think the Singularity Summits are clearly helpful in getting lots of smart people together, and that this is true even if one assigns a low probability to any Singularity-type event occurring in the next 50 years.
FOOM Debate