Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.
…
You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.
This is a perfect example of where the ‘outside view’ can go wrong. Even the most basic ‘inside view’ of the topic would make it overwhelmingly obvious why the “75% certain of death by AI” folks could be allied with (or be the same people as!) the “solve all problems through AI” group. Splitting the two positions prematurely and building a simple model of political opposition out of them is just naive.
I personally guess >= 75% for AI death and also advocate FAI research. Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly. Never mind the even longer term, which would probably hold undesirable outcomes even if humanity did manage to artificially stunt its own progress in that manner.
I don’t think James Hughes would present or believe in that particular low-quality analysis himself either, if he didn’t feel that SIAI is an organization competing with his IEET for popularity within the transhumanist subculture.
So that statement is probably mostly about using “divide and conquer” against transhumanists/singularitarians who are currently more popular within the transhumanist subculture than he is.
Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly.
This reminds me of Charles Stross’ “why even try—AI can’t be stopped” (my paraphrase).
Surely if it buys a little extra time for a FAI singleton to succeed, a desperate struggle to suppress other lines of dangerously-near-culmination AI research would seem incumbent on us. I guess this might be one of those scary (and sufficiently distant) things nobody wants to advertise.
If you estimate a high chance of this action destroying humanity, then trying to get through that bottleneck with the roughly 25% chance of surviving that a 75% death estimate implies is almost certainly better than trying to stamp out such research and buying a few years in exchange for replacing that 75% with a near certainty. The only argument against that I can see, if one accepts the 75% number, is that a forced delay until we have uploads might help matters: uploads would have moral systems close to those of their original humans, and uploads would have a better chance at solving the FAI problem or, if not solving it, at counteracting any unFriendly or unfriendly AI.
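A rough sketch of the comparison being made here, with purely illustrative numbers (only the 75% figure comes from the thread; every other number is an assumption made up for the sake of the example):

```python
# Illustrative sketch only: apart from the 75% figure quoted in the thread,
# every number here is an assumption, not a claim made by any commenter.

p_death_now = 0.75                 # the thread's ">= 75% for AI death" guess
p_survive_now = 1 - p_death_now    # ~25% chance of getting through the bottleneck

# Under a ban: some assumed chance of going extinct another way during the delay,
# and an assumed "near certainty" of failure once AI is eventually built anyway.
p_other_death_during_ban = 0.05    # assumed
p_survive_after_ban = 0.02         # assumed

p_survive_with_ban = (1 - p_other_death_during_ban) * p_survive_after_ban

print(f"Attempt FAI now:          P(survive) = {p_survive_now:.2f}")
print(f"Ban first, attempt later: P(survive) = {p_survive_with_ban:.3f}")
```

On those assumptions, the ban only looks attractive if the delay raises the eventual survival chance (say, via uploads) by more than the delay itself costs.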
AI research is hard. It’s not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years. Communication, collaboration, recruitment, funding… all of these would be much more difficult. What’s more, since current AI researchers are open about their work, they would be the easiest to track after a ban, so any new AI research would have to come from green researchers.
That aside, I agree that a ban whose goal is simply indefinite postponement of AGI is unlikely to work (and I’m dubious of any ban in general). Still, it isn’t hard for me to imagine that a ban could buy us 10 years, and that a similar amount of political might could also greatly accelerate an upload project.
The biggest argument against, in my opinion, is that the only way the political will could be formed is if the threat of AGI was already so imminent that a ban really would be worse than worthless.
AI research is hard. It’s not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years.
The other thing to consider is just what the ban would achieve. I would expect it to lower the 75% chance by giving us the opportunity to go extinct in another way before making a mistake with AI. When I say ‘extinct’ I include (d)evolving to an equilibrium (such as the equilibria Robin Hanson describes from time to time).
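To spell that point out with made-up numbers (again, only the 75% figure comes from the thread): a ban can lower the probability of dying by AI simply by interposing other ways to go extinct first, without lowering the total risk.

```python
# Made-up numbers to illustrate the point above; only the 75% figure is from the thread.
p_ai_death_if_attempted = 0.75        # the thread's figure
p_other_extinction_during_ban = 0.20  # assumed risk accumulated while a ban holds

# With a ban, death by AI requires first surviving the delay:
p_death_by_ai_with_ban = (1 - p_other_extinction_during_ban) * p_ai_death_if_attempted
p_any_extinction_with_ban = p_other_extinction_during_ban + p_death_by_ai_with_ban

print(f"P(death by AI): {p_ai_death_if_attempted:.2f} without ban, {p_death_by_ai_with_ban:.2f} with ban")
print(f"P(extinct at all) with ban: {p_any_extinction_with_ban:.2f}")
```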
How well defined is AI research? My assumption is that if AI is reasonably possible for humans to create, then it’s going to become much easier as computers become more powerful and human minds and brains become better understood.
A ban seems highly implausible to me. What is the case for considering it? Do you really think that enough people will become convinced that there is a significant danger?
I agree, it seems highly implausible to me as well. However, the subject at hand (AI, AGI, FAI, uploads, etc) is riddled with extremes, so I’m hesitant to throw out any possibility simply because it would be incredibly difficult.
Do you really think that enough people will become convinced that there is a significant danger?
See the last line of the comment you responded to.
Splitting the two positions prematurely and building a simple model of political opposition out of them is just naive.
(The guy also uses ‘schizophrenic’ incorrectly.)
I don’t think James Hughes would present or believe in that particular low-quality analysis himself either, if he didn’t feel that SIAI is an organization competing with his IEET for popularity within the transhumanist subculture.
James Hughes seems like a fine fellow to me—and his SIAI disagreements seem fairly genuine. It is much of the rest of IEET that is the problem.
What does “75% certain of death by AI” mean?
Greater folly than letting happen something with greater than a 75% chance of destroying the human race?