Here is an interesting comment related to this idea:
What I find a continuing source of amazement is that there is a subculture of people, half of whom believe that AI will lead to the solving of all mankind’s problems (which we might call Kurzweilian S^) and the other half of whom are more or less certain (75% certain) that it will lead to annihilation. Let’s call the latter the SIAI S^.
Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.
And instead of waging a desperate politico-military struggle to stop all this suicidal AI research, you cheerlead for it, and focus your efforts at risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.
You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.
But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling. -- James J. Hughes (existential.ieet.org mailing list, 2010-07-11)
Also reminds me of this:
It is impossible for a rational person to both believe in an imminent rise in sea levels and purchase ocean-front property.
It is reported that former Vice President Al Gore just purchased a villa in Montecito, California for $8.875 million. The exact address is not revealed, but Montecito is a relatively narrow strip bordering the Pacific Ocean, so its minimum elevation above sea level is 0 feet, while its overall elevation is variously reported at 50ft and 180ft. At the same time, Mr. Gore prominently sponsors a campaign and an award-winning movie warning that, due to Global Warming, we can expect to see nearby ocean-front locations, such as San Francisco, largely under water. The elevation of San Francisco is variously reported from 52ft up to a high of 925ft.
I’ve highlighted the same idea before, by the way:
Ask yourself, wouldn’t you fly a plane into a tower if that was the only way to disable Skynet? The differences between religion and the risk of uFAI make it even more dangerous: this crowd is actually highly intelligent, and their incentive is based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? All the more so since, in this case, the very same people who believe it are the ones who think they must act themselves, because their God doesn’t even exist yet.
And instead of waging a desperate politico-military struggle to stop all this suicidal AI research, you cheerlead for it, and focus your efforts at risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.
This is one of those good critiques of SIAI strategy that no one ever seems to make. I don’t know why. More good critiques would be awesome. Voted up.
I don’t really know the SIAI people, but I have the impression that they’re not against AI at all. Sure, an unfriendly AI would be awful—but a friendly one would be awesome. And they probably think AI is inevitable, anyway.
This is true as far as it goes; however if you actually visit SIAI, you may find significantly more worry about UFAI in the short term than you would have expected just from reading Eliezer Yudkowsky’s writings.
I think that you interacted most with a pretty uncharacteristically biased sample of characters: most of the long-term SIAI folk have longer timelines than good ol’ me and Justin by about 15-20 years. That said, it’s true that everyone is still pretty worried about AI-soon, no matter the probability.
Well, 15-20 years doesn’t strike me as that much of a time difference, actually. But in any case I was really talking about my surprise at the amount of emphasis on “preventing UFAI” as opposed to “creating FAI”. Do you suppose that’s also reflective of a biased sample?
Well, 15-20 years doesn’t strike me as that much of a time difference, actually.
Really? I mean, relative to your estimate it might not be big, but absolutely speaking, doom 15 years versus doom 35 years seems to make a huge difference in expected utility.
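(Editorial aside: a minimal toy calculation, under assumptions that are purely illustrative and not taken from the thread, shows why the absolute difference can matter. Assume each year lived is worth one util, no discounting, and doom arriving with probability p at a single known year; the 75% figure is borrowed from the opening quote only as a placeholder.)

```python
# Toy expected-utility sketch (illustrative assumptions only, not any
# commenter's actual model): value each year lived at 1 util, no discounting,
# doom with probability p_doom at year doom_year, otherwise survival to `horizon`.

def expected_utility(p_doom: float, doom_year: float, horizon: float = 100.0) -> float:
    """E[years lived] = p_doom * doom_year + (1 - p_doom) * horizon."""
    return p_doom * doom_year + (1 - p_doom) * horizon

p = 0.75                          # the "75% certain" figure from the opening quote, used as a placeholder
eu_15 = expected_utility(p, 15)   # doom in 15 years -> 36.25 expected years
eu_35 = expected_utility(p, 35)   # doom in 35 years -> 51.25 expected years
print(eu_35 - eu_15)              # 15.0: the later date buys p * 20 extra expected years
```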
Do you suppose that’s also reflective of a biased sample?
Probably insofar as Eliezer and Marcello weren’t around: FAI and the Visiting Fellows intersect at decision theory only. But the more direct (and potentially dangerous) AGI stuff isn’t openly discussed for obvious reasons.
relative to your estimate it might not be big, but absolutely speaking, doom 15 years versus doom 35 years seems to make a huge difference in expected utility.
A good point. By the way, I should mention that I updated my estimate after it was pointed out to me that other folks’ estimates were taking Outside View considerations into account, and after I learned I had been overestimating the information-theoretic complexity of existing minds. FOOM before 2100 looks significantly more likely to me now than it did before.
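(Another editorial aside: one common way to fold Outside View considerations into an inside-view estimate is to pool the two probabilities in log-odds space. The sketch below is only a mechanical illustration; the 10% and 40% inputs and the equal weighting are placeholders, not anyone’s actual numbers.)

```python
import math

# Illustrative log-odds pooling of an inside-view and an outside-view estimate.
# All numbers are placeholders; nothing here comes from the thread itself.

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pool(p_inside: float, p_outside: float, w_outside: float = 0.5) -> float:
    """Weighted average of the two estimates in log-odds space."""
    return inv_logit((1 - w_outside) * logit(p_inside) + w_outside * logit(p_outside))

# e.g. a hypothetical 10% inside-view probability of FOOM before 2100,
# pooled with a hypothetical 40% outside-view estimate:
print(round(pool(0.10, 0.40), 3))  # 0.214 -- the blended estimate roughly doubles the inside view
```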
Probably insofar as Eliezer and Marcello weren’t around: FAI and the Visiting Fellows intersect at decision theory only.
Well I didn’t expect that AGI technicalities would be discussed openly, of course. What I’m thinking of is Eliezer’s attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem, versus the apparent fear among some other people that AGI might be cobbled together more or less haphazardly, even in the near term.
Eliezer’s attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem
Huh. I didn’t get that from the sequences, perhaps I should check again. It always seemed to me as if he saw AGI as really frickin’ hard but not excessively so, whereas Friendliness is the Impossible Problem made up of smaller but also impossible problems.
I don’t really know the SIAI people, but I have the impression that they’re not against AI at all. Sure, an unfriendly AI would be awful—but a friendly one would be awesome.
True. I know the SIAI people pretty well (I’m kind of one of them) and can confirm they agree. But they’re pretty heavily against uFAI development, which is what I thought XiXiDu’s quote was talking about.
And they probably think AI is inevitable, anyway.
Well… hopefully not, in a sense. SIAI’s working to improve widespread knowledge of the need for Friendliness among AGI researchers. It’s inevitable (barring a global catastrophe), but they’re hoping to make FAI more inevitable than uFAI.
As someone who volunteered for SIAI at the Singularity Summit, a critique of SIAI could be to ask why we’re letting people who aren’t concerned about uFAI speak at our conferences and affiliate with our memes. I think there are good answers to that critique, but the critique itself is a pretty reasonable one. Most complaints about SIAI are comparatively maddeningly irrational (in my own estimation).
A stronger criticism, I think, is to ask why the only mention of Friendliness at the Summit was some very veiled hints in Eliezer’s speech. Again, I think there are good reasons, but not good reasons that a lot of people know, so I don’t understand why people bring up other criticisms before this one.
This was meant as a critique too. But people here seem not to believe what they preach, or they would take their position to its logical extreme.
This seems to me a good strategy for SIAI people to persuade K-type people to join them.