This is a good discussion. I see this whole issue as a power struggle, and I don’t consider the Singularity Institute to be more benevolent than anyone else just because Eliezer Yudkowsky has written a paper about “CEV” (whatever that is—I kept falling asleep when I tried to read it, and couldn’t make heads or tails of it in any case).
The megalomania of the SIAI crowd in claiming that they are the world-savers would worry me if I thought they might actually pull something off. For the sake of my peace of mind, I have formed an organization which is pursuing an AI world domination agenda of our own. At some point we might even write a paper explaining why our approach is the only ethically defensible means to save humanity from extermination. My working hypothesis is that AGI will be similar to nuclear weapons, in that it will be the culmination of a global power struggle (which has already started). Crazy old world, isn’t it?
The megalomania of the SIAI crowd in claiming that they are the world-savers would worry me if I thought they might actually pull something off.
I also think they look rather ineffectual from the outside. On the other hand, they apparently keep much of their actual research secret, reputedly for fear that it will be used to do bad things, which makes them something of an unknown quantity.
I am pretty sceptical about them getting very far with their projects, but they certainly make for an interesting sociological phenomenon in the meantime!