Interviewer: So what’s your take on Ben Goertzel’s Cosmism, as expressed in “A Cosmist Manifesto”?
de Garis: Ben and I have essentially the same vision, i.e. that it’s the destiny of humanity to serve as the stepping-stone towards the creation of artilects. Where we differ is on the political front. I don’t share his optimism that the rise of the artilects will be peaceful. I think it will be extremely violent — an artilect war, killing billions of people.
Thx. From that interview:
Hmmm. I’m afraid I don’t share Goertzel’s optimism either. But then I don’t buy into that “destiny” stuff, either. We don’t have to destroy ourselves and the planet in this way. It is definitely not impossible, but super-human AGI is also not inevitable.
I’d be curious to hear from EY, and the rest of the “anti-death” brigade here, what they think of de Garis’s prognosis and whether and how they think an “artilect war” can be avoided.
I’m not sure that’s where the burden of proof should fall. Has de Garis justified his claim? It sounds more like storytelling than inferential forecasting to me.
I really like your comments and wish you would make some top level posts and also contact me online. Could you please do so?
Where shall I contact you?
I haven’t read his book, etc., but I suspect that “storytelling” might be a reasonable characterization. On the other hand, my “I’d be curious” was hardly an attempt to create a burden of proof.
I do personally believe that convincing mankind that an FAI singularity is desirable will be a difficult task, and that many sane individuals might consider a unilateral and secret decision to FOOM as a casus belli. What would you do as Israeli PM if you received intelligence that an Iranian AI project would likely go FOOM sometime within the next two months?
It’s just silly. Luddites have never had much power—and aren’t usually very warlike.
Instead, we will see expanded environmental and green movements, more anti-GM activism—demands to tax the techno-rich more—and so on.
De Garis was just doing much the same thing that SIAI is doing now—making a song and dance about THE END OF THE WORLD—in order to attract attention to himself, and so attract funding, so he could afford to get on with building his machines.