This might be an opportunity to use one of those Debate Tools, to see if one of them can be useful for mapping the disagreement.
I would like to have a short summary of where various people stand on the various issues.
The people:
Eliezer
Ben
Robin Hanson
Nick Bostrom
Ray Kurzweil?
Other academic AGI types?
Other vocal people on the net, like Tim Tyler?
The issues:
How likely is a human-level AI to go FOOM?
How likely is an AGI developed without “friendliness theory” to have values incompatible with those of humans?
How easy is it to make an AGI (really frickin’ hard, or really really really frickin’ hard?)?
How likely is it that Ben Goertzel’s “toddler AGI” would succeed, if he gets funding etc.?
How likely is it that Ben Goertzel’s “toddler AGI” would be dangerous, if he succeeded?
How likely is it that some group will develop an AGI before 2050? (Or more generally, estimated timelines of AGI)
Add Nick Bostrom to the list.
Also, what exactly is Bostrom’s take on AI? The OP says Bostrom disagrees with Eliezer. Could someone provide a link or reference for that? I read most of Bostrom’s papers some time ago, and at the moment I can’t recall any such disagreement.
I think Nick was near Anders: roughly a 20% existential risk conditional on AI being developed by 2100, and roughly a 50% chance of AI by 2100. That makes AI the most likely known x-risk, although unknown x-risks get a big chunk of his probability mass.
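To make the arithmetic behind “most likely known x-risk” explicit, here is a back-of-the-envelope sketch; the 50% and 20% figures are just the rough numbers quoted above, and simply multiplying them is an illustrative assumption, not anything Bostrom has endorsed:

```python
# Back-of-the-envelope arithmetic for the figures quoted above (illustrative only).
p_ai_by_2100 = 0.5          # rough probability that AI is developed by 2100
p_xrisk_given_ai = 0.2      # rough existential risk, conditional on AI by 2100

p_xrisk_from_ai = p_ai_by_2100 * p_xrisk_given_ai
print(p_xrisk_from_ai)      # 0.1 -> roughly a 10% unconditional x-risk from AI
```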
If we are constructing a survey of AI-singularity thinking here, I would like to know more about the opinions of Hugo de Garis. And what is Bill Joy thinking these days?
If we are trying to estimate probabilities and effect multipliers, I would like to consider the following question: Consider the projected trajectory of human technological progress without AGI assistance. For example: controlled fusion by 2140, human lifespan doubles by 2200, self-sustaining human presence on asteroids and/or Jovian satellites by 2260, etc. How much would that rate of progress be sped up if we had the assistance of AGIs with 10x human speed and memory capacity? 100x? 1000x?
I conjecture that these speed-ups would be much less than people here seem to expect, and that the speed-up difference between 100x and 100,000x would be small. Intelligence may be much less important than many people think.
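To make that conjecture concrete, here is a minimal toy model. The `thinking_fraction` parameter and the milestone figure are illustrative assumptions, not anything from the thread; the point is only that if much of a project is not thinking-limited, enormous cognitive speed-ups buy surprisingly little calendar time:

```python
# Toy Amdahl's-law-style model of cognitive speed-up (illustrative assumptions only).
def accelerated_years(baseline_years, speedup, thinking_fraction=0.3):
    """Calendar time if only the 'thinking' part of a project speeds up.

    Assumes the rest (experiments, construction, politics) runs at the old pace.
    """
    thinking = baseline_years * thinking_fraction / speedup
    everything_else = baseline_years * (1 - thinking_fraction)
    return thinking + everything_else

# E.g. a milestone 130 years out (controlled fusion by 2140, from the question above):
for speedup in (10, 100, 1000, 100_000):
    print(f"{speedup:>7}x -> {accelerated_years(130, speedup):.1f} years")
# 10x already lands near 95 years, and 100x vs 100,000x differ by well under a year,
# which is the shape of the commenter's conjecture.
```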
A recent update from Hugo here. He has retired, but says he has one more book on machine intelligence to go.
Thx. From that interview:

Interviewer: So what’s your take on Ben Goertzel’s Cosmism, as expressed in “A Cosmist Manifesto”?

de Garis: Ben and I have essentially the same vision, i.e. that it’s the destiny of humanity to serve as the stepping-stone towards the creation of artilects. Where we differ is on the political front. I don’t share his optimism that the rise of the artilects will be peaceful. I think it will be extremely violent — an artilect war, killing billions of people.
Hmmm. I’m afraid I don’t share Goertzel’s optimism either. But then I don’t buy into that “destiny” stuff, either. We don’t have to destroy ourselves and the planet in this way. It is definitely not impossible, but super-human AGI is also not inevitable.
I’d be curious to hear from EY, and the rest of the “anti-death” brigade here, what they think of de Garis’s prognosis and whether and how they think an “artilect war” can be avoided.
I’m not sure that’s where the burden of proof should fall. Has de Garis justified his claim? It sounds more like storytelling than inferential forecasting to me.
I really like your comments and wish you would make some top level posts and also contact me online. Could you please do so?
Where shall I contact you?
I haven’t read his book, etc., but I suspect that “storytelling” might be a reasonable characterization. On the other hand, my “I’d be curious” was hardly an attempt to create a burden of proof.
I do personally believe that convincing mankind that an FAI singularity is desirable will be a difficult task, and that many sane individuals might consider a unilateral and secret decision to FOOM as a casus belli. What would you do as Israeli PM if you received intelligence that an Iranian AI project would likely go FOOM sometime within the next two months?
It’s just silly. Luddites have never had much power, and they aren’t usually very warlike.
Instead, we will see expanded environmental and green movements, more anti-GM activism, demands to tax the techno-rich more, and so on.
De Garis was just doing much the same thing that SIAI is doing now: making a song and dance about THE END OF THE WORLD in order to attract attention to himself, and so attract funding, so he could afford to get on with building his machines.
I don’t think you can say. Different things will accelerate at different rates. For example, a dog won’t build a moon rocket in a million years—but if you make it 10 times smarter, it might do that pretty quickly.