I think those problems weren’t caused by too much intelligence, but by too little. Yes, intelligence enables these problems to form in the first place: these entities wouldn’t be creating problems if they weren’t volitional agents with intelligence. But that seems like a cop-out of a complaint. Without intelligence there wouldn’t be any problems, sure, but there also wouldn’t be anything positive; there would be no concepts whatsoever.
Pollution is a great example: it was intelligent thought that allowed us to start building machines that pollute. Intelligence allowed us to realize we could save money by trashing the environment, trading away its well-being.
More intelligence recognizes that this is still a value trade-off, that you aren’t getting something for nothing; depending on the rate at which you do it, you can seriously damage yourself and the people around you. You have to weigh the costs against the benefits, and if the benefit is ‘some money’ and the cost is ‘destroying the world’, the intelligent choice becomes clear. To continue acting for the money isn’t intelligence; it’s insanity, overpowering greed.
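To make that weighing concrete, here’s a toy expected-value comparison in Python; every number in it is invented for illustration, not an estimate of anything:

```python
# Toy expected-value comparison; all figures are invented for illustration.
def expected_value(benefit, cost, p_catastrophe):
    """A certain benefit weighed against a catastrophic cost that occurs with some probability."""
    return benefit - p_catastrophe * cost

money = 1_000_000   # hypothetical payoff: 'some money'
world = 10**15      # hypothetical stand-in for the value of not destroying the world
print(expected_value(money, world, p_catastrophe=0.001))  # deeply negative
```

Even with a tiny assumed probability of catastrophe, the expected value comes out hugely negative, which is the whole point: a certain small gain doesn’t justify risking an astronomically larger loss.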
The Cuban Missile Crisis may have been caused by intelligence building the structures that led up to it, but the solution wasn’t to make everyone dumber so they couldn’t build that kind of thing; that just reduces overall utility. The solution is to act intelligently in ways that don’t destroy the world.
I see your point about moral intelligence being considered separately, though; I hadn’t thought of it in that context. It’s a more elegant package to wrap everything up together, but not always the right thing to do. Thanks for the reply.
So you agree that yes, intelligence is continually generating “extra problems” for us to deal with. As you point out, many of the most pressing problems in the modern world are unforeseen consequences of useful technologies. You just believe that increases in human intelligence will invariably outpace the destructive power of the problems, whereas I don’t.
The premise of this diary was many Earths, so I’d submit that there are surely many Earths on which the problem of nuclear warfare outpaced humanity’s capacity to deal with it intelligently, and that in the end we could very well share their fate.
I’ll also note that I fail to see how anyone could conclude from what I’ve written above that my prescription for humanity is stupid pills.
I agree completely. If the solutions cannot outpace the intelligence-generated problems, total destruction awaits.
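Here’s a rough sketch of that race in Python; the starting levels and growth rates are pure assumptions, chosen only to show the dynamic:

```python
# Toy race between problem severity and solution capacity.
# Starting levels and growth rates are assumptions, not estimates.
problems, solutions = 1.0, 10.0
problem_growth, solution_growth = 1.10, 1.05  # per-period compounding

for period in range(200):
    problems *= problem_growth
    solutions *= solution_growth
    if problems > solutions:
        print(f"solutions overrun in period {period}")
        break
else:
    print("solutions kept pace for the whole run")
```

With these made-up rates, even a tenfold head start for solutions is overrun in about fifty periods; whenever problems compound faster, the head start only delays the crossover.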
I apologize if the stupid-pill characterization feels wrong; I was just trying to think of a viable alternative to increasing intelligence.
I’m glad we’ve hashed this out. I think the bias toward casting technology in a messianic or apocalyptic role has largely been overlooked on this site, so I was glad to see this entry of Eliezer’s.
Regardless of whether or not they’re true, I tend to think that arguments about the arc of history and the like are profoundly counterproductive. People won’t vote if they think it’s a landslide, whether for their guy or against him. I suspect I differ from others on this site in this respect, but I find it hard to get ginned up about cosmic endeavors, simply because they seem so remote from my experience.
And I don’t think we need an alternative! What I was trying to point out from the start was that increasing our predictive ability is necessary but not sufficient to save the world. Entirely selfish, entirely rational actors will doom the planet if we let them.
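That last point is basically the tragedy of the commons. A toy payoff sketch in Python, with made-up numbers, shows why each selfish, rational actor exploits no matter what the other does:

```python
# Two-player commons game with illustrative payoffs (row player, column player).
# Mutual exploitation is worse for both than mutual restraint, yet each
# actor's selfishly rational move is to exploit regardless of the other.
payoffs = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "exploit"):  (0, 4),
    ("exploit",  "restrain"): (4, 0),
    ("exploit",  "exploit"):  (1, 1),
}

def best_response(opponent_move):
    """Pick the move that maximizes my own payoff against a fixed opponent move."""
    return max(("restrain", "exploit"),
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

print(best_response("restrain"), best_response("exploit"))  # exploit exploit
```

Exploiting strictly dominates restraint for each actor, so two perfectly rational egoists land on the mutually worse outcome; better prediction alone doesn’t fix that, which is why rationality is necessary but not sufficient.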