Intelligence is also not the solution to all other problems we face.
Not all of them—most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones—and so on. It probably won’t fix the speed of light limit, though.
What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
Violence has been declining on (pretty much) every timescale: see Steven Pinker’s talk “The Myth of Violence”. I think one could argue that this is because of the greater collective intelligence of the human race.
I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don’t see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.
War won’t be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.
Yes, that makes sense, but in context I don’t think that’s what was meant, since Tim is one of the people here who is more skeptical of that sort of result.
How can you think any of these problems can be solved by intelligence when none of them have been solved?
War has already been solved to some extent by intelligence (negotiations and diplomacy have significantly decreased the incidence of war); hunger has been solved in large chunks of the world by intelligence; energy limits have been solved several times by intelligence; resource shortages ditto; intelligence has made a good first attempt at space travel (the Moon is quite far away); and intelligence has made huge strides towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc.).
Many wars are due to ideological priorities.
This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.
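To make that concrete, here is a toy sketch of the framing (the ideologies, their demands, and the budget are all invented for illustration): allocate a fixed budget of a contested resource so that total satisfaction is maximized, found here by brute-force search.

```python
# Toy sketch: "give as many ideologies as much of what they want as
# possible" posed as a small optimization problem. All names and
# numbers are invented for illustration.

from itertools import product

BUDGET = 10  # total units of the contested resource

# How much of the resource each ideology demands.
demands = {"A": 6, "B": 5, "C": 4}

def satisfaction(ideology, amount):
    """Fraction of its demand an ideology receives, capped at 1.0."""
    return min(amount / demands[ideology], 1.0)

best_alloc, best_score = None, -1.0
# Brute-force search over every integer allocation within the budget.
for alloc in product(range(BUDGET + 1), repeat=len(demands)):
    if sum(alloc) > BUDGET:
        continue  # infeasible: hands out more than exists
    score = sum(satisfaction(k, a) for k, a in zip(demands, alloc))
    if score > best_score:
        best_alloc, best_score = alloc, score

print(dict(zip(demands, best_alloc)), round(best_score, 2))
```

A real negotiation would need a proper solver and a much richer objective, but once the conflict is posed this way, finding good allocations is exactly the kind of search problem that more intelligence handles better.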
I have my doubts about war, although I don’t think most wars really come down to conflicts of terminal values. I’d hope not, anyway.
However, as for the rest: if they’re solvable at all, intelligence ought to be able to solve them. “Solvable” means there exists a way to solve them. Intelligence is, to a large degree, simply “finding ways to get what you want”.
Do you think energy limits really couldn’t be solved by simply producing, through thought alone, working designs for safe and efficient fusion power plants?
ETA: ah, perhaps replace “intelligence” with “sufficient intelligence”. We haven’t solved all these problems already in part because we’re not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we would obviously achieve it faster.
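As a minimal illustration of intelligence as “finding ways to get what you want”: a breadth-first planner searching a toy state space (the two-jug puzzle below is my own example, not anything from this thread) for any sequence of actions that reaches a goal.

```python
# A minimal sketch of "finding ways to get what you want": breadth-first
# search over states for an action sequence that reaches a goal.
# The two-jug puzzle is an invented example for illustration.

from collections import deque

CAPACITY = (3, 5)  # jug sizes (invented)
GOAL = 4           # want exactly 4 units in either jug

def neighbors(state):
    a, b = state
    yield (CAPACITY[0], b)      # fill jug 0
    yield (a, CAPACITY[1])      # fill jug 1
    yield (0, b)                # empty jug 0
    yield (a, 0)                # empty jug 1
    pour = min(a, CAPACITY[1] - b)
    yield (a - pour, b + pour)  # pour jug 0 into jug 1
    pour = min(b, CAPACITY[0] - a)
    yield (a + pour, b - pour)  # pour jug 1 into jug 0

def plan(start=(0, 0)):
    """Return a shortest sequence of states from start to a goal state."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no way to get what you want with these actions

print(plan())  # e.g. [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```

The puzzle itself is trivial; the point is the shape of the computation: a goal, a set of actions, and a search for a path from here to there. More intelligence, on this view, mostly means searching larger spaces faster and more cleverly.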
As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)
But I’m interested in the context you imply (of humans becoming more intelligent).
My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else.
I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)
So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that’s true, everyone else gets to decide whether we’d rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).
So, yes, if we choose to keep incentivizing wars, then we’ll keep getting wars. But we’d only make that choice if removing the incentives cost us more than the wars themselves do, in which case war is the least important problem we face, and we should be OK with that.
Conversely, if it turns out that war really is an important problem to solve, then I’d expect fewer wars.
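To restate the incentive argument as a sketch (the strategies and payoffs below are invented, purely for illustration): an instrumental warmonger just takes the argmax over strategies, so shifting the payoffs shifts the behavior.

```python
# Toy sketch of instrumental warmongering: the agent picks whichever
# strategy best advances its goal. Strategies and payoffs are invented.

def choose_strategy(effectiveness):
    """A (sufficiently smart) agent picks the most effective strategy."""
    return max(effectiveness, key=effectiveness.get)

# Hypothetical payoffs: how well each strategy spreads the ideology.
payoffs = {"start_war": 0.6, "trade": 0.4, "proselytize": 0.3}
print(choose_strategy(payoffs))  # -> start_war

# "Modify the system": sanctions make war costlier, trade more rewarding.
payoffs["start_war"] -= 0.3
payoffs["trade"] += 0.3
print(choose_strategy(payoffs))  # -> trade
```

On this model, making ideologues smarter only makes the argmax more reliable; what changes the output is changing the payoffs.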
Tim on “one big organism”:
http://alife.co.uk/essays/one_big_organism/
http://alife.co.uk/essays/self_directed_evolution/
http://alife.co.uk/essays/the_second_superintelligence/
Thanks for clarifying (here and in the other remark).
I was about to reply—but jimrandomh said most of what I was going to say already—though he did so using that dreadful “singleton” terminology, spit.
I was also going to say that the internet should have got the 2010 Nobel peace prize.