I’m particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are driven by ideological priorities. I don’t see why we should expect, necessarily or even with high probability, that ideologues will be less inclined to go to war if they are smarter.
War won’t be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.
Yes, that makes sense, but in context I don’t think that’s what was meant, since Tim is one of the people here who is more skeptical of that sort of result.
Tim on “one big organism”:
http://alife.co.uk/essays/one_big_organism/
http://alife.co.uk/essays/self_directed_evolution/
http://alife.co.uk/essays/the_second_superintelligence/
Thanks for clarifying (here and in the other remark).