If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decline, but existential risks would also drop drastically.
Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back, while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned, and rational people.
If you decreased the intelligence of everyone to 100 IQ points or lower, that would probably eliminate all hope for a permanent escape from existential risk. Risk in this scenario might be lower per time unit in the near future, but total risk over all time would approach 100%.
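To spell out the arithmetic behind that claim (a minimal sketch, assuming for illustration a constant per-period extinction probability p > 0):

\[
P(\text{survival after } t \text{ periods}) = (1-p)^t \to 0 \quad (t \to \infty),
\qquad
P(\text{eventual extinction}) = 1 - \lim_{t \to \infty} (1-p)^t = 1.
\]

Lowering p only slows the decay; total risk still tends to 100% unless p itself can be driven toward zero over time, which is exactly the “permanent escape” at stake here.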
Consider a world without nuclear weapons. What would there be to prevent World War I ad infinitum? As a male of conscriptable age, I would consider such a scenario so bad as to be not much better than global thermonuclear war.
Why do you think it’s the nuclear weapons that keep the current peace, and not the memory of past wars and, more generally and more recently, cultural and moral progress? This is related to your prediction in the resource-depletion scenario.
The list of wars by death toll is very interesting.
There’s little evidence for the theory that the threat of global thermonuclear war creates global peace.
Even during the world wars, the percentage of people who died of violence seems to have been vastly smaller than in typical hunter-gatherer societies.
There were long periods of peace before, most notably 1815-1914, when military technology was essentially equivalent to that of World War I. Before that, the 18th century was relatively bloodless too.
One of the top ten deadliest wars happened just a few years ago. So even accepting the premise that the thermonuclear threat prevents war, either we face wide proliferation, or the deterrent won’t really do much to stop wars.
One of the countries with massive nuclear weapons stockpiles (the Soviet Union) suffered total collapse. This might happen again: in the near future most likely to Pakistan or North Korea, but in the longer term to any country.
Countries with nuclear weapons have engaged in plenty of conventional wars, mostly on a smaller scale, and have fought each other by proxy.
I had exactly the same thought.
Also, on a more pragmatic and personal level, increasing average human intelligence increases the probability of immortality and other “surprisingly good” outcomes of humans or other intelligences optimizing our world, such as universal beauty, health, happiness, and better quality of life. This needn’t be through superintelligence; it could just be through the correlation between intelligence and wealth production.
That’s a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.
I don’t see why this being an epistemic probe makes risk per near-future time unit more relevant than total risk integrated over time.
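To make “total risk integrated over time” concrete (again a sketch, treating h(t) as an assumed instantaneous extinction hazard rate):

\[
P(\text{survival forever}) = \exp\!\left(-\int_0^{\infty} h(t)\,dt\right),
\]

so total risk stays below 100% only if the integral converges, i.e. only if the hazard rate falls off fast enough over time. A one-off reduction that leaves h(t) bounded away from zero does not change the limit.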
The whole thing is kind of academic, because under any realistic policy specific groups would be made smarter than others, and the risk effects depend on which groups those are.