increased wealth, which then funds and supports people who specialize in researching risks
Would increased average wealth help risk-fighters more than risk-creators? It’s not obvious to me either way. What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
I believe Eliezer has made a strong case that Moore’s Law, for example, mostly benefits the risk-producers:
“Every 18 months, the minimum IQ to destroy the world drops by one point.”
That’s not obvious to me, and even if it were I don’t take a utilitarian perspective.
If you think there is underinvestment in risk-fighting, you have to come up with arguments that don’t rely on a utilitarian perspective, since most people don’t take that perspective when making decisions. Alternatively, you can try to find ways of increasing investment that don’t rely on persuading large numbers of people.
The claim that utilitarianism implies one should do things with permanent effects comes from the future being much bigger than the present, while the probability of affecting it is smaller but not nearly proportionally smaller.
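The arithmetic behind that claim can be sketched with a toy expected-value comparison. Every number below is an invented illustration, not an estimate:

```python
# Illustrative only: expected value of acting on the present vs. acting
# on a much larger future, under a utilitarian calculus.
present_value = 1.0            # value at stake in the present (normalized)
future_value = 1e6             # assume the future is a million times bigger
p_affect_present = 0.5         # chance an action changes the present
p_affect_future = 1e-3         # much smaller, but not a million times smaller

ev_present = p_affect_present * present_value   # 0.5
ev_future = p_affect_future * future_value      # 1000.0

# Because the probability shrinks far more slowly than the stakes grow,
# the future-directed action dominates in expectation.
assert ev_future > ev_present
print(ev_present, ev_future)
```

The conclusion survives large changes to the made-up inputs: it only requires the probability penalty to be smaller than the size advantage of the future.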
I agree with your second paragraph.
Even granting that, it’s not obvious to me that society is underinvesting in risk-fighting. Many of the suggestions for countering global warming, for example, imply reduced economic growth. It is not obvious to me that the risks of catastrophic global warming outweigh the expected losses from reduced growth from a utilitarian perspective. Any investment in risk-fighting carries an opportunity cost in the form of a foregone investment in some other area. The right choice from a utilitarian perspective depends on judgements of expected risk vs. the expected benefits of alternative courses of action. I think the best choices are far from obvious.
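The opportunity-cost point can be made concrete with a toy comparison of spending a fixed budget on mitigation vs. growth. All numbers are placeholder assumptions, not estimates of any actual risk:

```python
# Toy opportunity-cost comparison; every input is a placeholder assumption.
p_catastrophe = 0.01          # assumed baseline chance of catastrophe
loss_if_catastrophe = 1e4     # assumed loss (arbitrary utility units)
risk_reduction = 0.2          # assume mitigation cuts the probability by 20%
growth_benefit = 25.0         # assumed expected gain from growth instead

# Expected value of mitigation: 0.01 * 0.2 * 10000 = 20.0
ev_mitigation = p_catastrophe * risk_reduction * loss_if_catastrophe

# With these numbers growth wins; nudge any input and mitigation wins.
# The choice hinges entirely on contested judgements of the inputs.
best = "mitigation" if ev_mitigation > growth_benefit else "growth"
print(ev_mitigation, growth_benefit, best)
```

The point of the sketch is not which option wins, but that the answer flips with small changes to inputs nobody can measure precisely, which is why the best choices are far from obvious.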
Wholly agree on global warming; the best reference I know of on extreme predictions is this. I’m thinking more of future technologies (the self-replicating and/or intelligent kind), but also of building up the general intellectual background and institutions to deal rationally with unknown unknowns.
The assumption being made here is that actions taken with the intent of reducing existential risk will actually have the effect of reducing it rather than increasing it. This assumption seems sadly unlikely to be correct.
“Actions taken with the intent to prevent event X make event X less likely” is going to be my default belief unless there’s some strong evidence to the contrary.
Or, more particularly: “Actions taken after carefully asking what the evidence implies about the most effective means of making X less likely, and then following out the means with best expected value, make event X less likely”.
mattnewport’s counterexamples are good, but they are examples of what happens when “intent to reduce X” is filtered through a political system that incentivizes the appearance that something will be done, that penalizes public acknowledgement of unpleasant truths, and that does not understand science. There is reason to suppose we can do better—at least, there’s reason to assign a high enough probability to “we may be able to do better” for it to be clearly worth the costs of investigating particular Xs.
There is reason to hope we can do better, but a sobering lack of evidence that such hope is realistic. That’s not a reason not to try, but it seems we can agree that mere intent is far from sufficient.
Even supposing that it is possible to devise a course of action that we have good reason to believe will be effective, there is still a huge gulf to cross when it comes to putting that into action given current political realities.
This depends partly on what sort of “course of action” is devised, and how many people are needed to put it into action. Francis Bacon’s successful spread of the scientific method, Louis Pasteur’s germ theory, Ignaz Semmelweis convincing doctors to wash their hands between childbirths, the invention of the printing press, and the invention of modern fertilizers sufficient to keep larger parts of the world fed… provide historical precedents for the idea that small groups of good thinkers can sometimes have predictably positive impacts on the world without extensively and directly engaging global politics/elections/etc.
[I’d edited my previous comment just before mattnewport wrote this; I’d previously left my comment at “There is reason to suppose we can do better”, then had decided that that was overstating the evidence and added the “—at least...”. mattnewport probably wrote this in response to the previous version; my apologies.]
As to evaluating the evidence: does anyone know where we can find data as to whether relatively well-researched charities do tend to improve poverty or other problems to which they turn their attention?
givewell.net
Alcohol prohibition, drug prohibition, the criminalization of prostitution, banking regulations designed to reduce bank failures due to excessive risk taking, bailing out automakers to prevent bankruptcy, policies designed to prevent terrorist attacks such as torturing prisoners… All are examples of actions taken with the intent to prevent X which have quite a lot of evidence to suggest that they did not make X less likely.
To what Matt said, I will add: actions taken with the intent of preventing harmful event X often have no effect on X but greatly contribute to equally harmful event Y.