It’s not obvious that the best way to reduce existential risk is to work on the problem directly. Imagine if every farmer put down his plow and came to the university to do artificial intelligence research. Everyone would starve. It may well be that someone’s best contribution is to continue to write billing software for health insurance, because that helps keep society running, which causes increased wealth, which then funds and supports people who specialize in researching risks, among other fields.
I suspect that actually, only a small percentage of people, even of people here, could usefully learn the political truths relevant to existential risk mitigation via the kind of discussion you are proposing. Very few people are in a position to cause political change. The marginal utility gain for the average person from learning the truth on a political matter is practically zero, due to his lack of influence on the political process. The many arguments against voting apply to the question of seeking political truth as well, and even more strongly, because ascertaining political truths is harder than voting.
Most interest in politics is IMO similar to interest in sports or movies. It’s fun, and it offers an opportunity to show off a bit, gives something to talk and socialize about, helps people form communities and define their interests. But beyond these kinds of social goals, there is no true value.
Most of the belief that one is in a position where knowing political truths is important is likely to be self-deception. We see ourselves as potentially more important and influential than we are ever likely to become. This kind of bias has been widely documented in many fields.
To me, politics is not so much the mind-killer as the mind-seducer. It leads us to believe that our opinions matter; it makes us feel proud and important. But it’s all a lie. Politics is a waste of time and should be viewed simply as a form of entertainment. Entertainment can be good, and we all need a break from serious work; politics may be as valid a form of recreation as any other, but we should recognize it as such and not inflate its importance.
The set of people seriously working to reduce existential risks is very small (perhaps a few hundred, depending on who and how you count). This gives strong general reason to suppose that the marginal impact of an individual can be large, in cases where the individual aims to reduce existential risks directly and is strategic/sane/rational about how (and not in cases where the individual simply goes about their business as one of billions in the larger economy).
Many LW readers are capable of understanding that there are risks, thinking through the differential impact their donations would have on different kinds of risk mitigation, and donating money in a manner that would help. Fewer, but still many, are also capable of improving the quality of thought regarding existential risks in relevant communities (e.g., in the academic departments where they study or work, or on LW or other portions of the blogosphere). And while I agree with Hal’s point that most politics is used as entertainment, there is reason to suppose that improving the quality of discussion of a very-high-impact, under-researched, tiny-numbers-of-people-currently-involved topic like existential risks can improve both (a) the well-directedness of resources like mine that are already being put toward existential risks, and (b) the amount of such resources, in dollars and in brainpower.
increased wealth, which then funds and supports people who specialize in researching risks
Would increased average wealth help risk-fighters more than risk-creators? It’s not obvious to me either way. What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
What does seem obvious is that from a utilitarian perspective society is hugely underinvesting in risk-fighting and everything else with permanent effects.
That’s not obvious to me, and even if it were I don’t take a utilitarian perspective.
If you think there is underinvestment in risk fighting, you have to come up with arguments that don’t rely on a utilitarian perspective, since most people don’t take that perspective when making decisions. Or you can try to find ways of increasing investment that don’t rely on persuading large numbers of people.
The claim that utilitarianism implies one should do things with permanent effects comes from the future being much bigger than the present, while the probability of affecting it is smaller, but not nearly proportionally smaller.
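A toy expected-value calculation makes the shape of this argument concrete. Every number below is a made-up assumption chosen purely for illustration; none of them comes from the discussion itself:

```python
# Illustrative expected-value sketch; all figures are arbitrary assumptions.

present_value = 1.0        # value at stake in the present (normalized)
future_value = 1e6         # the future is vastly bigger than the present

p_affect_present = 1e-2    # an individual's chance of affecting the present
p_affect_future = 1e-5     # smaller, but not a million times smaller

ev_present = present_value * p_affect_present
ev_future = future_value * p_affect_future

# Because the probability of influence shrinks far less than proportionally
# to how much the future outweighs the present, the future term dominates.
assert ev_future > ev_present
```

Under these (arbitrary) numbers, the expected value of trying to affect the future exceeds that of affecting the present by a factor of a thousand, which is the structure the argument relies on.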
Even granting that, it’s not obvious to me that society is underinvesting in risk fighting. Many of the suggestions for countering global warming, for example, imply reduced economic growth. It is not obvious to me that the risks of catastrophic global warming outweigh the expected losses from reduced growth from a utilitarian perspective. Any investment in risk fighting carries an opportunity cost in a foregone investment in some other area. The right choice from a utilitarian perspective depends on judgements of expected risk vs. the expected benefits of alternative courses of action. I think the best choices are far from obvious.
Wholly agree on global warming; the best reference I know of on extreme predictions is this. I’m thinking more of future technologies (the self-replicating and/or intelligent kind), but also of building up the general intellectual background and institutions to deal rationally with unknown unknowns.
The assumption being made here is that actions taken with the intent of reducing existential risk will actually have the effect of reducing it rather than increasing it. This assumption seems sadly unlikely to be correct.
“Actions taken with the intent to prevent event X make event X less likely” is going to be my default belief unless there’s some strong evidence to the contrary.
Or, more particularly: “Actions taken after carefully asking what the evidence implies about the most effective means of making X less likely, and then following out the means with best expected value, make event X less likely”.
mattnewport’s counterexamples are good, but they are examples of what happens when “intent to reduce X” is filtered through a political system that incentivizes the appearance that something will be done, that penalizes public acknowledgement of unpleasant truths, and that does not understand science. There is reason to suppose we can do better—at least, there’s reason to assign a high enough probability to “we may be able to do better” for it to be clearly worth the costs of investigating particular issues X.
There is reason to hope we can do better but a sobering lack of evidence that such hope is realistic. That’s not a reason not to try but it seems we can agree that mere intent is far from sufficient.
Even supposing that it is possible to devise a course of action that we have good reason to believe will be effective, there is still a huge gulf to cross when it comes to putting that into action given current political realities.
This depends partly on what sort of “course of action” is devised, and how many people are needed to put it into action. Francis Bacon’s successful spread of the scientific method, Louis Pasteur’s germ theory, Ignaz Semmelweis’s campaign to convince doctors to wash their hands between childbirths, the invention of the printing press, and the invention of modern fertilizers sufficient to keep larger parts of the world fed… provide historical precedents for the idea that small groups of good thinkers can sometimes have predictably positive impacts on the world without extensively and directly engaging global politics/elections/etc.
There is reason to hope we can do better but a sobering lack of evidence that such hope is realistic.
[I’d edited my previous comment just before mattnewport wrote this; I’d previously left my comment at “There is reason to suppose we can do better”, then had decided that that was overstating the evidence and added the “—at least...”. mattnewport probably wrote this in response to the previous version; my apologies.]
As to evaluating the evidence: does anyone know where we can find data as to whether relatively well-researched charities do tend to improve poverty or other problems to which they turn their attention?
Alcohol prohibition, drug prohibition, the criminalization of prostitution, banking regulations designed to reduce bank failures due to excessive risk taking, bailing out automakers to prevent bankruptcy, policies (such as torturing prisoners) designed to prevent terrorist attacks… All are examples of actions taken with the intent to prevent X for which there is quite a lot of evidence that they did not make X less likely.
To what Matt said, I will add: actions taken with the intent of preventing harmful event X often have no effect on X but greatly contribute to an equally harmful event Y.
Most interest in politics is IMO similar to interest in sports or movies. It’s fun, and it offers an opportunity to show off a bit, gives something to talk and socialize about, helps people form communities and define their interests. But beyond these kinds of social goals, there is no true value.
I’m not totally sure what you mean by this. With that said, it does matter very much how the government distributes its resources. While the government is admittedly inefficient, that doesn’t mean it can’t be improved. Since politics determines how those resources are distributed, wouldn’t becoming involved in politics be a valid and important way to gain support for your favored causes (i.e., existential risk mitigation)? Declaring one method of gaining support to be automatically invalid, no matter the circumstances, won’t help you.
Are there currently enough soldiers? What is the best way to recruit them? Existential risk is a high-payoff and generally misunderstood issue. It looks like there is no strong community of professionals working on it at the moment. In any case, there are existing organizations, and their merits and professional opinion should be considered before anyone commits to anything.
Would increased average wealth help risk-fighters more than risk-creators?

I believe Eliezer has made a strong case that Moore’s Law, for example, mostly benefits the risk-producers:

“...Every 18 months, the minimum IQ to destroy the world drops by one point.”
I agree with your second paragraph.
As to evaluating the evidence: does anyone know where we can find data as to whether relatively well-researched charities do tend to improve poverty or other problems to which they turn their attention?
givewell.net
The arguments against voting are mostly puerile, and so is this one against political judgment. See here for an alternative view.