Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction
This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating the risk of humans going extinct. I would trade increased chance of extinction for a commensurate change in the probable outcomes if we survive. Frankly, I would consider it insane not to be willing to make such a trade.
I meant “optimal within the category of X-risk reduction”, and I see your point.
Upvoted.
We’ve had agreements and disagreements here. This is one of the agreements.
I disagree. If we can avoid being wiped out, or otherwise have our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine. I really think the “maxipok” term of our efforts toward the greater good can’t fail to absolutely dominate all other terms.
That sounds very optimistic. I just don’t see any reason to expect the future to be so bright if human genetic, cultural and technological evolution goes on under the usual influence of competition. Unless we do something rather drastic (e.g. FAI or some other kind of positive singleton) in the short term, it seems inevitable that we end up in a Malthusian hell.
Most of what I consider ‘good’ is, for the purposes of competition, a complete waste of time.
Lack of interest in existential risk reduction makes perfect sense from an evolutionary perspective. As I have previously explained:
“Organisms can be expected to concentrate on producing offspring—not indulging paranoid fantasies about their whole species being wiped out!”
Most people are far more concerned about other things—for perfectly sensible and comprehensible reasons.
This is a bizarre digression from the parent comment. You’re already having this exact conversation elsewhere in the thread!
It follows from—“This seems to assume that existential risk reduction is the only thing people care about.”—and—“I disagree.”—People do care about other things. They mostly care about other things.
Your last sentence seems true.
I think I also buy the evolved-intelligence-should-be-myopic argument, even though we have only one data point, and don’t need the evolutionary argument to lend support to what direct observation already shows in our case.
So, I can’t see why this is downvoted except that it’s somewhat of a tangent.
Well, I wasn’t really claiming that “evolved-intelligence-should-be-myopic”.
Evolved-intelligence is what we have, and it can predict the future—at least a little:
http://alife.co.uk/essays/evolution_sees/
Even if the “paranoid fantasies” have considerable substance, it would still usually be better (for your genes) to concentrate on producing offspring. Averting disaster is a “tragedy of the commons” situation. Free riding—letting someone else do the averting—may well reap the benefits without paying the costs.
It seems pretty clear that very few care much about existential risk reduction.
That makes perfect sense from an evolutionary perspective. Organisms can be expected to concentrate on producing offspring—not indulging paranoid fantasies about their whole species being wiped out!
The bigger puzzle is why anyone seems to care about it at all. The most obvious answer is signalling. For example, if you care for the fate of everyone in the whole world, that SHOWS YOU CARE—a lot! Also, the END OF THE WORLD acts as a superstimulus to people’s warning systems. So—they rush and warn their friends—and that gives them warm fuzzy feelings. They get credit for raising the alarm about the TERRIBLE DANGER—and so on.
Disaster movies—like 2012—trade on people’s fears in this area, stimulating and fuelling their paranoia further by providing them with fake memories of it happening. One can’t help wondering whether FEAR OF THE END is a healthy phenomenon overall—and, if not, whether it is really sensible to stimulate those fears.
Does the average human—on being convinced the world is about to end—behave better—or worse? Do they try and hold back the end—or do they rape and pillage? If their behaviour is likely to be worse then responsible adults should think very carefully before promoting the idea that THE END IS NIGH on the basis of sketchy evidence.
Given the current level of technology the end IS nigh, the world WILL end, for every person individually, in less than a century. On average it’ll happen around the 77-year mark for males in the US. This has been the case through all of history (for most of it at a much younger age) and yet people generally do not rape and pillage. Nor are they more likely to do so as the end of their world approaches.
Thus, I do not think there is much reason for concern.
People care (to varying degrees) about how the world will be after they die. People even care about their own post-mortem reputations. I think it’s reasonable to ask whether people will behave differently if they anticipate that the world will die along with them.
The elderly are not known for their looting and rabble-rousing tendencies—partly due to frailty and sickness.
Those who believe the world is going to end do sometimes cause problems—e.g. see The People’s Temple and The Movement for the Restoration of the Ten Commandments of God.
This seems correct. Do people object on style? Is it a repost? Off topic?
I think it’s bad form to accuse other people of being insincere without clearly defending your remarks. By claiming that the only reason anyone cares about existential risk is signalling, Tim is saying that a lot of people who appear very serious about X-risk reduction are either lying or fooling themselves. I know many altruists who have acted in a way consistent with being genuinely concerned about the future, and I don’t see why I should take Tim’s word over theirs. It certainly isn’t the “most obvious answer.”
I also don’t like this claim that people are likely to behave worse when they think they’re in impending danger, because again, I don’t agree that it’s intuitive, and no evidence is provided. It also isn’t sufficient; maybe some risks are important enough that they ought to be addressed even if addressing them has bad cultural side effects. I know that the SIAI people, at least, would definitely put uFAI in this category without a second thought.
Hm, I didn’t get that out of timtyler’s post (just voted up). He didn’t seem to be saying, “Each and every person interested in this topic is doing it to signal status”, but rather, “Hey, our minds aren’t wired up to care about this stuff unless maybe it signals”—which doesn’t seem all that objectionable.
DNDV (did not downvote). Sure, signalling has a lot to do with it, but the type of signalling he suggests doesn’t ring true with what I have seen of most people’s behaviour. We do not seem to be great proselytisers most of the time.
The ancient circuits that x-risk triggers in me are those of feeling important, of being a player in the tribe’s future, with the benefits that that entails. Of course I won’t get the women if I eventually help save humanity, but my circuits that trigger on “important issues” don’t seem to know that. In short, by trying to deal with important issues I am trying to signal raised status.
Ok, so people don’t like the implication of either the evo-psych argument, or the signaling argument. They both seem plausible, if speculative.
I didn’t say that “the only reason anyone cares about existential risk is signalling”. I was mostly trying to offer an explanation for the observed fact that relatively few give the matter much thought.
I was raising the issue of whether typical humans behave better or worse—if they become convinced that THE END IS NIGH. I don’t know the answer to that. I don’t know of much evidence on the topic. Is there any evidence that proclaiming that the END OF THE WORLD is at hand has a net positive effect? If not, then why are some so keen to do it—if not for signalling and marketing purposes?
I thought people here were compatibilists. Saying that someone does something of their own free will is compatible with saying that their actions are determined. Similarly, saying that they are genuinely concerned is compatible with saying that their expressions of concern arise (causally) from “signaling”.
That’s what Tim could have said. His post may have got a better reception if he left off:
I mean, I most certainly do care and the reasons are obvious. p(wedrifid survives | no human survives) = 0
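Spelled out (a sketch of the implied argument; the event labels here are mine, not wedrifid’s), that conditional caps anyone’s personal survival odds at humanity’s:

\[
p(\text{I survive}) = p(\text{I survive}\mid\text{humanity survives})\, p(\text{humanity survives}) \;\le\; p(\text{humanity survives}),
\]

since \(p(\text{I survive}\mid\text{no human survives}) = 0\) leaves only the first term of the total-probability expansion.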
What I mean is things like:
“Citation Index suggests that virtually nothing has been written about the cost effectiveness of reducing human extinction risks,” and Nick Bostrom and Anders Sandberg noted, in a personal communication, that there are orders of magnitude more papers on coleoptera—the study of beetles—than “human extinction.” Anyone can confirm this for themselves with a Google Scholar search: coleoptera gets 245,000 hits, and “human extinction” gets fewer than 1,200.
http://www.good.is/post/our-delicate-future-handle-with-care/
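Taking the quoted Scholar counts at face value, the gap works out to a bit over two orders of magnitude:

\[
\frac{245{,}000}{1{,}200} \approx 204 \approx 10^{2.3}.
\]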
I am not saying that nobody cares. The issue was raised because you said:

“This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating the risk of humans going extinct.”
...and someone disagreed!!!
People do care about other things. They mostly care about other things. And the reason for that is pretty obvious—if you think about it.
Wow… this was my tangent? Then “WOO! Whatever point I was initially making!”, or something.
The common complaint here is that the signalled motive is usually wonderful and altruistic—in this case SAVING THE WORLD for everyone—whereas the actual motive behind the signalling is usually selfish (SHOWING YOU CARE, being a hero, selflessly warning others of the danger, etc.).
So—if the signalling theory is accepted—people are less likely to believe there is altruism underlying the signal any more (because there isn’t any). It will seem fake—the mere appearance of altruism.
The signalling theory is unlikely to appeal to those sending the signals. It wakes up their audience, and reduces the impact of the signal.