I think it’s bad form to accuse other people of being insincere without clearly defending your remarks. By claiming that the only reason anyone cares about existential risk is signalling, Tim is saying that a lot of people who appear very serious about X-risk reduction are either lying or fooling themselves. I know many altruists who have acted in a way consistent with being genuinely concerned about the future, and I don’t see why I should take Tim’s word over theirs. It certainly isn’t the “most obvious answer.”
I also don’t like this claim that people are likely to behave worse when they think they’re in impending danger, because again, I don’t agree that it’s intuitive, and no evidence is provided. It also isn’t sufficient; maybe some risks are important enough that they ought to be addressed even if addressing them has bad cultural side effects. I know that the SIAI people, at least, would definitely put uFAI in this category without a second thought.
Hm, I didn’t get that out of timtyler’s post (just voted up). He didn’t seem to be saying, “Each and every person interested in this topic is doing it to signal status”, but rather, “Hey, our minds aren’t wired up to care about this stuff unless maybe it signals”—which doesn’t seem all that objectionable.
DNDV (did not downvote). Sure, signalling has a lot to do with it, but the type of signalling he suggests doesn’t ring true with what I have seen of most people’s behaviour. We do not seem to be great proselytisers most of the time.
The ancient circuits that x-risk triggers in me are those of feeling important, of being a player in the tribe’s future, with the benefits that that entails. Of course I won’t get the women if I eventually help save humanity, but my circuits that trigger on “important issues” don’t seem to know that. In short, by trying to deal with important issues I am trying to signal raised status.
Ok, so people don’t like the implication of either the evo-psych argument or the signaling argument. They both seem plausible, if speculative.
I didn’t say that “the only reason anyone cares about existential risk is signalling”. I was mostly trying to offer an explanation for the observed fact that relatively few give the matter much thought.
I was raising the issue of whether typical humans behave better or worse—if they become convinced that THE END IS NIGH. I don’t know the answer to that. I don’t know of much evidence on the topic. Is there any evidence that proclaiming the END OF THE WORLD is at hand has a net positive effect? If not, then why are some so keen to do it—if not for signalling and marketing purposes?
I thought people here were compatibilists. Saying that someone does something of their own free will is compatible with saying that their actions are determined. Similarly, saying that they are genuinely concerned is compatible with saying that their expressions of concern arise (causally) from “signaling”.
That’s what Tim could have said. His post may have got a better reception if he left off:
I mean, I most certainly do care and the reasons are obvious. p(wedrifid survives | no human survives) = 0
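To spell out the arithmetic behind that conditional probability (a minimal sketch using only the law of total probability; the event labels are just shorthand, not anything from the original exchange):

\[
\begin{aligned}
P(\text{wedrifid survives}) ={}& P(\text{survives}\mid\text{humans extinct})\,P(\text{extinct})\\
&+ P(\text{survives}\mid\text{humans not extinct})\,P(\text{not extinct})\\
={}& 0\cdot P(\text{extinct}) + P(\text{survives}\mid\text{not extinct})\,\bigl(1-P(\text{extinct})\bigr).
\end{aligned}
\]

Holding \(P(\text{survives}\mid\text{not extinct})\) fixed, any reduction in \(P(\text{extinct})\) can only raise \(P(\text{wedrifid survives})\), so plain self-interest already supplies a reason to care about extinction risk.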
What I mean is things like:
“Citation Index suggests that virtually nothing has been written about the cost effectiveness of reducing human extinction risks,” and Nick Bostrom and Anders Sandberg noted, in a personal communication, that there are orders of magnitude more papers on coleoptera—the study of beetles—than “human extinction.” Anyone can confirm this for themselves with a Google Scholar search: coleoptera gets 245,000 hits, and “human extinction” gets fewer than 1,200.”
http://www.good.is/post/our-delicate-future-handle-with-care/
I am not saying that nobody cares. The issue was raised because you said:
This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct.
...and someone disagreed!!!
People do care about other things. They mostly care about other things. And the reason for that is pretty obvious—if you think about it.
Wow… this was my tangent? Then “WOO! Whatever point I was initially making!”, or something.
The common complaint here is that the signalled motive is usually wonderful and altruistic—in this case SAVING THE WORLD for everyone. The actual motive for signalling, by contrast, is usually selfish (SHOWING YOU CARE, being a hero, selflessly warning others of the danger, and so on).
So—if the signalling theory is accepted—people are less likely to believe there is altruism underlying the signal any more (because there isn’t any). It will seem fake—the mere appearance of altruism.
The signalling theory is unlikely to appeal to those sending the signals. It wakes up their audience, and reduces the impact of the signal.
This seems correct. Do people object on style? Is it a repost? Off topic?