You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult—but it isn’t a terribly convincing act.
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.
Hmuh, I guess we won’t be able to make progress, ’cuz I pretty much wholeheartedly agree with Vladimir when he says:
This whole “outside view” methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).
and Nick Tarleton when he says:
We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.
“This one is right” for example. ;)
The groupies never seem to like the comparison with THE END OF THE WORLD cults. Maybe it is the “cult” business—or maybe it is because all of their predictions of the end of the world were complete failures.
all of their predictions of the end of the world were complete failures.
If they weren’t, we wouldn’t be here to see the failure.
It therefore seems to me that using this to “disprove” an end-of-the-world claim makes as much sense as someone trying to support a theory by saying, “They laughed at Galileo, too!”
IOW, you are simply placing the prediction in a certain outside-view class, without any particular justification. You could just as easily put SIAI claims in the class of “predictions of disaster that were averted by hard work”, and with equal justification. (i.e., none that you’ve given!)
[Note: this comment is neither pro-SIAI nor anti-SIAI, nor any comment on the probability of their claims being in any particular class. I’m merely anti-arguments-that-are-information-free. ;-) ]
The argument is not information-free; it just carries less information than implied. If people had never previously made predictions of disaster, and everything else was equal, that would tell us something different than if humans predicted disaster every day. This holds even after considering selection effects. I believe it applies somewhat even considering the possibility of dust.
Uh, it wasn’t given as an “argument” in the first place. Evidence which does more strongly relate to p(DOOM) includes the extent to which we look back and see the ashes of previous failed technological civilisations, and past major mishaps. I go into all this in my DOOM video.
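To make the selection-effects point concrete, here is a toy Bayesian sketch (my own illustrative numbers and hypothesis names, nothing stated in the thread). It shows why a long run of failed predictions of survivable disasters is strong evidence for a safer world, while for extinction-level doom the same observation can wash out entirely, since observers exist only in the histories where every prediction failed.

```python
# Toy model: how observation selection affects the evidence carried by
# N failed end-of-the-world predictions. All numbers are illustrative.

N_FAILED = 100        # hypothetical count of failed doom predictions
P_DOOM_RISKY = 0.5    # per-prediction chance of doom in a "risky" world
P_DOOM_SAFE = 0.001   # per-prediction chance of doom in a "safe" world


def likelihood_ratio(doom_is_survivable: bool) -> float:
    """Return P(we observe N failures | safe) / P(we observe N failures | risky)."""
    p_all_fail_safe = (1 - P_DOOM_SAFE) ** N_FAILED
    p_all_fail_risky = (1 - P_DOOM_RISKY) ** N_FAILED
    if doom_is_survivable:
        # Partial catastrophes leave observers either way, so a run of
        # failures genuinely discriminates between the two worlds.
        return p_all_fail_safe / p_all_fail_risky
    # Extinction-level doom: we must condition on observers existing at all.
    # Observers occur only in all-failure histories, so the observation is
    # certain under both hypotheses and the evidence cancels completely.
    return (p_all_fail_safe / p_all_fail_safe) / (p_all_fail_risky / p_all_fail_risky)


print(likelihood_ratio(doom_is_survivable=True))   # ~1.1e30: favours "safe"
print(likelihood_ratio(doom_is_survivable=False))  # 1.0: no evidence either way
```

Real doom predictions mix both kinds, which is one way to read the claim that the argument still applies somewhat even after considering selection effects.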
No, wait, there’s still something I just don’t understand. In a lot of your comments it seems you do a good job of analyzing the responses of ‘normal people’ to existential risks: they’re really more interested in lipstick, food, and sex, et cetera. And I’m with you there, evolution hasn’t hardwired us with a ‘care about low probabilities of catastrophe’ desire; the problem wasn’t really relevant in the EEA, relatively speaking.
But then it seems like you turn around and do this weird ‘ought-from-is’ operation from evolution and ‘normal people’ to how you should engage in epistemic rationality, and that’s where I completely lose you. It’s like you’re using two separate but to me equally crazy ought-from-is heuristics. The first goes like ‘Evolution didn’t hard code me with a desire to save the world, I guess I don’t actually really want to save the world then.’ And the second one is weirder and goes more like ‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.
It ends up looking like you’re using some sort of insane bizarre sister of the outside view that no one can relate with.
It’s like you’re perfectly describing the errors in most people’s thinking, but then at the end, right when you should say “Haha, those fools”, you instead completely swerve and endorse the errors, then righteously champion them for (evolutionary psychological?) reasons no one can understand.
Can you help me understand?
“‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.”
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Perhaps it is to do with the idea that not caring about THE END OF THE WORLD is normally a rational action for a typical gene-propagating agent.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
If p(DOOM) gets really large, the correct strategy might change. If it turns into a collective action problem with punishment for free riders, the correct strategy might change. However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem. Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Looking back I think I read more into your comments than was really there; I apologize.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
I agree here. The debate is over whether or not the current situation is normal.
However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem.
Tentatively agreed. Normally, even if nanotech’s gonna kill everyone, you’re not able to do much about it anyway. But I’m not sure why you bring up “Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.” when most people aren’t at all trying to maximize the number of copies of their genes in the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
Generally this is true, especially before science was around to make such meme pushing low status. But it’s also very true of global warming paranoia, which is high status even among intellectuals for some reason. (I should probably try to figure out why.) I readily admit that certain values of outside view will jump from that to ‘and so all possible DOOM-pushing groups are just trying to signal altruism or swindle people’—but rationality should help you win, and a sufficiently good rationalist should trust themselves to try and beat the outside view here.
So maybe instead of saying ‘poor epistemology’ I should say ‘odd emphasis on outside view when generally people trust their epistemology better than that beyond a certain point of perceived rationality in themselves’.
The primary thing I find objectionable about your commenting on this subject is the persistent violation of ordinary LW etiquette, e.g. by REPEATEDLY SHOUTING IN ALL CAPS and using ad hominem insults, e.g. “groupies.”
I’m sorry to hear about your issues with my writing style :-(
I have been consistently capitalising DOOM—and a few related terms—for quite a while. I believe these terms deserve special treatment—in accordance with how important everybody says they are—and ALL-CAPS is the most portable form of emphasis across multiple sites and environments. For the intended pronunciation of phrases like DOOM, SOON, see my DOOM video. It is not shouting. I rate the effect as having net positive value in the context of the intended message—and will put up with your gripes about it.
As for “groupies”—that does seem like an apt term to me. There is the charismatic leader—and then there is his fan base—which seems to have a substantial element of young lads. Few other terms pin down the intended meaning as neatly. I suppose I could have said “young fan base”—if I was trying harder to avoid the possibility of causing offense. Alas, I am poorly motivated to bother with such things. Most of the “insiders” are probably going to hate me anyway—because of my message—and the “us” and “them” tribal mentality.
Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad-hominem insults? His emphasis extended to whole paragraphs—and his insults were considerably more personal—as I recall. Or am I getting special treatment?
I have been consistently capitalising DOOM—and a few related terms—for quite a while. I believe these terms deserve special treatment—in accordance with how important everybody says they are—and all-caps is the most portable form of emphasis across multiple sites and environments.
May I suggest as a matter of style that “Doom” more accurately represents your intended meaning of specific treatment and usage as a noun that isn’t just a description? Since ALL CAPS has the interpretation of mere shouting, you fail to communicate your meaning effectively if you use all caps instead of Title Case in this instance. Consider ‘End Of The World’ as a superior option.
Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad-hominem insults? His emphasis extended to whole paragraphs—and his insults were considerably more personal—as I recall. Or am I getting special treatment?
Let’s be honest. If we’re going to consider that incident as an admissible tu quoque to any Yudkowskian then we could justify just about any instance of obnoxious social behaviour thereby. I didn’t object to your comments here simply because I didn’t consider them out of line on their own merits. I would have no qualms about criticising actual bad behaviour just because Eliezer acted like a douche.
Mind you, I am not CarlShulman, and the relevance of hypocrisy to Carl’s attempt at a status slap is far greater than if it were an attempt by me. Even so, you could replace “Or am I getting special treatment?” with “Or are you giving me special treatment?” and so reduce the extent to which you signal that it is OK to alienate or marginalise you.
Title Caps would be good too—though “DOOM” fairly often appears at the start of a sentence—and there it would be completely invisible. “Doom” is milder. Maybe “DOOM” is too much—but I can live with it. After all, this is THE END OF THE WORLD we are talking about!!! That is pretty ###### important!!!
If you check the THE END IS NIGH placards, they are practically all in ALL CAPS. I figure those folk are the experts in this area—and that by following their traditions, I am utilizing their ancient knowledge and wisdom on the topic of how best to get this critical message out.
A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...
A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...
Or utterly ignored because people think you’re being a tool. One or the other. (I note that this is an unfortunate outcome because apart from this kind of pointless contrariness people are more likely to acknowledge what seem to be valid points in your response to Carl. I don’t like seeing the conversational high ground going to those who haven’t particularly earned it in the context.)
Well, my CAPS are essentially a parody. If the jester capers in the same manner as the noble, there will often be some people who will think that he is dancing badly—and not understand what is going on.
There will be others who understand perfectly and think he’s doing a mediocre job of it.
You ignored the word ‘repetitive.’ As you say, you have a continuing policy of carelessness towards causing offense, i.e. rudeness. And no, I don’t think that the comment you mention was appropriate either (versus off-LW communication), but given that it was deleted I didn’t see reason to make a further post about it elsewhere. Here are some recent comment threads in which I called out Eliezer and others for ad hominem attacks.
...not as much as you ignored the words “consistently” and “for quite a while”.
I do say what I mean. For instance, right now you are causing me irritation—by apparently pointlessly wasting my time and trying to drag me into the gutter. On the one hand, thanks for bothering with feedback… …but on the other, please go away now, Carl—and try to find something more useful to do than bickering here with me.
I don’t think it’s that. I think it’s just annoyance at perceived persistently bad epistemology in people making the comparison over and over again as if each iteration presented novel predictions with which to constrain anticipation.
If there really is “bad epistemology”, feel free to show where.
There really is an analogy between the SIAI and various THE END OF THE WORLD cults—as I previously spelled out here.
You might like to insinuate that I am reading more into the analogy than it deserves—but basically, you don’t have any case there that I can detect.
Everyone knows the analogy exists. It’s just a matter of looking at the details to see if it has any bearing on whether or not SIAI is a useful organization.
You asked: “What makes that comparison spring to mind?” when I mentioned cults.
Hopefully, you now have your answer—for one thing, they are like an END OF THE WORLD cult—in that they use fear of THE END OF THE WORLD as a publicity and marketing tool.
Such marketing has a long tradition behind it—e.g. see the Daisy Ad.
Also, FOOM rhymes with DOOM. There!
And this response was upvoted … why? This is supposed to be a site where rational discourse is promoted, not a place like Pharyngula or talk.origins where folks who disagree with the local collective worldview get mocked by insiders who then congratulate each other on their cleverness.
I voted it up. It was short, neat, and made several points.
Probably the main claim is that the relationship between the SIAI and previous END OF THE WORLD outfits is a meaningless surface resemblance.
My take on the issue is that DOOM is—in part—a contagious mind-virus, with ancient roots—which certain “vulnerable” people are inclined to spread around—regardless of whether it makes much sense or not.
With the rise of modern DOOM “outfits”, we need to understand the sociological and memetic aspects of these things all the more:
Will we see more cases of “DOOM exploitation”—from those out to convert fear of the imminent end into power, wealth, fame or sex?
Will a paranoid society take steps to avoid the risks? Will it freeze like a rabbit in the headlights? Or will it result in more looting and rape cases?
What is the typical life trajectory of those who get involved with these outfits? Do they go on to become productive members of society? Or do they wind up having nightmares about THE END OF THE WORLD—while neglecting their interpersonal relationships and personal hygiene—unless their friends and family stage an “intervention”?
...and so on.
Rational agents should understand the extent to which they are infected by contagious mind viruses—ones that spread for their own benefit and without concern for the welfare of their hosts. DOOM definitely has the form of such a virus. The issue as I see it is: how much of the observed phenomenon of modern-day DOOM “outfits” does it explain?
To study this whole issue, previous doomsday cults seem like obvious and highly-relevant data points to me. In some cases their DOOM was evidently a complete fabrication. They provide pure examples of fake DOOM—exactly the type of material a sociologist would need to understand that aspect of the DOOM-mongering phenomenon.
I agree that it’s annoying when people are mocked for saying something they didn’t say. But Nesov was actually making an implicit argument here, not just having fun: he was pointing out that timtyler’s analogies tend to be surface-level and insubstantive. The kind of thing that I’ve seen on Pharyngula are instead unjustified ad hominem attacks that don’t shed any light on possible flaws in the poster’s arguments. That said, I think Nesov’s comment was flirting with the line.
In the case of Tim in particular, I’m way past that.
“Way past that” meaning “so exasperated with Tim that rational discourse seems just not worth it”? Hey, I can sympathize. Been there, done that.
But still, it annoys me when people are attacked by mocking something that they didn’t say, but that their caricature should have said (in a more amusing branch of reality).
It annoys me more when that behavior is applauded.
And it strikes me as deeply ironic when it happens here.
But still, it annoys me when people are attacked by mocking something that they didn’t say, but that their caricature should have said (in a more amusing branch of reality)
That’s very neatly put.
I’m not dead certain it’s a fair description of what Vladimir Nesov said, but it describes a lot of behavior I’ve seen. And there’s a parallel version about the branches of reality which allow for easier superiority and/or more outrage.
The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it’s not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant. My caricature simply followed the pattern.
The analogies are obvious. They may be irrelevant. They are not obviously irrelevant.
Too fine a distinction to argue, wouldn’t you agree?
Talking about obviousness as if it was inherent in a conclusion is typical mind projection fallacy. What it generally implies (and what I think you mean) is that any sufficiently rational person would see it; but when lots of people don’t see it, calling it obvious is against social convention (it’s claiming higher rationality and thus social status than your audience). In this case I think that to your average reader the analogies aren’t obviously irrelevant, even though I personally do find them obviously irrelevant.
When you’re trying to argue that something is the case (i.e., that the analogies are irrelevant), the difference between what you are arguing being OBVIOUS and it merely being POSSIBLE is vast.
You seem to confuse the level of certainty with the difficulty of discerning it.
You made a claim that they were obviously irrelevant.
The respondent expressed uncertainty as to their irrelevance (“They may be irrelevant.”) as opposed to the certainty in “The analogies are obvious.” and “They are not obviously irrelevant.”
That is a distinction between something being claimed as obvious and the same thing being seen as doubtful.
If you do not wish to explain a point there are many better options* than inaccurately calling it obvious. For example, linking to a previous explanation.
*In rationality terms. In argumentation terms, these techniques are often inferior to the technique of the emperor’s tailors.
The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it’s not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant.
Uh, they are not “obviously irrelevant”. The SIAI behaves a bit like other DOOM-mongering organisations have done—and a bit like other FUD marketing organisations have done.
Understanding the level of vulnerability of the human psyche to the DOOM virus is a pretty critical part of assessing what level of paranoia about the topic is reasonable.
It is, in fact, very easy to imagine how a bunch of intrepid “friendly folk” who think they are out to save the world—might—in the service of their cause—exaggerate the risks, in the hope of getting attention, help and funds.
Indeed, such an organisation is most likely to be founded by those who have extreme views about the risks, attract others who share similar extreme views, and then have a hard time convincing the rest of the world that they are, in fact, correct.
There are sociological and memetic explanations for the “THE END IS NIGH” phenomenon that are more-or-less independent of the actual value of p(DOOM). I think these should be studied more, and applied to this case—so that we can better see what is left over.
There has been some existing study of DOOM-mongering. There is also the associated Messiah complex—an intense desire to save others. With the rise of the modern doomsday “outfits”, I think more study of these phenomena is warranted.
Sometimes it is fear that is the mind-killer. FUD marketing exploits this to help part marks from their money. THE END OF THE WORLD is big and scary—a fear superstimulus—and there is a long tradition of using it to move power around and achieve personal ends—and the phenomenon spreads around virally.
I appreciate that this will probably turn the stomachs of the faithful—but without even exploring the issue, you can’t competently defend the community against such an analysis—because you don’t know to what extent it is true—because you haven’t even looked into it.