herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
Can you link to a Robin Hanson article on this topic so that people who aren’t already familiar with his opinions on this subject (read: LW newbies like me) know what this is about?
Or alternately, I propose this sequence:
regular medical care by default / alt-med / regular medical care because alt-med is unscientific
This is more in line with the other examples. I second the request for an edit. Yvain, you could add “Robin Hanson” to the fourth slot: it would kinda mess up your triplets, but with the justification that it’d be a funny example of just how awesomely contrarian Robin Hanson is. :D
Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples. But I have no idea how often the meta-level contrarian position is actually correct, and I fear that I might get too much of a kick out of the positions you list in your triplets simply because my position is more meta and I associate metaness with truth when in reality it might be negatively correlated. Perhaps you could think of a few more-wrong meta-contrarian positions to balance what may be a small affective bias?
Half-agree with you, as none of the 18 positions are ‘correct’, but I don’t know what you mean by ‘useless’. Instead of generalizing I’ll list my personal positions:
KKK-style racist / politically correct liberal / “but there are scientifically proven genetic differences”
If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it’d result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the ‘let’s not promote racism’ dilemma. I’m not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.
misogyny / women’s rights movement / men’s rights movement
No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I’m pretty ignorant here.
conservative / liberal / libertarian
What can I say, it’s politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the ‘Yeah yeah Singularity AI transhumanism wooooo!’ meme would get bigger which would increase existential risk. So uh… never mind, I dunno.)
herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
Too ignorant to comment. My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn’t help much with my acne. I’ve gotten a few small surgeries which made me better. Overall conventional medicine seems to have helped me a fair bit and costs me little. I don’t even know what Robin Hanson’s claims are, though. A link would be great.
don’t care about Africa / give aid to Africa / don’t give aid to Africa
Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it’s really position 4 (give aid to Earth) that’s the correct one, I dunno.
Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman
Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.
In conclusion, I think that there isn’t any real trend here, but maybe we’re just disputing ways of carving up usefulness? It is subjective after all.
Added: Explanations for downvotes are always welcome. Lately I’ve decided to try less to appear impressive and consistently rational (like Carl Shulman) and try more to throw lots of ideas around for critique, criticism, and development (like Michael Vassar). So although downvotes are useful indicators of where I might have gone wrong, a quick explanatory comment is even more useful and very unlikely to be responded to with indignation or hostility.
KKK-style racist / politically correct liberal / “but there are scientifically proven genetic differences”
If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it’d result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the ‘let’s not promote racism’ dilemma. I’m not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.
Awareness of genetic differences between races constitutes negative knowledge in many cases; that is, it leads to anticipations that match the outcomes worse than they otherwise would. Suppose everyone suspects that blue-haired people are slightly less intelligent on average for genetic reasons, you want to hire the most intelligent person for a job, and after a very long selection process (that other people were involved in) you are left with two otherwise equally good candidates, one blue-haired and one not. The egoistically rational thing is not to pick the non-blue-haired person on account of that genetic difference. The other evidence on their intelligence is not independent of the genetic factors that correlate with blue hair, so any such genetic disadvantage is already figured in. If anything you should pick the blue-haired person, because extreme sample selection bias is likely and any blue-haired person still left at the end of the selection process needed to be very intelligent to still be in the race. (So no, this isn’t a tragedy-of-the-commons situation.)
It’s pretty much never going to be the case that blue hair is your best information about someone’s intelligence; even their clothes or style of speech should usually be a better source.
Even for groups, “genetic differences” can be pretty misleading: height is a strongly heritable trait, and nevertheless differences in height can easily be dominated by environmental factors.
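A minimal simulation sketch of the screening-off argument above (added for illustration, not part of the original comment; the 0.1 group gap, the assessment noise, and the selection cutoff are all made-up numbers):

```python
# Minimal sketch: two groups share a latent ability distribution except for a
# small assumed mean gap; a demanding selection process keeps only candidates
# whose noisy evaluations clear a high bar. Among the survivors the group gap
# shrinks sharply, because the evaluations already carry most of the
# information the group label would have provided.
import random

random.seed(0)

def selected_mean(group_mean, n=500_000, cutoff=2.0):
    """Mean latent ability of candidates who survive the selection filter."""
    survivors = []
    for _ in range(n):
        ability = random.gauss(group_mean, 1.0)        # latent trait
        evaluation = ability + random.gauss(0.0, 0.5)  # noisy but informative assessment
        if evaluation > cutoff:                        # long, demanding selection process
            survivors.append(ability)
    return sum(survivors) / len(survivors)

baseline = selected_mean(0.0)   # reference group
blue = selected_mean(-0.1)      # "blue-haired" group with a slightly lower prior mean

print(f"selected baseline candidates, mean ability:    {baseline:.3f}")
print(f"selected blue-haired candidates, mean ability: {blue:.3f}")
```

With these assumptions the gap among the finalists comes out at a small fraction of the 0.1 prior gap, and it shrinks further the more informative the assessment is.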
misogyny / women’s rights movement / men’s rights movement
No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I’m pretty ignorant here.
Depends on what is meant by the women’s and men’s rights movements, really.
The fact that men are treated unfairly on some issues does not mean that we have overshot in treating women fairly. Weighing these off against each other is not productive; everyone should be treated fairly irrespective of gender and other factors. But since unfair treatment due to gender still exists, tracking how treatment varies by gender may still be necessary, though differences in outcome don’t automatically imply unfairness, only that unfairness is a hypothesis that deserves to be considered.
conservative / liberal / libertarian
What can I say, it’s politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the ‘Yeah yeah Singularity AI transhumanism wooooo!’ meme would get bigger which would increase existential risk. So uh… never mind, I dunno.)
(Not mentioning tragedies of the commons, since non-crazy libertarians usually agree that some level of government is necessary for those.)
Government competence vs. private-sector competence is a function of organization size, productive selective pressures, culture, etc., and even though the private sector has some natural advantages it doesn’t dominate universally, particularly where functioning markets are difficult to set up (e.g. high-speed railway lines). Regulation may be necessary to break out of some Nash equilibria, and to overcome momentum in some cases (e.g. thermal insulation in building codes, though there should be ways to receive exemptions when sensible). I also don’t see some level of wealth redistribution as inherently evil.
herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
Too ignorant to comment. My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn’t help much with my acne. I’ve gotten a few small surgeries which made me better. Overall conventional medicine seems to have helped me a fair bit and costs me little. I don’t even know what Robin Hanson’s claims are, though. A link would be great.
Basically: no evidence that marginal health spending improves health, and some evidence against; cut US health spending in half.
IMO the most sensible approach would be single-payer universal health care for everything that is known to be effective, plus allowing people to purchase anything safe beyond that.
don’t care about Africa / give aid to Africa / don’t give aid to Africa
Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it’s really position 4 (give aid to Earth) that’s the correct one, I dunno.
I understood “don’t give aid to Africa” as “don’t give aid to Africa because it’s counterproductive”, which depends on the type of giving, so I would read your position as a position 4.
Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman
Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.
Ok, useless is the wrong word here for position 2, but position 4 would be that it shouldn’t even matter whether he is a Muslim, because there is nothing wrong with being a Muslim in the first place (other than being a theist).
Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa.
But… but… but saving the world doesn’t signal the same affiliations as saving Africa!
My impression is that Hanson’s take on conventional medicine is that half the money spent is wasted. However, I don’t know if he’s been very specific about which half.
The RAND Health Insurance Experiment, the study he frequently cites, didn’t investigate the benefits of catastrophic medical insurance or of care that people pay for out of their own pockets, and found the rest useless.
Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it’s really position 4 (give aid to Earth) that’s the correct one, I dunno.
Why is giving money to x-risk charities conducive to saving the world? (I don’t necessarily disagree, but want to see what you have to say to substantiate your claim.) In particular, what’s your response to Holden’s comment #12 at the GiveWell Singularity Summit thread?
Sorry, I didn’t mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I’m going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I’ve seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren’t up to snuff when it comes to actually saving the world (in the case of SIAI perhaps even being actively harmful). Despite and because of being involved with SIAI I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years when more academics start speaking out publicly about Singularity issues, which will only happen if SIAI stays around, gets funding, keeps on writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the endeavor of building FAI in earnest, after Eliezer writes his book on rationality (which will also likely almost totally outshine any of his previous PR mistakes). It’s difficult to hire the very best FAI researchers without money, and SIAI doesn’t have money without donations.
Now, perhaps you are skeptical that FAI or even AGI could be developed by a team of the most brilliant AI researchers within the next, say, 20 years. That skepticism is merited, and to be honest I have little (but still a non-trivial amount of knowledge) to go on besides the subjective impressions of those who work on the problem. I do however have strong arguments that there is a ticking clock till AGI, with the clock going off before 2050. I can’t give those arguments here, and indeed it would be against protocol to do so, as this is Less Wrong and not SIAI’s forum (despite it being unfortunately treated as such a few times in the past). Hopefully at some point someone, at SIAI or no, will write up such an analysis: currently Steve Rayhawk and Peter de Blanc of SIAI are doing a literature search that will with luck end up in a paper on the current state of AGI development, or at least some kind of analysis besides “Trust us, we’re very rational”.
All that said, my impression is that SIAI is doing good of the kind that completely outweighs e.g. aid to Africa if you’re using any kind of utilitarian calculus. And if you’re not using anything like utilitarian calculus, then why are you giving aid to Africa and not e.g. kittens? FHI also seems to be doing good, academically respectable, and necessary research on a rather limited budget. So if you’re going to donate money, I would first vote SIAI, and then FHI, but I can understand the position of “I’m going to hold onto my money until I have a better picture of what’s really important and who the big players are.” I can’t, however, understand the position of those who would give aid to Africa besides assuming some sort of irrationality or ignorance. But I will read over your post on the matter and see if anything there changes my mind.
•As I said, I cut my planned sequence of postings on SIAI short. There’s more that I would have liked to say and more that I hope to say in the future. For now I’m focusing on finishing my thesis.
•An important point that did not come across in my postings is that I’m skeptical of philanthropic projects having a positive impact on what they’re trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell’s page on Social Programs That Just Don’t Work. At present I think that it’s a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.
•On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods. Consideration of counterfactuals is very tricky and very smart people often get it wrong.
•Quite possibly SIAI is having a positive holistic impact - I don’t have confidence that this is so, the situation is just that I don’t have enough information to judge from the outside.
•Regarding the time line for AGI and the feasibility of FAI research, see my back and forth with Tim Tyler here.
•My thinking as to what the most important causes to focus at present are is very much in flux. I welcome any information that you or others can point me to.
•My reasons for supporting developing world aid in particular at present are various and nuanced and I haven’t yet had the time to write out a detailed explanation that’s ready for public consumption. Feel free to PM me with your email address if you’d like to correspond.
An important point that did not come across in my postings is that I’m skeptical of philanthropic projects having a positive impact on what they’re trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell’s page on Social Programs That Just Don’t Work. At present I think that it’s a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.
If you had a post on this specifically planned then I would be interested in reading it!
I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity [...]
Is that what they are doing?!?
They seem to be funded by promoting the idea that DOOM is SOON—and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.
One might naively expect such an organisation would typically act so as to exaggerate the risks—so as to increase the flow of donations. That seems pretty consistent with their actions to me.
From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter—since they have glaringly-obvious vested interests.
They seem to be funded by promoting the idea that DOOM is SOON—and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.
Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW, but that small CHANCE has a large enough impact on EXPECTED UTILITY that we should really figure out more about the problem, because someone, not necessarily SIAI, might have to deal with it EVENTUALLY.
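A toy expected-value comparison along these lines (added for illustration, not from the thread; every number below is an assumption, including the probability and the marginal-effect terms):

```python
# Toy expected-value comparison with entirely made-up numbers, showing only the
# structure of the argument: a small probability of an enormous loss can carry
# a large expected-utility weight, but which side wins depends entirely on the
# disputed inputs.
p_doom = 0.01                      # assumed chance of catastrophe this century
lives_at_stake = 7_000_000_000     # rough world population
lives_per_aid_dollar = 1 / 1_000   # assumed: about $1,000 per life via ordinary aid
risk_cut_per_xrisk_dollar = 1e-11  # assumed fraction of the doom risk removed per donated dollar

expected_lives_aid = lives_per_aid_dollar
expected_lives_xrisk = p_doom * risk_cut_per_xrisk_dollar * lives_at_stake

print(f"expected lives per dollar, ordinary aid: {expected_lives_aid:.6f}")
print(f"expected lives per dollar, x-risk (toy): {expected_lives_xrisk:.6f}")
```

With these inputs the two figures land within an order of magnitude of each other; the disagreement in the thread is precisely over what the inputs should be.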
One might naively expect such an organization would typically act so as to exaggerate the risks—but SIAI doesn’t seem to be doing that so one’s naive expectations would be wrong. It’s amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read those? It’s not like it’d be a good idea to actually read an organization’s actual literary output before judging them based primarily on the perceived arrogance of one of their research fellows, that’d be stupid.
From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter—since they have glaringly-obvious vested interests.
What vested interests? Money? Do you honestly think that the people at SIAI couldn’t get 5 times as much money by working elsewhere? Status? Do you honestly think that making a seemingly crazy far mode belief that pattern matches to doomsdayism part of your identity for little pay and lots of hard work is a good way of gaining status? Eliezer would take a large status hit if he admitted he was wrong about this whole seed AI thing. Michael Vassar would too. But everyone else? Really good thinkers like Anna Salamon and Carl Shulman and Steve Rayhawk who have proved here on Less Wrong that they have exceptionally strong rationality, and who are consistently more reasonable than they have any right to be? (Seriously, you could give Steve Rayhawk the most retarded argument ever and he’d find a way to turn it into a reasonable argument worth seriously addressing. These people take their epistemology seriously.)
Maybe people at SIAI are, you know, actually worried about the problems because they know how to take ideas seriously instead of using the absurdity heuristic and personal distaste for Eliezer and then rationalizing their easy beliefs with vague outside view reference class tennis games or stupid things like that.
I like reading Multifoliaterose’s posts. He raises interesting points, even if I think they’re generally unfair. I can tell that he’s at least using his brain. When most people criticize SIAI (really Eliezer, but it’s easier to say SIAI ‘cuz it feels less personal), they don’t use any parts of their brain besides the ‘rationalize reason for not associating with low status group’ cognitive module.
timtyler, this comment isn’t really a direct reply to yours so much as a venting of general frustrations. But I get annoyed by the attitude of ‘haha let’s be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world’. Carl Shulman would never write a reply anything like the one I’ve written. Carl Shulman is always reasonable and charitable. And I know Carl Shulman works incredibly hard on being reasonable, and taking into account opposing viewpoints, and not letting his affiliation with SIAI cloud his thinking, and still doing lots of good, reasonable, solid work on explaining the problem of Friendliness to the academic sphere in reasonable, solid journal articles and conference papers.
It’s really annoying to me to have that go completely ignored just because someone wants to signal their oh-so-metacontrarian beliefs about SIAI. Use epistemic hygiene. Think before you signal. Don’t judge an entire organization’s merit off of stupid outside view comparisons without actually reading the material. Take the time to really update on the beliefs of longtime x-rationalists that have probably thought about this a lot more than you have. If you really think it through and still disagree, you should have stronger and more elegant counterarguments than things like “they have glaringly-obvious vested interests”. Yeah, as if that didn’t apply to anyone, especially anyone who thinks that we’re in great danger and should do something about it. They have pretty obvious vested interests in telling people about said danger. Great hypothesis there chap. Great way to rationalize your desire to signal and do what is easy and what appeals to your vanity. Care to list your true rejections?
And if you think that I am being uncharitable in my interpretation of your true motivations, then be sure to notice the symmetry.
‘haha let’s be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world’.
I hope I don’t come across as thinking “the worst” about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.
One might naively expect such an organization would typically act so as to exaggerate the risks—but SIAI doesn’t seem to be doing that so one’s naive expectations would be wrong.
Really? Really? You actually think the level of DOOM is cold realism—and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along—DOOM is an old school of marketing in the field.
You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn’t seem to matter much—the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower—to help it meet its aims.
FWIW, I don’t see what I am saying as particularly “contrarian”. A lot of people would be pretty sceptical about the end of the world being nigh—or the idea that a bug might take over the world—or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers—if that is what you mean.
Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation, that wants your dollars to help them SAVE THE WORLD may not be your best move. The “follow the money” principle is simple—and often produces good results.
FWIW, I don’t see what I am saying as particularly “contrarian”. A lot of people would be pretty sceptical about the end of the world being nigh—or the idea that a bug might take over the world—or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers—if that is what you mean.
Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.
You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn’t seem to matter much—the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower—to help it meet its aims.
I’m confused. Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I’m confused as to how this is relevant to the merit of SIAI’s purpose. SIAI’s never claimed to be fundamentally opposed to having resources. Can you expand on this?
I hope I don’t come across as thinking “the worst” about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.
What makes that comparison spring to mind? Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status. Everyone at SIAI has different beliefs about the relative merits of different strategies for successful FAI development. That isn’t a good thing—fractured strategy is never good—but it is evidence against cultishness. SIAI grounds its predictions in clear and careful epistemology. SIAI publishes in academic journals, attends scientific conferences, and hosts the Singularity Summit, where tons of prominent high status folk show up to speak about Singularity-related issues. Why is cult your choice of reference class? It is no more a cult than a typical global warming awareness organization. It’s just that ‘science fiction’ is a low status literary genre in modern liberal society.
Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.
I don’t know about anybody else, but I am somewhat disturbed by Eliezer’s persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI’s image.
And I say the same about his use of double spacing. It’s an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are 2 other SIAI folk that engage in this abysmal writing style, and for that reason I’ve often been tempted to write them off entirely. They’re obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.
Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.
Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer’s mistakes could destroy the world (for example).
Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I’m confused as to how this is relevant to the merit of SIAI’s purpose.
To recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk—in order for them to be able to save it. The SIAI face some resistance over this point, and these days, much of their output is oriented towards convincing others that these may be the end days. Also there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter.
You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult—but it isn’t a terribly convincing act.
You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult—but it isn’t a terribly convincing act.
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.
Hmuh, I guess we won’t be able to make progress, ’cuz I pretty much wholeheartedly agree with Vladimir when he says:
This whole “outside view” methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).
and Nick Tarleton when he says:
We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.
all of their predictions of the end of the world were complete failures.
If they weren’t, we wouldn’t be here to see the failure.
It therefore seems to me that using this to “disprove” an end-of-the-world claim makes as much sense as someone trying to support a theory by saying, “They laughed at Galileo, too!”
IOW, you are simply placing the prediction in a certain outside-view class, without any particular justification. You could just as easily put SIAI claims in the class of “predictions of disaster that were averted by hard work”, and with equal justification. (i.e., none that you’ve given!)
[Note: this comment is neither pro-SIAI nor anti-SIAI, nor any comment on the probability of their claims being in any particular class. I’m merely anti-arguments-that-are-information-free. ;-) ]
The argument is not information free. It is just lower on information than implied. If people had never previously made predictions of disaster and everything else was equal then that tells us a different thing than if humans predicted disaster every day. This is even after considering selection effects. I believe this applies somewhat even considering the possibility of dust.
Uh, it wasn’t given as an “argument” in the first place. Evidence which does more strongly relate to p(DOOM) includes the extent to which we look back and see the ashes of previous failed technological civilisations, and past major mishaps. I go into all this in my DOOM video.
No, wait, there’s still something I just don’t understand. In a lot of your comments it seems you do a good job of analyzing the responses of ‘normal people’ to existential risks: they’re really more interested in lipstick, food, and sex, et cetera. And I’m with you there, evolution hasn’t hardwired us with a ‘care about low probabilities of catastrophe’ desire; the problem wasn’t really relevant in the EEA, relatively speaking.
But then it seems like you turn around and do this weird ‘ought-from-is’ operation from evolution and ‘normal people’ to how you should engage in epistemic rationality, and that’s where I completely lose you. It’s like you’re using two separate but to me equally crazy ought-from-is heuristics. The first goes like ‘Evolution didn’t hard code me with a desire to save the world, I guess I don’t actually really want to save the world then.’ And the second one is weirder and goes more like ‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.
It ends up looking like you’re using some sort of insane bizarre sister of the outside view that no one can relate with.
It’s like you’re perfectly describing the errors in most peoples’ thinking but then at the end right when you should say “Haha, those fools”, you instead completely swerve and endorse the errors, then righteously champion them for (evolutionary psychological?) reasons no one can understand.
“‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.”
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Perhaps it is to do with the idea that not caring about THE END OF THE WORLD is normally a rational action for a typical gene-propagating agent.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
If p(DOOM) gets really large, the correct strategy might change. If it turns into a collective action problem with punishment for free riders, the correct strategy might change. However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem. Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Looking back I think I read more into your comments than was really there; I apologize.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor quality cognition, it is often the correct thing to do for an agent with those aims.
I agree here. The debate is over whether or not the current situation is normal.
However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem.
Tentatively agreed. Normally, even if nanotech’s gonna kill everyone, you’re not able to do much about it anyway. But I’m not sure why you bring up “Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.” when most people aren’t at all trying to optimize the amount of copies of their genes in the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
Generally this is true, especially before science was around to make such meme pushing low status. But it’s also very true of global warming paranoia, which is high status even among intellectuals for some reason. (I should probably try to figure out why.) I readily admit that certain values of outside view will jump from that to ‘and so all possible DOOM-pushing groups are just trying to signal altruism or swindle people’—but rationality should help you win, and a sufficiently good rationalist should trust themselves to try and beat the outside view here.
So maybe instead of saying ‘poor epistemology’ I should say ‘odd emphasis on outside view when generally people trust their epistemology better than that beyond a certain point of perceived rationality in themselves’.
The primary thing I find objectionable about your commenting on this subject is the persistent violation of ordinary LW etiquette, e.g. by REPEATEDLY SHOUTING IN ALL CAPS and using ad hominem insults, e.g. “groupies.”
I’m sorry to hear about your issues with my writing style :-(
I have been consistently capitalising DOOM—and a few related terms—for quite a while. I believe these terms deserve special treatment—in accordance with how important everybody says they are—and ALL-CAPS is the most portable form of emphasis across multiple sites and environments. For the intended pronunciation of phrases like DOOM, SOON, see my DOOM video. It is not shouting. I rate the effect as having net positive value in the context of the intended message—and will put up with your gripes about it.
As for “groupies”—that does seem like an apt term to me. There is the charismatic leader—and then there is his fan base—which seems to have a substantial element of young lads. Few other terms pin down the intended meaning as neatly. I suppose I could have said “young fan base”—if I was trying harder to avoid the possibility of causing offense. Alas, I am poorly motivated to bother with such things. Most of the “insiders” are probably going to hate me anyway—because of my message—and the “us” and “them” tribal mentality.
Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad hominem insults? His emphasis extended to whole paragraphs—and his insults were considerably more personal—as I recall. Or am I getting special treatment?
I have been consistently capitalising DOOM—and a few related terms—for quite a while. I believe these terms deserve special treatment—in accordance with how important everybody says they are—and all-caps is the most portable form of emphasis across multiple sites and environments.
May I suggest as a matter of style that “Doom” more accurately represents your intended meaning of specific treatment and usage as a noun that isn’t just a description? Since ALL CAPS has the interpretation of mere shouting, you fail to communicate your meaning effectively if you use all caps instead of Title Case in this instance. Consider ‘End Of The World’ as a superior option.
Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad hominem insults? His emphasis extended to whole paragraphs—and his insults were considerably more personal—as I recall. Or am I getting special treatment?
Let’s be honest. If we’re going to consider that incident an admissible tu quoque against any Yudkowskian, then we could justify just about any instance of obnoxious social behaviour thereby. I didn’t object to your comments here simply because I didn’t consider them out of line on their own merits. The fact that Eliezer acted like a douche wouldn’t keep me from criticising actual bad behaviour.
Mind you, I am not CarlShulman, and the relevance of hypocrisy to Carl’s attempt at a status slap is far greater than if it were an attempt by me. Even so, you could replace “Or am I getting special treatment?” with “Or are you giving me special treatment?” and so reduce the extent to which you signal that it is OK to alienate or marginalise you.
Title Caps would be good too—though “DOOM” fairly often appears at the start of a sentence—and there it would be completely invisible. “Doom” is milder. Maybe “DOOM” is too much—but I can live with it. After all, this is THE END OF THE WORLD we are talking about!!! That is pretty ###### important!!!
If you check with the THE END IS NIGH placards, they are practically all in ALL CAPS. I figure those folk are the experts in this area—and that by following their traditions, I am utilizing their ancient knowledge and wisdom on the topic of how best to get this critical message out.
A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...
A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...
Or utterly ignored because people think you’re being a tool. One or the other. (I note that this is an unfortunate outcome because, this kind of pointless contrariness aside, people would be more likely to acknowledge what seem to be valid points in your response to Carl. I don’t like seeing the conversational high ground go to those who haven’t particularly earned it in the context.)
Well, my CAPS are essentially a parody. If the jester capers in the same manner as the noble, there will often be some people who will think that he is dancing badly—and not understand what is going on.
You ignored the word ‘repetitive.’ As you say, you have a continuing policy of carelessness towards causing offense, i.e. rudeness. And no, I don’t think that the comment you mention was appropriate either (versus off-LW communication), but given that it was deleted I didn’t see reason to make a further post about it elsewhere. Here are some recent comment threads in which I called out Eliezer and others for ad hominem attacks.
...not as much as you ignored the words “consistently” and “for quite a while”.
I do say what I mean. For instance, right now you are causing me irritation—by apparently pointlessly wasting my time and trying to drag me into the gutter. On the one hand, thanks for bothering with feedback… …but on the other, please go away now, Carl—and try to find something more useful to do than bickering here with me.
I don’t think it’s that. I think it’s just annoyance at perceived persistently bad epistemology in people making the comparison over and over again as if each iteration presented novel predictions with which to constrain anticipation.
Everyone knows the analogy exists. It’s just a matter of looking at the details to see whether that has any bearing on whether or not SIAI is a useful organization.
You asked: “What makes that comparison spring to mind?” when I mentioned cults.
Hopefully, you now have your answer—for one thing, they are like an END OF THE WORLD cult—in that they use fear of THE END OF THE WORLD as a publicity and marketing tool.
Such marketing has a long tradition behind it—e.g see the Daisy Ad.
Tyler: If there really is “bad epistemology”, feel free to show where.
Nesov: Also, FOOM rhymes with DOOM. There!
And this response was upvoted … why? This is supposed to be a site where rational discourse is promoted, not a place like Pharyngula or talk.origins where folks who disagree with the local collective worldview get mocked by insiders who then congratulate each other on their cleverness.
I voted it up. It was short, neat, and made several points.
Probably the main claim is that the relationship between the SIAI and previous END OF THE WORLD outfits is a meaningless surface resemblance.
My take of the issue is that DOOM is—in part—a contagious mind-virus, with ancient roots—which certain “vulnerable” people are inclined to spread around—regardless of whether it makes much sense or not.
With the rise of modern DOOM “outfits”, we need to understand the sociological and memetic aspects of these things all the more:
Will we see more cases of “DOOM exploitation”—from those out to convert fear of the imminent end into power, wealth, fame or sex?
Will a paranoid society take steps to avoid the risks? Will it freeze like a rabbit in the headlights? Or will it result in more looting and rape cases?
What is the typical life trajectory of those who get involved with these outfits? Do they go on to become productive members of society? Or do they wind up having nightmares about THE END OF THE WORLD—while neglecting their interpersonal relationships and personal hygiene—unless their friends and family stage an “intervention”?
...and so on.
Rational agents should understand the extent to which they are infected by contagious mind viruses—that spread for their own benefit and without concern for the welfare of their hosts. DOOM definitely has the form of such a virus. The issue as I see it is: how much of the observed phenomenon of modern-day DOOM “outfits” does it explain?
To study this whole issue, previous doomsday cults seem like obvious and highly-relevant data points to me. In some cases their DOOM was evidently a complete fabrication. They provide pure examples of fake DOOM—exactly the type of material a sociologist would need to understand that aspect of the DOOM-mongering phenomenon.
I agree that it’s annoying when people are mocked for saying something they didn’t say. But Nesov was actually making an implicit argument here, not just having fun: he was pointing out that timtyler’s analogies tend to be surface-level and insubstantive. The kind of thing that I’ve seen on Pharyngula are instead unjustified ad hominem attacks that don’t shed any light on possible flaws in the poster’s arguments. That said, I think Nesov’s comment was flirting with the line.
“Way past that” meaning “so exasperated with Tim that rational discourse seems just not worth it”? Hey, I can sympathize. Been there, done that.
But still, it annoys me when people are attacked by mocking something that they didn’t say, but that their caricature should have said (in a more amusing branch of reality).
It annoys me more when that behavior is applauded.
And it strikes me as deeply ironic when it happens here.
But still, it annoys me when people are attacked by mocking something that they didn’t say, but that their caricature should have said (in a more amusing branch of reality)
That’s very neatly put.
I’m not dead certain it’s a fair description of what Vladimir Nesov said, but it describes a lot of behavior I’ve seen. And there’s a parallel version about the branches of reality which allow for easier superiority and/or more outrage.
The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it’s not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant. My caricature simply followed the pattern.
Talking about obviousness as if it was inherent in a conclusion is typical mind projection fallacy. What it generally implies (and what I think you mean) is that any sufficiently rational person would see it; but when lots of people don’t see it, calling it obvious is against social convention (it’s claiming higher rationality and thus social status than your audience). In this case I think that to your average reader the analogies aren’t obviously irrelevant, even though I personally do find them obviously irrelevant.
When you’re trying to argue that something is the case (ie. that the analogies are irrelevant) the difference between what you are arguing being OBVIOUS and it merely being POSSIBLE is extremely vast.
You made a claim that they were obviously irrelevant.
The respondent expressed uncertainty as to their irrelevance (“They may be irrelevant.”) as opposed to the certainty in “The analogies are obvious.” and “They are not obviously irrelevant.”
That is a distinction between something being claimed as obvious and the same thing being seen as doubtful.
If you do not wish to explain a point there are many better options* than inaccurately calling it obvious. For example, linking to a previous explanation.
*in rationality terms. In argumentation terms, these techniques are often inferior to the technique of the emperor’s tailors
The error Tim makes time and again is finding shallow analogies between activity of people concerned with existential risk and doomsday cults, and loudly announcing them, lamenting that it’s not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant.
Uh, they are not “obviously irrelevant”. The SIAI behaves a bit like other DOOM-mongering organisations have done—and a bit like other FUD marketing organisations have done.
Understanding the level of vulnerability of the human psyche to the DOOM virus is a pretty critical part of assessing what level of paranoia about the topic is reasonable.
It is, in fact very easy to imagine how a bunch of intrepid “friendly folk” who think they are out to save the world—might—in the service of their cause—exaggerate the risks, in the hope of getting attention, help and funds.
Indeed, such an organisation is most likely to be founded by those who have extreme views about the risks, attract others who share similar extreme views, and then have a hard time convincing the rest of the world that they are, in fact, correct.
There are sociological and memetic explanations for the “THE END IS NIGH” phenomenon that are more-or-less independent of the actual value of p(DOOM). I think these should be studied more, and applied to this case—so that we can better see what is left over.
There has been some existing study of DOOM-mongering. There is also the associated Messiah complex—an intense desire to save others. With the rise of the modern doomsday “outfits”, I think more study of these phenomena is warranted.
Sometimes it is fear that is the mind-killer. FUD marketing exploits this to help part marks from their money. THE END OF THE WORLD is big and scary—a fear superstimulus—and there is a long tradition of using it to move power around and achieve personal ends—and the phenomenon spreads around virally.
I appreciate that this will probably turn the stomachs of the faithful—but without even exploring the issue, you can’t competently defend the community against such an analysis—because you don’t know to what extent it is true—because you haven’t even looked into it.
Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.
Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer’s mistakes could destroy the world (for example).
I didn’t say anyone was “racing to be first to establish their non-cult-victim status”—but it is certainly a curious image! [deleted parent comment was a dupe].
Tim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now?
I recognize that there are relevant differences, but merely pattern-matching an organization’s conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn’t sufficient reason to discount their rationality.
I also think you see yourself as trying to help SIAI see how they look to “average joe” potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI’s staff. If I’m right, you’re talking at cross-purposes.
I also think you see yourself as trying to help SIAI see how they look to “average joe” potential collaborators or contributors
Reforming the SIAI is a possibility—but not a terribly realistic one, IMO. So, my intended audience here is less that organisation, and more some of the individuals here who I share interests with.
Oh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it’s natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe’s.
This is an incredibly anti-name-calling community. People ascribe a lot of value to having “good” discussions (disagreement is common, but not adversarialism or ad hominems.) LW folks really don’t like being called a cult.
SIAI isn’t a cult, and Eliezer isn’t a cult leader, and I’m sure you know that your insinuations don’t correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations.
I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It’s odd. Holding yourself above the fray comes across as a little snobbish. There’s a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there’s therefore nothing to learn from them?
Why is giving money to x-risk charities conducive to saving the world?
I think the reasoning goes something like:
Existential risks are things that could destroy the world as we know it.
Existential risk charities work to reduce such risks.
Existential risk charities use donations to perform said task
Giving to x-risk charities is conducive to saving the world.
Before looking at evidence for or against the effectiveness of particular x-risk charities our prior expectation should be that people who dedicate themselves to doing something are more likely to contribute progress towards that goal than to sabotage it.
This is only true if the first-order effect of legalizing drugs (legality would encourage more people to take them) outweighs the second-order effects. Examples of second-order effects: the high price under prohibition encourages production and distribution, and illegality allows drugs to be used as signals of rebellion. Legalizing drugs would potentially put distribution in the hands of more responsible people. And so forth.
As the evidence-based altruism people have found, improving the world is a lot harder than it looks.
If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress.
I actually disagree with this statement outright. First of all, ignoring the existence of a specific piece of evidence is not the same as being wholly ignorant of the workings of evolution. Second, I think that the use or abuse of data (false or true) leading to the mistreatment of humans is a worse outcome than the ignorance of said data. Science isn’t a goal in and of itself—it’s a tool, a process invented for the betterment of humanity. It accomplishes that admirably, better than any other tool we’ve applied to the same problems. If the use of the tool, or in this case one particular end of the tool, causes harm, perhaps it’s better to use another end (a different area of science than genetics), or the same one in a different environment (in a time and place where racial inequality and bias are not so heated and widespread—our future, if we’re lucky). Otherwise, we’re making the purpose of the tool subservient to the use of the tool for its own sake—pounding nails into the coffee table.
Besides—anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important. If, as you posited, one had to choose. ;)
I have a couple other objections (really? sex discrimination is over? where was I?) but other people have covered them satisfactorily.
x-risk charities
New here; can I get a brief definition of this term? I’ve gotten the gist of what it means by following a couple of links, I just want to know where the x bit comes from. Didn’t find it on the site’s wiki or the internet at large.
Besides—anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important.
What do you have in mind?
I’m not sure what “what” would refer to here. I didn’t have an incident in mind; I’m just giving my impression of public perception (the first person gets called racist, and the second one gets called, well, normal, one hopes). It wasn’t meant to be taken very seriously.
Can you link to a Robin Hanson article on this topic so that people who aren’t already familiar with his opinions on this subject (read: LW newbies like me) know what this is about?
Or alternately, I propose this sequence:
regular medical care by default / alt-med / regular medical care because alt-med is unscientific
This is more in line with the other examples. I second the request for an edit. Yvain, you could add “Robin Hanson” to the fourth slot: it would kinda mess up your triplets, but with the justification that it’d be a funny example of just how awesomely contrarian Robin Hanson is. :D
Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples. But I have no idea how often the meta-level contrarian position is actually correct, and I fear that I might get too much of a kick out of the positions you list in your triplets simply because my position is more meta and I associate metaness with truth when in reality it might be negatively correlated. Perhaps you could think of a few more-wrong meta-contrarian positions to balance what may be a small affective bias?
Huh? In all of those examples the unmentioned fourth level is correct, and the second and third levels are both about equally useless.
Half-agree with you, as none of the 18 positions are ‘correct’, but I don’t know what you mean by ‘useless’. Instead of generalizing I’ll list my personal positions:
If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it’d result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the ‘let’s not promote racism’ dilemma. I’m not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.
No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I’m pretty ignorant here.
What can I say, it’s politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the ‘Yeah yeah Singularity AI transhumanism wooooo!’ meme would get bigger which would increase existential risk. So uh… never mind, I dunno.)
Too ignorant to comment. My oxycodone and antiobiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn’t help much with my acne. I’ve gotten a few small surgeries which made me better. Overall conventional medicine seems to have helped me a fair bit and costs me little. I don’t even know what Robin Hanson’s claims are, though. A link would be great.
Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it’s really position 4 (give aid to Earth) that’s the correct one, I dunno.
Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.
In conclusion, I think that there isn’t any real trend here, but maybe we’re just disputing ways of carving up usefulness? It is subjective after all.
Added: Explanations for downvotes are always welcome. Lately I’ve decided to try less to appear impressive and consistently rational (like Carl Shulman) and try more to throw lots of ideas around for critique, criticism, and development (like Michael Vassar). So although downvotes are useful indicators of where I might have gone wrong, a quick explanatory comment is even more useful and very unlikely to be responded to with indignation or hostility.
My comment was largely tongue in cheek, but:
Awareness of genetic differences between races constitutes negative knowledge in many cases; that is, it leads to anticipations that match the outcomes more badly than they would have otherwise. Suppose everyone suspects that blue-haired people are slightly less intelligent on average for genetic reasons, that you want to hire the most intelligent person for a job, and that after a very long selection process (that other people were involved in) you are left with two otherwise equally good candidates, one blue-haired and one not. The egoistically rational thing is not to pick the non-blue-haired person on account of that genetic difference: the other evidence on their intelligence is not independent of the genetic factors that correlate with blue hair, so any such genetic disadvantage is already figured in. If anything you should pick the blue-haired person, because extreme sample selection bias is likely, and any blue-haired person still left at the end of the selection process needed to be very intelligent to still be in the race. (So no, this isn’t a tragedy-of-the-commons situation.)
It’s pretty much never going to be the case that blue hair is your best information on someone’s intelligence; even their clothes or style of speech should usually be a better source.
Even for groups, “genetic differences” can be pretty misleading: tallness is a strongly heritable trait, and nevertheless differences in tallness can easily be dominated by environmental factors.
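To make the selection-bias point concrete, here is a minimal Python sketch (not from the original comment; the “blue-haired” group and every number in it are invented for illustration). Candidates pass a noisy screening bar, and we compare the average true ability of the finalists from each group, first with an equal bar and then with a higher, biased bar for the blue-haired group.

```python
import random

random.seed(0)

def mean_ability_of_finalists(group_mean, bar, n=300_000, noise=1.0):
    """Average true ability among candidates who clear a noisy screening bar."""
    survivors = []
    for _ in range(n):
        ability = random.gauss(group_mean, 1.0)      # true ability
        screen = ability + random.gauss(0.0, noise)  # noisy CV/interview signal
        if screen > bar:                             # only high scorers survive
            survivors.append(ability)
    return sum(survivors) / len(survivors)

# Hypothetical population gap: the "blue-haired" group averages 0.3 SD lower.
blue_equal  = mean_ability_of_finalists(-0.3, bar=2.5)
other       = mean_ability_of_finalists( 0.0, bar=2.5)
# If earlier screeners were biased and held blue-haired candidates to a higher
# bar, anyone still in the race had to be better to have survived at all.
blue_biased = mean_ability_of_finalists(-0.3, bar=3.0)

print(f"equal bar:  blue finalists {blue_equal:.2f} vs others {other:.2f}")
print(f"biased bar: blue finalists {blue_biased:.2f} vs others {other:.2f}")
```

In this toy model the 0.3-SD population gap shrinks among the equal-bar finalists (though not all the way to zero), and it reverses once the blue-haired candidates face a stiffer bar, which is the sense in which surviving the process is itself strong evidence about the individual.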
Depends on what is meant by the women’s and men’s rights movements, really. The fact that men are treated unfairly on some issues does not mean that we have overshot in treating women fairly; weighing these off against each other is not productive, and everyone should be treated fairly irrespective of gender and other factors. But since unfair treatment due to gender still exists, tracking how treatment varies by gender may still be necessary, though differences in outcome don’t automatically imply unfairness, only that it’s a hypothesis that deserves to be considered.
(Not mentioning the tragedy of the commons, since non-crazy libertarians usually agree that some level of government is necessary for those.) Government competence vs. private-sector competence is a function of organization size, productive selective pressures, culture, etc., and even though the private sector has some natural advantages, it doesn’t dominate universally, particularly where functioning markets are difficult to set up (e.g. high-speed railway lines). Regulation may be necessary to break out of some Nash equilibria, and to overcome momentum in some cases (e.g. thermal insulation in building codes, though there should be ways to receive exemptions when sensible). I also don’t see some level of wealth redistribution as inherently evil.
http://hanson.gmu.edu/EC496/Sources/sources.html
Basically: no evidence that marginal health spending improves health, and some evidence against, so cut US health spending in half. IMO the most sensible approach would be single-payer universal health care for everything that is known to be effective, while allowing people to purchase anything safe beyond that.
I understood “don’t give aid to Africa” as “don’t give aid to Africa because it’s counterproductive”, which depends on the type of giving, so I would read your position as position 4.
Ok, useless is the wrong word here for position 2, but position 4 would be that it shouldn’t even matter whether he is a Muslim, because there is nothing wrong with being a Muslim in the first place (other than being a theist).
But… but… but saving the world doesn’t signal the same affiliations as saving Africa!
On LW, it signals better affiliations!
My impression is that Hanson’s take on conventional medicine is that half the money spent is wasted. However, I don’t know if he’s been very specific about which half.
The RAND Health Insurance Experiment, the study he frequently cites, didn’t investigate the benefits of catastrophic medical insurance or of care people pay for out of their own pockets, and found the rest useless.
Why is giving money to x-risk charities conducive to saving the world? (I don’t necessarily disagree, but want to see what you have to say to substantiate your claim.) In particular, what’s your response to Holden’s comment #12 at the GiveWell Singularity Summit thread?
Sorry, I didn’t mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I’m going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I’ve seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren’t up to snuff when it comes to actually saving the world (in the case of SIAI perhaps even being actively harmful). Despite, and because of, being involved with SIAI, I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years when more academics start speaking out publicly about Singularity issues, which will only happen if SIAI stays around, gets funding, keeps on writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the endeavor of building FAI in earnest, after Eliezer writes his book on rationality (which will also likely almost totally outshine any of his previous PR mistakes). It’s difficult to hire the very best FAI researchers without money, and SIAI doesn’t have money without donations.
Now, perhaps you are skeptical that FAI or even AGI could be developed by a team of the most brilliant AI researchers within the next, say, 20 years. That skepticism is merited, and to be honest I have little to go on (though still a non-trivial amount) besides the subjective impressions of those who work on the problem. I do however have strong arguments that there is a ticking clock till AGI, with the clock ringing before 2050. I can’t give those arguments here, and indeed it would be against protocol to do so, as this is Less Wrong and not SIAI’s forum (despite it being unfortunately treated as such a few times in the past). Hopefully at some point someone, at SIAI or not, will write up such an analysis: currently Steve Rayhawk and Peter de Blanc of SIAI are doing a literature search that will with luck end up in a paper on the current state of AGI development, or at least some kind of analysis besides “Trust us, we’re very rational”.
All that said, my impression is that SIAI is doing good of the kind that completely outweighs e.g. aid to Africa if you’re using any kind of utilitarian calculus. And if you’re not using anything like utilitarian calculus, then why are you giving aid to Africa and not e.g. kittens? FHI also seems to be doing good, academically respectable, and necessary research on a rather limited budget. So if you’re going to donate money, I would first vote SIAI, and then FHI, but I can understand the position of “I’m going to hold onto my money until I have a better picture of what’s really important and who the big players are.” I can’t, however, understand the position of those who would give aid to Africa besides assuming some sort of irrationality or ignorance. But I will read over your post on the matter and see if anything there changes my mind.
Reasonable response, upvoted :-).
•As I said, I cut my planned sequence of postings on SIAI short. There’s more that I would have liked to say and more that I hope to say in the future. For now I’m focusing on finishing my thesis.
•An important point that did not come across in my postings is that I’m skeptical of philanthropic projects having a positive impact on what they’re trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell’s page on Social Programs That Just Don’t Work. At present I think that it’s a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.
See the comment here by prase which I agree with.
•On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods. Consideration of counterfactuals is very tricky and very smart people often get it wrong.
•Quite possibly SIAI is having a positive holistic impact; I don’t have confidence that this is so, simply because I don’t have enough information to judge from the outside.
•Regarding the time line for AGI and the feasibility of FAI research, see my back and forth with Tim Tyler here.
•My thinking as to what the most important causes to focus at present are is very much in flux. I welcome any information that you or others can point me to.
•My reasons for supporting developing world aid in particular at present are various and nuanced and I haven’t yet had the time to write out a detailed explanation that’s ready for public consumption. Feel free to PM me with your email address if you’d like to correspond.
Thanks again for your thoughtful response.
If you had a post on this specifically planned then I would be interested in reading it!
Is that what they are doing?!?
They seem to be funded by promoting the idea that DOOM is SOON—and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.
One might naively expect such an organisation would typically act so as to exaggerate the risks—so as to increase the flow of donations. That seems pretty consistent with their actions to me.
From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter—since they have glaringly-obvious vested interests.
/startrant
Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW, but that small CHANCE has a large enough impact in EXPECTED UTILITY that we should really figure out more about the problem, because someone, not necessarily SIAI, might have to deal with the problem EVENTUALLY.
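(For what it’s worth, the toy arithmetic behind that “small chance, huge stakes” point looks something like the sketch below; every number is a placeholder I’ve invented for illustration, not anyone’s actual estimate.)

```python
# Placeholder numbers only; not SIAI's or anyone else's actual estimates.
p_doom_this_century = 0.01      # assume a 1% chance of existential catastrophe
lives_at_stake = 7e9            # roughly the people alive today; ignores future generations
relative_risk_reduction = 0.1   # assume research/advocacy trims that risk by 10%

expected_lives_saved = lives_at_stake * p_doom_this_century * relative_risk_reduction
print(f"{expected_lives_saved:,.0f} expected lives saved")  # 7,000,000
```

Even after heavily discounting each input, the product stays large relative to the cost of looking into the problem, which is the structural point of the expected-utility argument.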
One might naively expect such an organization would typically act so as to exaggerate the risks—but SIAI doesn’t seem to be doing that so one’s naive expectations would be wrong. It’s amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read those? It’s not like it’d be a good idea to actually read an organization’s actual literary output before judging them based primarily on the perceived arrogance of one of their research fellows, that’d be stupid.
What vested interests? Money? Do you honestly think that the people at SIAI couldn’t get 5 times as much money by working elsewhere? Status? Do you honestly think that making a seemingly crazy far mode belief that pattern matches to doomsdayism part of your identity for little pay and lots of hard work is a good way of gaining status? Eliezer would take a large status hit if he admitted he was wrong about this whole seed AI thing. Michael Vassar would too. But everyone else? Really good thinkers like Anna Salamon and Carl Shulman and Steve Rayhawk who have proved here on Less Wrong that they have exceptionally strong rationality, and who are consistently more reasonable than they have any right to be? (Seriously, you could give Steve Rayhawk the most retarded argument ever and he’d find a way to turn it into a reasonable argument worth seriously addressing. These people take their epistemology seriously.)
Maybe people at SIAI are, you know, actually worried about the problems because they know how to take ideas seriously instead of using the absurdity heuristic and personal distaste for Eliezer and then rationalizing their easy beliefs with vague outside view reference class tennis games or stupid things like that.
I like reading Multifoliaterose’s posts. He raises interesting points, even if I think they’re generally unfair. I can tell that he’s at least using his brain. When most people criticize SIAI (really Eliezer, but it’s easier to say SIAI ‘cuz it feels less personal), they don’t use any parts of their brain besides the ‘rationalize reason for not associating with low status group’ cognitive module.
timtyler, this comment isn’t really a direct reply to yours so much as a venting of general frustrations. But I get annoyed by the attitude of ‘haha let’s be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world’. Carl Shulman would never write a reply anything like the one I’ve written. Carl Shulman is always reasonable and charitable. And I know Carl Shulman works incredibly hard on being reasonable, and taking into account opposing viewpoints, and not letting his affiliation with SIAI cloud his thinking, and still doing lots of good, reasonable, solid work on explaining the problem of Friendliness to the academic sphere in reasonable, solid journal articles and conference papers.
It’s really annoying to me to have that go completely ignored just because someone wants to signal their oh-so-metacontrarian beliefs about SIAI. Use epistemic hygiene. Think before you signal. Don’t judge an entire organization’s merit off of stupid outside view comparisons without actually reading the material. Take the time to really update on the beliefs of longtime x-rationalists that have probably thought about this a lot more than you have. If you really think it through and still disagree, you should have stronger and more elegant counterarguments than things like “they have glaringly-obvious vested interests”. Yeah, as if that didn’t apply to anyone, especially anyone who thinks that we’re in great danger and should do something about it. They have pretty obvious vested interests in telling people about said danger. Great hypothesis there chap. Great way to rationalize your desire to signal and do what is easy and what appeals to your vanity. Care to list your true rejections?
And if you think that I am being uncharitable in my interpretation of your true motivations, then be sure to notice the symmetry.
/endrant
That was quite a rant!
I hope I don’t come across as thinking “the worst” about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.
Really? Really? You actually think the level of DOOM is cold realism—and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along—DOOM is an old school of marketing in the field.
You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn’t seem to matter much—the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower—to help it meet its aims.
FWIW, I don’t see what I am saying as particularly “contrarian”. A lot of people would be pretty sceptical about the end of the world being nigh—or the idea that a bug might take over the world—or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers—if that is what you mean.
Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation that wants your dollars to help it SAVE THE WORLD may not be your best move. The “follow the money” principle is simple—and often produces good results.
Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.
I’m confused. Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I’m confused as to how this is relevant to the merit of SIAI’s purpose. SIAI’s never claimed to be fundamentally opposed to having resources. Can you expand on this?
What makes that comparison spring to mind? Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status. Everyone at SIAI has different beliefs about the relative merits of different strategies for successful FAI development. That isn’t a good thing—fractured strategy is never good—but it is evidence against cultishness. SIAI grounds its predictions in clear and careful epistemology. SIAI publishes in academic journals, attends scientific conferences, and hosts the Singularity Summit, where tons of prominent high status folk show up to speak about Singularity-related issues. Why is cult your choice of reference class? It is no more a cult than a typical global warming awareness organization. It’s just that ‘science fiction’ is a low status literary genre in modern liberal society.
I don’t know about anybody else, but I am somewhat disturbed by Eliezer’s persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI’s image.
And I say the same about his use of double spacing. It’s an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are 2 other SIAI folk that engage in this abysmal writing style, and for that reason I’ve often been tempted to write them off entirely. They’re obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.
Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer’s mistakes could destroy the world (for example).
To recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk—in order for them to be able to save it. The SIAI face some resistance over this point, and these days, much of their output is oriented towards convincing others that these may be the end days. Also there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter.
You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult—but it isn’t a terribly convincing act.
For the connections, see here. For protesting too much, see You’re calling who a cult leader?
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.
Hmuh, I guess we won’t be able to make progress, ’cuz I pretty much wholeheartedly agree with Vladimir when he says:
and Nick Tarleton when he says:
“This one is right” for example. ;)
The groupies never seem to like the comparison with THE END OF THE WORLD cults. Maybe it is the “cult” business—or maybe it is because all of their predictions of the end of the world were complete failures.
If they weren’t, we wouldn’t be here to see the failure.
It therefore seems to me that using this to “disprove” an end-of-the-world claim makes as much sense as someone trying to support a theory by saying, “They laughed at Galileo, too!”
IOW, you are simply placing the prediction in a certain outside-view class, without any particular justification. You could just as easily put SIAI claims in the class of “predictions of disaster that were averted by hard work”, and with equal justification. (i.e., none that you’ve given!)
[Note: this comment is neither pro-SIAI nor anti-SIAI, nor any comment on the probability of their claims being in any particular class. I’m merely anti-arguments-that-are-information-free. ;-) ]
The argument is not information-free; it is just lower on information than implied. If people had never previously made predictions of disaster and everything else was equal, then that tells us a different thing than if humans predicted disaster every day. This is even after considering selection effects. I believe this applies somewhat even considering the possibility of dust.
Uh, it wasn’t given as an “argument” in the first place. Evidence which does more strongly relate to p(DOOM) includes the extent to which we look back and see the ashes of previous failed technological civilisations, and past major mishaps. I go into all this in my DOOM video.
No, wait, there’s still something I just don’t understand. In a lot of your comments it seems you do a good job of analyzing the responses of ‘normal people’ to existential risks: they’re really more interested in lipstick, food, and sex, et cetera. And I’m with you there, evolution hasn’t hardwired us with a ‘care about low probabilities of catastrophe’ desire; the problem wasn’t really relevant in the EEA, relatively speaking.
But then it seems like you turn around and do this weird ‘ought-from-is’ operation from evolution and ‘normal people’ to how you should engage in epistemic rationality, and that’s where I completely lose you. It’s like you’re using two separate but to me equally crazy ought-from-is heuristics. The first goes like ‘Evolution didn’t hard code me with a desire to save the world, I guess I don’t actually really want to save the world then.’ And the second one is weirder and goes more like ‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.
It ends up looking like you’re using some sort of insane bizarre sister of the outside view that no one can relate with.
It’s like you’re perfectly describing the errors in most peoples’ thinking but then at the end right when you should say “Haha, those fools”, you instead completely swerve and endorse the errors, then righteously champion them for (evolutionary psychological?) reasons no one can understand.
Can you help me understand?
“‘Oh, well, evolution didn’t directly code good epistemology into my brain, it just gave me this comparatively horrible analogical reasoning module; I guess I don’t really want good epistemic rationality then’.”
...looks like it bears very little resemblance to anything I have ever said. I don’t know where you are getting it from.
Perhaps it is to do with the idea that not caring about THE END OF THE WORLD is normally a rational action for a typical gene-propagating agent.
Such agents should normally be concerned with having more babies than their neighbours do—and should not indulge in much paranoia about THE END OF THE WORLD. That is not sticking with poor-quality cognition; it is often the correct thing to do for an agent with those aims.
If p(DOOM) gets really large, the correct strategy might change. If it turns into a collective action problem with punishment for free riders, the correct strategy might change. However, often THE END OF THE WORLD can be rationally perceived to be someone else’s problem. Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool.
The DOOM enthusiasts typically base their arguments on utilitarianism. A biologist’s perspective on that is that it is sometimes an attempt to signal unselfishness—albeit usually a rather unbelievable one—and sometimes an attempt to manipulate others into parting with their cash.
Looking back I think I read more into your comments than was really there; I apologize.
I agree here. The debate is over whether or not the current situation is normal.
Tentatively agreed. Normally, even if nanotech’s gonna kill everyone, you’re not able to do much about it anyway. But I’m not sure why you bring up “Expending resources fighting DOOM usually just means you get gradually squeezed out of the gene pool” when most people aren’t at all trying to optimize the number of copies of their genes in the gene pool.
Generally this is true, especially before science was around to make such meme pushing low status. But it’s also very true of global warming paranoia, which is high status even among intellectuals for some reason. (I should probably try to figure out why.) I readily admit that certain values of outside view will jump from that to ‘and so all possible DOOM-pushing groups are just trying to signal altruism or swindle people’—but rationality should help you win, and a sufficiently good rationalist should trust themselves to try and beat the outside view here.
So maybe instead of saying ‘poor epistemology’ I should say ‘odd emphasis on outside view when generally people trust their epistemology better than that beyond a certain point of perceived rationality in themselves’.
The primary thing I find objectionable about your commenting on this subject is the persistent violation of ordinary LW etiquette, e.g. by REPEATEDLY SHOUTING IN ALL CAPS and using ad hominem insults, e.g. “groupies.”
I’m sorry to hear about your issues with my writing style :-(
I have been consistently capitalising DOOM—and a few related terms—for quite a while. I believe these terms deserve special treatment—in accordance with how important everybody says they are—and ALL-CAPS is the most portable form of emphasis across multiple sites and environments. For the intended pronunciation of phrases like DOOM, SOON, see my DOOM video. It is not shouting. I rate the effect as having net positive value in the context of the intended message—and will put up with your gripes about it.
As for “groupies”—that does seem like an apt term to me. There is the charismatic leader—and then there is his fan base—which seems to have a substantial element of young lads. Few other terms pin down the intended meaning as neatly. I suppose I could have said “young fan base”—if I was trying harder to avoid the possibility of causing offense. Alas, I am poorly motivated to bother with such things. Most of the “insiders” are probably going to hate me anyway—because of my message—and the “us” and “them” tribal mentality.
Did you similarly give Yudkowsky a public ticking-off when he recently delved into the realm of BOLD ALL CAPS combined with ad hominem insults? His emphasis extended to whole paragraphs—and his insults were considerably more personal—as I recall. Or am I getting special treatment?
May I suggest, as a matter of style, that “Doom” more accurately represents your intended meaning: special treatment, and usage as a noun that isn’t just a description? Since ALL CAPS reads as mere shouting, you fail to communicate your meaning effectively if you use all caps instead of Title Case in this instance. Consider “End Of The World” as a superior option.
Let’s be honest. If we’re going to consider that incident as an admissible tu quoque against any Yudkowskian, then we could justify just about any instance of obnoxious social behaviour thereby. I didn’t object to your comments here simply because I didn’t consider them out of line on their own merits. Eliezer acting like a douche wouldn’t stop me from criticising actual bad behaviour.
Mind you, I am not Carl Shulman, and the relevance of hypocrisy to Carl’s attempt at a status slap is far greater than if it were an attempt by me. Even so, you could replace “Or am I getting special treatment?” with “Or are you giving me special treatment?” and so reduce the extent to which you signal that it is ok to alienate or marginalise you.
Title Caps would be good too—though “DOOM” fairly often appears at the start of a sentence—and there it would be completely invisible. “Doom” is milder. Maybe “DOOM” is too much—but I can live with it. After all, this is THE END OF THE WORLD we are talking about!!! That is pretty ###### important!!!
If you check with the THE END IS NIGH placards, they are practically all in ALL CAPS. I figure those folk are the experts in this area—and that by following their traditions, I am utilizing their ancient knowledge and wisdom on the topic of how best to get this critical message out.
A little shouting may help ensure that the DOOM message reaches distant friends and loved ones...
Or utterly ignored because people think you’re being a tool. One or the other. (I note that this is an unfortunate outcome: without this kind of pointless contrariness, people would be more likely to acknowledge what seem to be valid points in your response to Carl. I don’t like seeing the conversational high ground go to those who haven’t particularly earned it in the context.)
Well, my CAPS are essentially a parody. If the jester capers in the same manner as the noble, there will often be some people who will think that he is dancing badly—and not understand what is going on.
There will be others who understand perfectly and think he’s doing a mediocre job of it.
You ignored the word ‘repetitive.’ As you say, you have a continuing policy of carelessness towards causing offense, i.e. rudeness. And no, I don’t think that the comment you mention was appropriate either (versus off-LW communication), but given that it was deleted I didn’t see reason to make a further post about it elsewhere. Here are some recent comment threads in which I called out Eliezer and others for ad hominem attacks.
...not as much as you ignored the words “consistently” and “for quite a while”.
I do say what I mean. For instance, right now you are causing me irritation—by apparently pointlessly wasting my time and trying to drag me into the gutter. On the one hand, thanks for bothering with feedback… …but on the other, please go away now, Carl—and try to find something more useful to do than bickering here with me.
I don’t think it’s that. I think it’s just annoyance at perceived persistently bad epistemology in people making the comparison over and over again as if each iteration presented novel predictions with which to constrain anticipation.
If there really is “bad epistemology”, feel free to show where.
There really is an analogy between the SIAI and various THE END OF THE WORLD cults—as I previously spelled out here.
You might like to insinuate that I am reading more into the analogy than it deserves—but basically, you don’t have any case there that I can detect.
Everyone knows the analogy exists. It’s just a matter of looking at the details to see whether that has any bearing on whether or not SIAI is a useful organization.
You asked: “What makes that comparison spring to mind?” when I mentioned cults.
Hopefully, you now have your answer—for one thing, they are like an END OF THE WORLD cult—in that they use fear of THE END OF THE WORLD as a publicity and marketing tool.
Such marketing has a long tradition behind it—e.g see the Daisy Ad.
Also, FOOM rhymes with DOOM. There!
And this response was upvoted … why? This is supposed to be a site where rational discourse is promoted, not a place like Pharyngula or talk.origins where folks who disagree with the local collective worldview get mocked by insiders who then congratulate each other on their cleverness.
I voted it up. It was short, neat, and made several points.
Probably the main claim is that the relationship between the SIAI and previous END OF THE WORLD outfits is a meaningless surface resemblance.
My take of the issue is that DOOM is—in part—a contagious mind-virus, with ancient roots—which certain “vulnerable” people are inclined to spread around—regardless of whether it makes much sense or not.
With the rise of modern DOOM “outfits”, we need to understand the sociological and memetic aspects of these things all the more:
Will we see more cases of “DOOM exploitation”—from those out to convert fear of the imminent end into power, wealth, fame or sex?
Will a paranoid society take steps to avoid the risks? Will it freeze like a rabbit in the headlights? Or will it result in more looting and rape cases?
What is the typical life trajectory of those who get involved with these outfits? Do they go on to become productive members of society? Or do they wind up having nightmares about THE END OF THE WORLD—while neglecting their interpersonal relationships and personal hygiene—unless their friends and family stage an “intervention”?
...and so on.
Rational agents should understand the extent to which they are infected by contagious mind viruses—that spread for their own benefit and without concern for the welfare of their hosts. DOOM definitely has the form of such a virus. The issue as I see it is: how much of the observed phenomenon of modern-day DOOM “outfits” does it explain?
To study this whole issue, previous doomsday cults seem like obvious and highly-relevant data points to me. In some cases their DOOM was evidently a complete fabrication. They provide pure examples of fake DOOM—exactly the type of material a sociologist would need to understand that aspect of the DOOM-mongering phenomenon.
I agree that it’s annoying when people are mocked for saying something they didn’t say. But Nesov was actually making an implicit argument here, not just having fun: he was pointing out that timtyler’s analogies tend to be surface-level and insubstantive. The kind of thing that I’ve seen on Pharyngula are instead unjustified ad hominem attacks that don’t shed any light on possible flaws in the poster’s arguments. That said, I think Nesov’s comment was flirting with the line.
In the case of Tim in particular, I’m way past that.
“Way past that” meaning “so exasperated with Tim that rational discourse seems just not worth it”? Hey, I can sympathize. Been there, done that.
But still, it annoys me when people are attacked by mocking something that they didn’t say, but that their caricature should have said (in a more amusing branch of reality).
It annoys me more when that behavior is applauded.
And it strikes me as deeply ironic when it happens here.
That’s very neatly put.
I’m not dead certain it’s a fair description of what Vladimir Nesov said, but it describes a lot of behavior I’ve seen. And there’s a parallel version about the branches of reality which allow for easier superiority and/or more outrage.
The error Tim makes time and again is finding shallow analogies between the activity of people concerned with existential risk and that of doomsday cults, loudly announcing them, and lamenting that it’s not proper that this important information is so rarely considered. Yet the analogies are obvious and obviously irrelevant. My caricature simply followed the pattern.
The analogies are obvious. They may be irrelevant. They are not obviously irrelevant.
Too fine a distinction to argue, wouldn’t you agree?
Talking about obviousness as if it was inherent in a conclusion is typical mind projection fallacy. What it generally implies (and what I think you mean) is that any sufficiently rational person would see it; but when lots of people don’t see it, calling it obvious is against social convention (it’s claiming higher rationality and thus social status than your audience). In this case I think that to your average reader the analogies aren’t obviously irrelevant, even though I personally do find them obviously irrelevant.
When you’re trying to argue that something is the case (i.e. that the analogies are irrelevant), the difference between what you are arguing being OBVIOUS and it merely being POSSIBLE is vast.
You seem to confuse the level of certainty with difficulty of discerning it.
You made a claim that they were obviously irrelevant.
The respondent expressed uncertainty as to their irrelevance (“They may be irrelevant.”), as opposed to the certainty in “The analogies are obvious.” and “They are not obviously irrelevant.”
That is a distinction between something being claimed as obvious and the same thing being seen as doubtful.
If you do not wish to explain a point there are many better options* than inaccurately calling it obvious. For example, linking to a previous explanation.
*in rationality terms. In argumentation terms, these techniques are often inferior to the technique of the emperor’s tailors
Uh, they are not “obviously irrelevant”. The SIAI behaves a bit like other DOOM-mongering organisations have done—and a bit like other FUD marketing organisations have done.
Understanding the level of vulnerability of the human psyche to the DOOM virus is a pretty critical part of assessing what level of paranoia about the topic is reasonable.
It is, in fact, very easy to imagine how a bunch of intrepid “friendly folk” who think they are out to save the world might—in the service of their cause—exaggerate the risks, in the hope of getting attention, help and funds.
Indeed, such an organisation is most likely to be founded by those who have extreme views about the risks, attract others who share similar extreme views, and then have a hard time convincing the rest of the world that they are, in fact, correct.
There are sociological and memetic explanations for the “THE END IS NIGH” phenomenon that are more-or-less independent of the actual value of p(DOOM). I think these should be studied more, and applied to this case—so that we can better see what is left over.
There has been some existing study of DOOM-mongering. There is also the associated Messiah complex—an intense desire to save others. With the rise of the modern doomsday “outfits”, I think more study of these phenomena is warranted.
Sometimes it is fear that is the mind-killer. FUD marketing exploits this to help part marks from their money. THE END OF THE WORLD is big and scary—a fear superstimulus—and there is a long tradition of using it to move power around and achieve personal ends—and the phenomena spreads around virally.
I appreciate that this will probably turn the stomachs of the faithful—but without even exploring the issue, you can’t competently defend the community against such an analysis—because you don’t know to what extent it is true—because you haven’t even looked into it.
Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer’s mistakes could destroy the world (for example).
I didn’t say anyone was “racing to be first to establish their non-cult-victim status”—but it is certainly a curious image! [deleted parent comment was a dupe].
Oops, connection troubles then missed.
Tim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now?
I recognize that there are relevant differences, but merely pattern-matching an organization’s conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn’t sufficient reason to discount their rationality.
Will said “meta-contrarian,” which refers to the recent “meta-contrarians are intellectual hipsters” post.
I also think you see yourself as trying to help SIAI see how they look to “average joe” potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI’s staff. If I’m right, you’re talking at cross-purposes.
Reforming the SIAI is a possibility—but not a terribly realistic one, IMO. So, my intended audience here is less that organisation, and more some of the individuals here who I share interests with.
Oh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it’s natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe’s.
This is an incredibly anti-name-calling community. People ascribe a lot of value to having “good” discussions (disagreement is common, but not adversarialism or ad hominems.) LW folks really don’t like being called a cult.