I don’t think this post was well written, to say the least. I didn’t even understand the tl;dr:
tl;dr: Is the SIAI evidence-based or merely following a certain philosophy? I’m currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.
I don’t see much precise expansion on this, except for MWI, and there’s a sequence on that.
And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW rests on a merely consistent internal logic, i.e. imagination or fiction, or on something sufficiently grounded in empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site.
Have you read the sequences?
As for why there aren’t more people supporting SIAI: first, it’s not widely known; second, it’s liable to be dismissed on first impressions. Not many have examined the SIAI. Also, only [4% of the general public in the US believe in neither a god nor a higher power](http://en.wikipedia.org/wiki/Religion#cite_ref-49). The majority isn’t always right.
I don’t understand why this post has upvotes. It was unclear, and its topics seem to have gone unresearched. The usefulness of donating to the SIAI has been discussed before; I think someone probably would’ve posted a link if this had been asked in the open thread.
I think the obvious answer to this is that there are a significant number of people out there, even in the LW community, who share XiXiDu’s doubts about some of SIAI’s premises and conclusions, but perhaps don’t speak up with their concerns either because a) they don’t know quite how to put them into words, or b) they are afraid of being ridiculed or looked down on.
Unfortunately, the tone of a lot of the responses to this thread leads me to believe that those motivated by the latter option may have been right to worry.
Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don’t actually think it’s any good.
Yeah, I agree (no offense, XiXiDu) that it probably could have been better written, cited more specific objections, etc. But the core sentiment is one that I think a lot of people share, and so it’s an important discussion to have. That’s why it’s so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, “low g-factor”) if they have trouble swallowing certain parts of the SIAI position.
This was exactly my impression, also.
I think your upvote probably backfired, because (I’m guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.
Then you should have written your own version of it. Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes. People like XiXiDu are ones I’ve learned to classify as noisemakers who suck up lots of attention but who never actually change their minds enough to start pitching in, no matter how much you argue with them. My perceptual system claims to be able to classify pretty quickly whether someone is really trying or not, and I have no concrete reason to doubt it.
I guess next time I’ll try to remember not to reply at all.
Everyone else, please stop upvoting posts that aren’t good. If you’re interested in the topic, write your own version of the question.
What are you considering as pitching in? That I’m donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?
You simply seem to take my post as a hostile attack rather than the inquiry of someone who happened not to be lucky enough to get a decent education in time.
All right, I’ll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I’m dealing with “the sort of thing said by someone who actually helps” and not “the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them”, which is how my perceptual system classified that post.
The Summit is coming up and I’ve got lots of stuff to do right at this minute, but I’ll top-comment my very quick attempt at pointing to information sources for replies.
What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. That is, I’m donating to the SIAI but also spending considerable resources on maximizing utility in the present.
It was actually in the post.
So you might suggest to your perceptual system to read the post first (at least before issuing a strong reply).
I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.)
Also, I don’t like when this internet website is overloaded with noise posts that don’t accomplish anything.
Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you’ve managed to accrue 890 karma. I think you’ve gotten a remarkably good reception so far.
I think we have different ideas of noise.
Though I would miss you as the LW mascot if you stopped adding this noise.
Depending on your expertise and assets, this site might provide some ways.
I’m pretty sure Clippy meant “make” in a very literal sense.
Yeah, I want to know how to either produce the notes that will be recognized as USD, or access the financial system in a way that I can believably tell it that I own a certain amount of USD. The latter method could involve root access to financial institutions.
All the other methods of getting USD are disproportionately hard.
I’ll donate again in the next few days and tell you the name and the amount. I don’t have much, but enough that you can see I’m not just making this up. Maybe you can also check the previous donation then.
As for the promoting, anyone can Google it. I link people to your stuff almost every day. And there are people here who added me on Facebook; if you check my info you’ll see that some of my favorite quotations are actually yours.
And why is it that on my homepage, if you check the sidebar, your homepage and the SIAI have been listed under favorite sites for many years now?
I’m the kind of person who has to be skeptical about everything, and if I’m bothered too much by questions I cannot resolve in time, I do stupid things. Maybe this post was stupid, I don’t know.
Sorry about this sounding impolite towards XiXiDu, but I’ll use this opportunity to note that it is a significant problem for SIAI that there are people out there like XiXiDu promoting SIAI even though they don’t understand SIAI much at all.
I don’t know what the best attitude is for minimizing the problem this creates: many people will first run into SIAI by hearing about it from people who don’t seem very clueful or intelligent. (That’s real Bayesian evidence for SIAI being a cult or just crazy, and many people then won’t acquire sufficient additional evidence to update out of the misleading first impression; not to mention that the bias of getting stuck in first impressions is very common anyway.)
Personally, I’ve adopted the habit of not even trying to talk about singularity stuff to new people who aren’t very bright. (Of course, if they become interested despite this, then they can’t just be completely ignored.)
I thought about that too. But many people outside this community take me, as they often state, to be intelligent and educated. And I mainly try to talk to people in academia. You might not believe it, but I am even able to make them think I’m one of them, up to the point of correcting errors in their calculations (it has happened). Many haven’t even heard of Bayesian inference, by the way...
The way I introduce people to this is not by telling them about the risks of AGI but rather by linking them to specific articles on lesswrong.com or telling them how the SIAI tries to develop ethical decision making, etc.
I grew up in a family of Jehovah’s Witnesses; I know how to start selling bullshit. Not that the SIAI is bullshit, but I’d never use words like ‘Singularity’ while promoting it to people I don’t know.
Many people already know about the transhumanist/singularity faction and think it is complete nonsense, so often I can only improve their opinion.
There are people teaching at university level who have told me I convinced them that he (EY) is to be taken seriously.
What you state is good evidence that you are not one of those too stupid people I was talking about (even though you have managed to not understand what SIAI is saying very well). Thanks for presenting the evidence, and correcting my suspicion that someone on your level of non-comprehension would usually end up doing more harm than good.
Although I personally don’t care much if I’m called stupid when I think it is justified, I doubt this attitude is very appealing to most people.
Where do you draw the line between being stupid and simply uneducated or uninformed?
...even though you have managed to not understand what SIAI is saying very well...
I’ve never read up on their program in the first place. When thinking about turning the comments the OP is based on into a top-level post, I pondered the title much longer than the rest of what I said, until I became too lazy and simply picked the SIAI as the punching bag to direct my questions at. I thought that would work well enough to stir some emotions. But in the end that was most of what it accomplished, rather than getting me any answers.
What I was really getting at was the attitude of many people here, especially regarding the posts related to the Roko deletion incident. I was struck by the apparent impact it had. Not only was it considered worth sacrificing freedom of speech over, but people, including some working for the SIAI, actually had nightmares and suffered psychological trauma. I think I understood the posts and comments, as some confirmed over private message after inquiring about my knowledge, but I couldn’t believe that something that far out would be considered sufficiently evidence-based to be worried about to such an extent.
But inquiring about that would have turned attention back to the relevant content. And after all, I wanted to find out whether such reactions are justified before deciding whether to spread the content anyway.
You admit you’ve never bothered to read up on what SIAI is about in the first place. Don’t be surprised if people don’t have the best possible attitude if despite this you want them to spend a significant amount of time explaining to you personally the very same content that is already available but you just haven’t bothered to read.
Might as well link again the one page that I recommend as the starting point in getting to know what it is exactly that SIAI argues:
http://singinst.org/riskintro/index.html
I also think it’s weird that you’ve actually donated money to SIAI, despite not having really looked into what it is about and how credible the arguments are. I personally happen to think that SIAI is very much worth supporting, but there doesn’t seem to be any way you could have known that before making your donations, and so it’s just luck that it actually wasn’t a weird cult that your way of making decisions led you to give money to.
(And part of the reason I’m being this blunt with you is that I’ve formed the impression that you won’t take it in a very negative way, in the way that many people would. And on a personal level, I actually like you, and think we’d probably get along very well if we were to meet IRL.)
I also think it’s weird that you’ve actually donated money to SIAI, despite not having really looked into what it is about and how credible the arguments are.
I actually have this crazy little conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists into letting him live off their donations.
Why do I donate despite that? I’ve also donated money to Peter Watts when he got into the claws of American justice, and to Wikipedia, TrueCrypt, the Khan Academy and many more organisations and people. Why? They make me happy. And there’s lots of cool stuff coming from EY, whether he’s a cult leader or not.
I’d probably be more excited if it turned out to be a cult, and donate even more. That would be hilarious. On the other hand, I suspect Scientology not to be a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I’d join.
On the other hand, I suspect Scientology not to be a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I’d join.
SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.
I am a regular Less Wrong commenter, but I’m making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abusing the legal system, and framing their targets for crimes they did not commit. When protests are conducted against Scientology, the organizers advise all attendees to wear masks for their own safety, and I believe they are right to do so.
If you reply to this comment or discuss Scientology anywhere on the internet, please protect your anonymity by using a throwaway account. To discourage people from being reckless, I will downvote any comment which mentions Scientology and which looks like it’s tied to a real identity.
I’d probably be more excited if it turned out to be a cult, and donate even more. That would be hilarious. On the other hand, I suspect Scientology not to be a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I’d join.
You sound more like a Discordian than a Singularitarian.
Not that there’s anything wrong with that.
I actually have this crazy little conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists into letting him live off their donations.
I had the same idea! It’s also interesting to consider whether some discriminating evidence could (realistically) exist in either direction.
I’m pretty sure there are easier ways to make a living off a charity than to invent a cause that’s nowhere near the mainstream and which is likely to be of interest to only a tiny minority.
Admittedly, doing it that way means you won’t have many competitors.....
The basic hypothesis is that AI theorising was already (one of) his main interest/s, and founding SIAI was the easiest path for him to be able to make a living doing the stuff he enjoys full-time.
Eliezer says that AI theorizing became as interesting to him as it has because it is the most effective way for him to help people. Having observed his career (mostly through the net) for ten years, I would assign a very high (.96) probability that the causality actually runs that way rather than his altruism’s being a rationalization for his interest in getting paid for AI theorizing.
Now as to the source of his altruism, I am much less confident, e.g., about which way he would choose if he found himself at a major decision point with large amounts of personal and global expected utility on the line where he had to choose between indelible widespread infamy or even total obscurity and helping people.
Not really useful as evidence against the mighty conspiracy theory, though: one would make identical statements to that effect whether he was honest, consciously deceiving, or anywhere in between.
Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn’t see any reason for him to do so outside of an innate preference for honesty?
Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn’t see any reason for him to do so outside of an innate preference for honesty?
How would that constitute evidence against the “mighty conspiracy theory”? Surely Eliezer could have foreseen that someone would ask this question sooner or later, and made some embarrassing / self-damaging admission just to cover himself.
Good point. I didn’t think much about the question, and it should have been obvious that the hypothesis of him simulating honesty is not strictly falsifiable by relying solely on his words.
Ok, new possibility for falsification: before SIAI was founded, a third party offered him a job in AI research that was just as interesting and brought at least as many assorted perks, but he refused because he genuinely thought FAI research was more important. Or for that matter any other scenario under which founding SIAI constituted a net sacrifice for Eliezer when not counting the benefit of potentially averting armageddon.
Quite a bit harder to produce, but that’s par for the course with Xanatos-style conspiracy theories.
Actually, I was responding to your “AI theorising was already (one of) his main interest/s”, not your larger point.
I consider the possibility that Eliezer has intentionally deceived his donors all along as so unlikely as to not be worth discussing.
ADDED. Re-reading the parent for a second time, I notice your “whether he was honest, consciously deceiving, or anywhere in between” (emphasis mine). So, since you (I now realize) probably were entertaining the possibility that he is “unconsciously deceiving” (i.e., has conveniently fooled himself), let me extend my reply.
People can be scrupulously honest in almost all matters, NihilCredo, and still deceive themselves about their motivations for doing something, so I humbly suggest that even though Eliezer has shown himself willing to issue an image-damaging public recantation when he discovers that something he has published is wrong, that is not nearly enough evidence to trust his public statements about his motivations.
What one does instead is look at his decisions. And even more you look at what he is able to stay motivated to do over a long period of time. Consider for example the two years he spent blogging about rationality. This is educational writing or communication and it is extremely good educational communication. No matter how smart the person is, he cannot communicate or teach that effectively without doing a heck of a lot of hard work. And IMO no human being can work that hard for two whole years voluntarily (i.e., without fear of losing something he needs or loves and already has) unless the person is deriving some sort of real human satisfaction from the work. (Even with a very strong “negative” motivation like fear, it is hard to work that hard for 2 years without making yourself sick, and E sure did not look or act sick when I chatted with him at a Sep 2009 meetup.) And this is where the explanation gets complicated, and I want to cut it short.
There are only so many kinds of real human motivation. Scientists of course are usually motivated by the pleasure of discovery, of extending their understanding of the world. Many, perhaps most, scientists are motivated by reputation, for the good opinion of other scientists or the public at large. I find it unlikely however that any combination of those 2 motivations would have been enough for any human being to perform the way E did during his 2 years of “educating through blogging”.
So, to summarize, I have some strong or firm reasons to believe that while he was writing those excellent blog posts, E regularly found pleasure and consequently found motivation in the idea of producing understanding in his readers, and this pleasure is an example of a “friendly impulse” or “altruistic desire” in E (part of the implementation in the human mind of the human capacity for what the evolutionary psychologists call reciprocal altruism).
And I know enough psychology to know that if E is capable of being motivated to extremely hard work by “the friendly impulse” when he started his blogging at age 27, then he was also capable of being motivated in his daydreams and in his career planning by “the friendly impulse” when he was a teenager (which is when he says he saw that AI research is the best way to help people and when he began his interest in AI theorizing). (It is rare for a person to be able to learn (even if they really want to) how to find pleasure (and consequently long-term motivation) from altruism / friendliness if they lacked the capacity in their teens like I did.)
Now I am not saying that E does not derive a lot of pleasure from scientific theorizing (most scientists of his caliber do), but I am saying that I believe his statements that the reason that most of his theorizing is about AI rather than string theory or population genetics is what he says it is.
This is all very condensed, and it relies on beliefs of mine that are definitely not settled science (e.g., the belief that the only way a person ever voluntarily works as hard as E must have for 2 years is if they find pleasure in the work), but it does explain just a little of the basis for the probability assignment I made in the grandparent.
Definitely an interesting comment. Thanks.
I don’t think I find your psychological argument very relevant here. The conspiracy theory allows (indeed, it makes a cardinal assumption) that Eliezer loves doing what he does, i.e. discussing and spreading ideas about rationality and theorising about AI and futurology; the only proposed dissonance between his statements and his findings would be that he is (whether intentionally or not, see below) overblowing the danger of a near-omnipotent unfriendly AI. And of course, people can be untruthful in one field and still be highly altruistic in a hundred others.
Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job. While, if only because of his passion for rationality, it would still be interesting if Eliezer were suffering from such a dramatic bias (and it would be downright hilarious if he were truly pulling a fast one), the more such a bias is unconscious and hard to spot, the closer it comes to being an honest mistake rather than negligence; and it’s not particularly interesting or amusing that someone could have made an honest mistake.
Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job.
Yes, I am a little embarrassed that I took the thread on such a sharp and lengthy tangent. I don’t have time to move my comment though.
Oh, I wouldn’t worry. To paraphrase something I once read about HP&MoR, overthinking stuff is pretty much the point of this site.
I can remember several such instances, and I haven’t been following things for as long as rhollerith. There are even a few of them in top-level posts.
Wow. That’s impressive. I think XiXiDu should get some bonus karma points for pulling that off.
Eliezer seems to have run your post through some crude heuristic and incorrectly categorized it. While you did make certain errors that many people have observed, I think you deserved a different response.
At least, Eliezer seemingly not realizing that you are a donor means that his treatment of you doesn’t represent how he treats donors.
Edit: To his credit, Eliezer apologized and admitted to his perceptual misclassification.
It has seemed to me for a while that a number of people will upvote any post that goes against the LW ‘consensus’ position on cryonics/Singularity/Friendliness, so long as it’s not laughably badly written.
I don’t think anything Eliezer can say will change that trend, for obvious reasons.
However, most of us could do better at downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won’t drop below 0 for any length of time, and even then not very far. Docking someone’s karma isn’t going to kill them, folks. Do everyone a favor and use those downvotes.
My post is neither badly argued nor fatally flawed, as I’ve mainly been asking questions rather than making arguments. But if you think otherwise, why don’t you point out where I am fatally flawed?
My post was not written to speak out against any ‘consensus’. I agree with the primary conclusions, but I am skeptical about further chains of reasoning built on those conclusions, as I don’t perceive them to rest on firm ground but merely on what follows from previous evidence.
And yes, I’m a lazy bum. I haven’t thought about the OP for more than 10 minutes. It’s actually copy-and-paste work from previous comments. Hell, what did you expect? A dissertation? Nobody else was asking those questions; someone had to.
Then you should have written your own version of it.
I find it difficult to write stuff I don’t believe.
Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes.
Noted.
I upvoted the original post for:
Stimulating critical discussion of the Less Wrong community—specifically: the beliefs almost unanimously shared, and the negativity towards criticism; as someone who has found Less Wrong extremely helpful, and would hate to see it descend into groupthink and affiliation signalling.
A question to those who dismiss the OP as merely “noise”: what do you make of the nature of this post?
Stimulating critical discussion of the operating premises of the SIAI; as someone who is considering donating and otherwise contributing. This additionally provides elucidation to those in a state of epistemic limbo regarding the various aspects of FAI and the Singularity.
I am reminded of this passage regarding online communities (source):
So there’s this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is: This is good and must be protected. And at that moment, even if it’s subconscious, you start getting group effects. And the effects that we’ve seen come up over and over and over again in online communities...
The first is sex talk, what he called, in his mid-century prose, “A group met for pairing off.” And what that means is, the group conceives of its purpose as the hosting of flirtatious or salacious talk or emotions passing between pairs of members...
The second basic pattern that Bion detailed: The identification and vilification of external enemies. This is a very common pattern. Anyone who was around the Open Source movement in the mid-Nineties could see this all the time...
The third pattern Bion identified: Religious veneration. The nomination and worship of a religious icon or a set of religious tenets. The religious pattern is, essentially, we have nominated something that’s beyond critique. You can see this pattern on the Internet any day you like...
So these are human patterns that have shown up on the Internet, not because of the software, but because it’s being used by humans. Bion has identified this possibility of groups sandbagging their sophisticated goals with these basic urges. And what he finally came to, in analyzing this tension, is that group structure is necessary. Robert’s Rules of Order are necessary. Constitutions are necessary. Norms, rituals, laws, the whole list of ways that we say, out of the universe of possible behaviors, we’re going to draw a relatively small circle around the acceptable ones.
He said the group structure is necessary to defend the group from itself. Group structure exists to keep a group on target, on track, on message, on charter, whatever. To keep a group focused on its own sophisticated goals and to keep a group from sliding into these basic patterns. Group structure defends the group from the action of its own members.
As someone who thought the OP was of poor quality, and who has had a very high opinion of SIAI and EY for a long time (and still does), I’ll say that the “Eliezer Yudkowsky facts” post was indeed a lot worse. It was the most embarrassing thing I’ve ever read on this site. Most of those jokes aren’t even good.
“Eliezer Yudkowsky facts” is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn’t. If those jokes were not posted, then something of real value would have been lost.
The situation with XiXiDu’s post is different because almost everyone seems to agree that it’s bad, and those who voted it up did so only to “stimulate discussion”. But if they didn’t vote up XiXiDu’s post, it’s quite likely that someone would eventually write up a better post asking similar questions and generating a higher-quality discussion, so the outcome would likely be a net improvement. Or alternatively, those who wanted to “stimulate discussion” could have just looked in the LW archives and found all the discussion they could ever hope for.
If almost everyone thought it was bad, I would expect it to have many more downvotes than upvotes, even given the few people who voted it up to “stimulate discussion”. But you probably know more about statistics than I do, so never mind.
...it’s quite likely that someone would eventually write up a better post asking similar questions.
Before or after the SIAI builds an FAI? I’ve waited half a decade for any of those questions to be asked in the first place.
Or alternatively, those who wanted to “stimulate discussion” could have just looked in the LW archives and found all the discussion they could ever hope for.
Right, I hadn’t thought about that! I’ll be right back after reading a few thousand comments to find some transparency.
Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes?
This is true. You might also be able to think of jokes that aren’t worth making even though a group of people would find them genuinely funny.
I agree with Aleksei about the Facts article.
Can you please explain why you think those jokes shouldn’t have been made? I thought that making fun of authority figures is socially accepted in general, and in this case shows that we don’t take Eliezer too seriously. Do you disagree?
You seemed to seriously imply that Eliezer didn’t understand that the “facts” thread was a joke, while actually he was sarcastically joking by hinting at not getting the joke in the comment you replied to. I downvoted the comment to punish stupidity on LW (nothing personal, believe it or not; in other words, it’s a one-step decision based on the comment alone and not on the impression made by your other comments). Wei didn’t talk about that.
Making him the subject of a list like that looks plenty serious to me.
Beyond that, I don’t think there’s much that I can say. There’s a certain tone-deafness that’s rubbing me wrong in both the post and in this discussion, but exactly how that works is not something that I know how to convey with a couple of paragraphs of text.
Ok, I think I have an explanation for what’s going on here. Those of us “old hands” who went through the period where LW was OB, and Eliezer and Robin were the only main posters, saw Eliezer as initially having very high status, and considered the “facts” post as a fun way of taking him down a notch or two. Newcomers who arrived after LW became a community blog, on the other hand, don’t have the initial high status in mind, and instead see that post as itself assigning Eliezer a very high status, which they see as unjustified/weird/embarrassing. Makes sense, right?
(Voted parent up from −1, btw. That kind of report seems useful, even if the commenter couldn’t explain why he felt that way.)
I have a theory: all the jokes parse out to “Eliezer is brilliant, and we have a bunch of esoteric in-jokes to show how smart we are”. This isn’t making fun of an authority figure.
This doesn’t mean the article was a bad idea, or that I didn’t think it was funny. I also don’t think it’s strong evidence that LW and SIAI aren’t cults.
ETA: XiXiDu’s comment that this is the community making fun of itself seems correct.
Fact: Evaluating humor about Eliezer Yudkowsky always results in an interplay between levels of meta-humor such that the analysis itself is funny precisely when the original joke isn’t.
I was embarrassed by most of the facts. The one about my holding up a blank sheet of paper and saying “a blank map does not correspond to a blank territory” and thus creating the universe is one I still tell at parties.
That post was meant as playful mockery; it was a joke. It was not meant as a hostile attack. I’ve no idea how you and Aleksei came to these conclusions about something many people thought was really funny, even outside of the community. That post actually helped to loosen the very stern sentiment of some people regarding you personally and the SIAI.
“Hey, those people are actually able to make fun of themselves, maybe they are not a cult after all...”
I should quit now and stop participating on LW for some time. I have to continue with my studies. I was only drawn here by the deletion incident. Replies, and the fact that it is fun to argue, have made me babble too much in the past few days.
Wow, I thought it was one of the best. Through that post I actually got a philosopher (who teaches in Sweden), who had been skeptical about EY, to read up on the MWI sequence and afterwards agree that EY is right.
I like that post—of course, few of the jokes are funny, but you read such a thing for the few gems they do contain. I think of it as hanging a lampshade (warning, TV tropes) on one of the problems with this website.
I don’t think this post was well-written, at the least. I didn’t even understand the tl;dr?
I don’t see much precise expansion on this, except for MWI? There’s a sequence on it.
Have you read the sequences?
As for why there aren’t more people supporting SIAI, first of all, it’s not widely known, second of all, it’s liable to be dismissed on first impressions. Not many have examined the SIAI. Also, only (http://en.wikipedia.org/wiki/Religion#cite_ref-49)[4% of the general public in the US believe in neither a god nor a higher power]. The majority isn’t always right.
I don’t understand why this post has upvotes. It was unclear and seems topics went unresearched. The usefulness of donating to the SIAI has been discussed before, I think someone probably would’ve posted a link if asked in the open thread.
I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu’s doubts about some of SIAIs premises and conclusions, but perhaps don’t speak up with their concerns either because a) they don’t know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.
Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.
Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don’t actually think it’s any good.
Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it’s therefore an important discussion to have. That’s why it’s so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, “low g-factor”) if they have trouble swallowing certain parts of the SIAI position.
This was exactly my impression, also.
I think your upvote probably backfired, because (I’m guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.
Then you should have written your own version of it. Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes. People like XiXiDu are ones I’ve learned to classify as noisemakers who suck up lots of attention but who never actually change their minds enough to start pitching in, no matter how much you argue with them. My perceptual system claims to be able to classify pretty quickly whether someone is really trying or not, and I have no concrete reason to doubt it.
I guess next time I’ll try to remember not to reply at all.
Everyone else, please stop upvoting posts that aren’t good. If you’re interested in the topic, write your own version of the question.
What are you considering as pitching in? That I’m donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?
You simply seem to take my post as hostile attack rather than the inquiring of someone who happened not to be lucky enough to get a decent education in time.
All right, I’ll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I’m dealing with “the sort of thing said by someone who actually helps” and not “the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them”, which is how my perceptual system classified that post.
The Summit is coming up and I’ve got lots of stuff to do right at this minute, but I’ll top-comment my very quick attempt at pointing to information sources for replies.
It was actually in the post
So you might suggest to your perceptual system to read the post first (at least before issuing a strong reply).
I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.)
Also, I don’t like when this internet website is overloaded with noise posts that don’t accomplish anything.
Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you’ve managed to accrue 890 karma. I think you’ve gotten a remarkably good reception so far.
I think we have different ideas of noise
Though I would miss you as the LW mascot if you stopped adding this noise.
Depending on your expertise and assets, this site might provide some ways.
I’m pretty sure Clippy meant “make” in a very literal sense.
Yeah, I want to know how to either produce the notes that will be recognized as USD, or access the financial system in a way that I can believably tell it that I own a certain amount of USD. The latter method could involve root access to financial institutions.
All the other methods of getting USD are disproportionately hard (_/
I’ll donate again in the next few days and tell you what name and the amount. I don’t have much, but so that you see that I’m not just making this up. Maybe you can also check the previous donation then.
And for the promoting, everyone can Google it. I link people up to your stuff almost every day. And there are people here who added me to Facebook and if you check my info you’ll see that some of my favorite quotations are actually yours.
And how come that on my homepage, if you check the sidebar, your homepage and the SIAI are listed under favorite sites, for many years now?
I’m the kind of person who has to be skeptic about everything and if I’m bothered too much by questions I cannot resolve in time I do stupid things. Maybe this post was stupid, I don’t know.
Sorry about this sounding impolite towards XiXiDu, but I’ll use this opportunity to note that it is a significant problem for SIAI, that there are people out there like XiXiDu promoting SIAI even though they don’t understand SIAI much at all.
I don’t know what’s the best attitude to try to minimize the problem this creates, that many people will first run into SIAI through hearing about it from people who don’t seem very clueful or intelligent. (That’s real bayesian evidence for SIAI being a cult or just crazy, and many people then won’t acquire sufficient additional evidence to update out of the misleading first impression—not to mention that the biased way of getting stuck in first impressions is very common also.)
Personally, I’ve adopted the habit of not even trying to talk about singularity stuff to new people who aren’t very bright. (Of course, if they become interested despite this, then they can’t just be completely ignored.)
I thought about that too. But many people outside this community suspect me, as they often state, to be intelligent and educated. And I mainly try to talk to people in the academics. You won’t believe that even I am able to make them think that I’m one of them, up to the point of correcting errors in their calculations (it happened). Many haven’t even heard about Bayesian inference by the way...
The way I introduce people to this is not by telling them about the risks of AGI but rather linking them up to specific articles on lesswrong.com or telling them about how the SIAI tries to develop ethical decision making etc.
I’ve grown up in a family of Jehovah’s Witnesses, I know how to start selling bullshit. Not that the SIAI is bullshit, but I’d never use words like ‘Singularity’ while promoting it to people I don’t know.
Many people know about the transhumanist/singularity fraction already and think it is complete nonsense, so I often can only improve their opinion.
There are people teaching on university level that told me I convinced them that he (EY) is to be taken seriously.
What you state is good evidence that you are not one of those too stupid people I was talking about (even though you have managed to not understand what SIAI is saying very well). Thanks for presenting the evidence, and correcting my suspicion that someone on your level of non-comprehension would usually end up doing more harm than good.
Although I personally don’t care much if I’m called stupid, if I think it is justified, I doubt this attitude is very appealing to most people.
Where do you draw the line between being stupid and simply uneducated or uninformed?
I’ve never read up on their program in the first place. When thinking about turning those comments the OP is based on into a top-level post I have been pondering much longer about the title than the rest of what I said until I became too lazy and simply picked the SIAI as punching bag to direct my questions at. I thought it would sufficiently work to steer some emotions. But after all that was most of what it did accomplish, rather than some answers.
What I really was on about was the attitude of many people here, especially regarding the posts related to the Roko-deletion-incident. I was struck by the apparent impact it had. It was not just considered to be worth sacrificing freedom of speech for it but people, including some working for the SIAI, actually had nightmares and suffered psychological trauma. I think I understood the posts and comments, as some told me over private message after inquiring about my knowledge, but however couldn’t believe that something that far would be considered to be reasonably evidence-based to be worried to such an extent.
But inquiring about that would have turned the attention back to the relevant content. And after all I wanted to find out if such reactions are justified before deciding to spread the content anyway.
You admit you’ve never bothered to read up on what SIAI is about in the first place. Don’t be surprised if people don’t have the best possible attitude if despite this you want them to spend a significant amount of time explaining to you personally the very same content that is already available but you just haven’t bothered to read.
Might as well link again the one page that I recommend as the starting point in getting to know what it is exactly that SIAI argues:
http://singinst.org/riskintro/index.html
I also think it’s weird that you’ve actually donated money to SIAI, despite not having really looked into what it is about and how credible the arguments are. I personally happen to think that SIAI is very much worth supporting, but there doesn’t seem to be any way how you could have known that before making your donations, and so it’s just luck that it actually wasn’t a weird cult that your way of making decisions lead you to give money to.
(And part of the reason I’m being this blunt with you is that I’ve formed the impression that you won’t take it in a very negative way, in the way that many people would. And on a personal level, I actually like you, and think we’d probably get along very well if we were to meet IRL.)
I’ve actually this little crazy conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists to make him live of their donations.
Why I donate despite that? I’ve also donated money to Peter Watts getting into the claws of the American justice. Wikipedia, TrueCrypt, the Kahn Academy and many more organisations and people. Why? They make me happy. And there’s lots of cool stuff coming from EY, whether he’s a cult leader or not.
I’d probably be more excited if it turned out to be a cult and donate even more. That be hilarious. On the other hand I suspect Scientology not be to a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live of the money of people dumb enough to actually think they are serious. If they told me this, I’d join.
SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.
I am a regular Less Wrong commenter, but I’m making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harrassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abusing the legal system, and framing their targets for crimes they did not commit. When protests are conducted against Scientology, the organizers advise all attendees to wear masks for their own safety, and I believe they are right to do so.
If you reply to this comment or discuss Scientology anywhere on the internet, please protect your anonymity by using a throwaway account. To discourage people from being reckless, I will downvote any comment which mentions Scientology and which looks like it’s tied to a real identity.
You sound more like a Discordian than a Singularitatian.
Not that there’s anything wrong with that.
I had the same idea! It’s also interesting to consider if some discriminating evidence could (realistically) exist in either sense.
I’m pretty sure there are easier ways to make a living off a charity than to invent a cause that’s nowhere near the mainstream and which is likely to be of interest to only a tiny minority.
Admittedly, doing it that way means you won’t have many competitors.....
The basic hypothesis is that AI theorising was already (one of) his main interest/s, and founding SIAI was the easiest path for him to be able to make a living doing the stuff he enjoys full-time.
Eliezer says that AI theorizing became as interesting to him as it has because it is the most effective way for him to help people. Having observed his career (mostly through the net) for ten years, I would assign a very high (.96) probability that the causality actually runs that way rather than his altruism’s being a rationalization for his interest in getting paid for AI theorizing.
Now as to the source of his altruism, I am much less confident, e.g., about which way he would choose if he found himself at a major decision point with large amounts of personal and global expected utility on the line where he had to choose between indelible widespread infamy or even total obscurity and helping people.
Not really useful as evidence against the mighty conspiracy theory, though—one would make identical statements to that effect whether he was honest, consciously deceiving, or anywhere inbetween.
Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn’t see any reason for him to do so outside of an innate preference for honesty?
How would that constitute evidence against the “mighty conspiracy theory”? Surely Eliezer could have foreseen that someone would ask this question sooner and later, and made some embarrassing / self-damaging admission just to cover himself.
Good point. I didn’t think much about the question, and it should have been obvious that the hypothesis of him simulating honesty is not strictly falsifiable by relying solely on his words.
Ok, new possibility for falsification: before SIAI was founded, a third party offered him a job in AI research that was just as interesting and brought at least as many assorted perks, but he refused because he genuinely thought FAI research was more important. Or for that matter any other scenario under which founding SIAI constituted a net sacrifice for Eliezer when not counting the benefit of potentially averting armageddon.
Quite a bit harder to produce, but that’s par for the course with Xanatos-style conspiracy theories.
Actually, I was responding to your “AI theorising was already (one of) his main interest/s”, not your larger point.
I consider the possibility that Eliezer has intentionally deceived his donors all along as so unlikely as to not be worth discussing.
ADDED. Re-reading parent for the second time, I notice your “whether he was honest, consciously deceiving, or anywhere inbetween” (emphasis mine). So, since you (I now realize) probably were entertaining the possibility that he is “unconsciously deceiving” (i.e., has conveniently fooled himself), let me extend my reply.
People can be scrupulously honest in almost all matters, NihilCredo, and still deceive themselves about their motivations for doing something, so I humbly suggest that even though Eliezer has shown himself willing to issue an image-damaging public recantation when he discovers that something he has published is wrong that is not nearly enough evidence to trust his public statements about his motivations.
What one does instead is look at his decisions. And even more you look at what he is able to stay motivated to do over a long period of time. Consider for example the two years he spent blogging about rationality. This is educational writing or communication and it is extremely good educational communication. No matter how smart the person is, he cannot communicate or teach that effectively without doing a heck of a lot of hard work. And IMO no human being can work that hard for two whole years voluntarily (i.e., without fear of losing something he needs or loves and already has) unless the person is deriving some sort of real human satisfaction from the work. (Even with a very strong “negative” motivation like fear, it is hard to work that hard for 2 years without making yourself sick, and E sure did not look or act sick when I chatted with him at a Sep 2009 meetup.) And this is where the explanation gets complicated, and I want to cut it short.
There are only so many kinds of real human motivation. Scientists of course are usually motivated by the pleasure of discovery, of extending their understanding of the world. Many, perhaps most, scientists are motivated by reputation, for the good opinion of other scientists or the public at large. I find it unlikely however that any combination of those 2 motivations would have been enough for any human being to perform the way E did during his 2 years of “educating through blogging”.
So, to summarize, I have some strong or firm reasons to believe that while he was writing those excellent blog posts, E regularly found pleasure and consequently found motivation in the idea of producing understanding in his readers, and this pleasure is an example of a “friendly impulse” or “altruistic desire” in E (part of the implementation in the human mind of the human capacity for what the evolutionary psychologists call reciprocal altruism).
And I know enough psychology to know that if E is capable of being motivated to extremely hard work by “the friendly impulse” when he started his blogging at age 27, then he was also capable of being motivated in his daydreams and in his career planning by “the friendly impulse” when he was a teenager (which is when he says he saw that AI research is the best way to help people and when he began his interest in AI theorizing). (It is rare for a person to be able to learn (even if they really want to) how to find pleasure (and consequently long-term motivation) from altruism / friendliness if they lacked the capacity in their teens like I did.)
Now I am not saying that E does not derive a lot of pleasure from scientific theorizing (most scientists of his caliber do), but I am saying that I believe his statements that the reason that most of his theorizing is about AI rather than string theory or population genetics is what he says it is.
This is all very condensed and it relies on beliefs of mine that are definitely not settled science, e.g., the belief that the only way a person every voluntarily works as hard as E must have for 2 years is if they find pleasure in the work) but it does explain just a little of the basis for the probability assignment I made in grandparent.
Definitely an interesting comment. Thanks.
I don’t think I find your psychological argument very relevant here. The conspiracy allows—indeed, it makes a cardinal assumption—that Eliezer loves doing what he does, i.e. discussing and spreading ideas about rationality and theorising about AI and futurology; the only proposed dissonance between his statements and his findings would be that he is (whether intentionally or not, see below) overblowing the danger of a near-omnipotent unfriendly AI. And of course, people can be untruthful in one field and still be highly altruist in a hundred others.
Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job. While, only because of his passion for rationality, it would still be interesting if Eliezer were suffering from such a dramatic bias (and it would be downright hilarious if he were truly pulling a fast one), the more such a bias is unconscious and hard to spot, the closer it comes to being a honest mistake, rather than negligence; but it’s not particularly interesting or amusing that someone could have made a honest mistake.
Yes, I am a little embarassed that I took the thread on such a sharp and lengthy tangent. I don’t have time to move my comment though.
Oh, I wouldn’t worry. To paraphrase what I once read being written about HP&MoR, overthinking stuff is pretty much the point of this site.
I can remember several such instances, and I haven’t been following things for as long as rhollerith. There are even a few of them in top-level posts.
Wow. That’s impressive. I think XiXiDu should get some bonus karma points for pulling that off.
Eliezer seems to have run your post through some crude heuristic and incorrectly categorized it. While you did make certain errors that many people have observed, I think you deserved a different response.
At least, Eliezer seemingly not realizing that you are a donor means that his treatment of you doesn’t represent how he treats donors.
Edit: To his credit, Eliezer apologized and admitted to his perceptual misclassification.
It has seemed to me for a while that a number of people will upvote any post that goes against the LW ‘consensus’ position on cryonics/Singularity/Friendliness, so long as it’s not laughably badly written.
I don’t think anything Eliezer can say will change that trend, for obvious reasons.
However, most of us could do better in downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won’t drop below 0 for any stated amount of time, and even then not very far. Docking someone’s karma isn’t going to kill them, folks. Do everyone a favor and use those downvotes.
My post is neither badly argued nor fatally flawed, as I’ve mainly been asking questions rather than making arguments. But if you think otherwise, why don’t you point out where it is fatally flawed?
My post has not been written to speak out against any ‘consensus’. I agree with the primary conclusions but am skeptical about the further chains of reasoning built on them, since I don’t perceive those to be based on firm ground but merely to be what follows from the previous evidence.
And yes, I’m a lazy bum. I haven’t thought about the OP for more than 10 minutes; it’s actually copy-and-paste work from previous comments. Hell, what did you expect? A dissertation? Nobody else was asking those questions; someone had to.
I find it difficult to write stuff I don’t believe.
Noted.
I upvoted the original post for:
1. Stimulating critical discussion of the Less Wrong community, specifically the beliefs almost unanimously shared and the negativity towards criticism; as someone who has found Less Wrong extremely helpful, and would hate to see it descend into groupthink and affiliation signalling. (A question to those who dismiss the OP as merely “noise”: what do you make of the nature of this post?)
2. Stimulating critical discussion of the operating premises of the SIAI; as someone who is considering donating and otherwise contributing. This additionally provides elucidation to those in a state of epistemic limbo regarding the various aspects of FAI and the Singularity.
I am reminded of this passage regarding online communities (source):
As someone who thought the OP was of poor quality, and who has had a very high opinion of SIAI and EY for a long time (and still has), I’ll say that the “Eliezer Yudkowsky facts” post was indeed a lot worse. It was the most embarrassing thing I’ve ever read on this site. Most of those jokes aren’t even good.
“Eliezer Yudkowsky facts” is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn’t. If those jokes were not posted, then something of real value would have been lost.
The situation with XiXiDu’s post is different because almost everyone seems to agree that it’s bad, and those who voted it up did so only to “stimulate discussion”. But if they hadn’t voted up XiXiDu’s post, it’s quite likely that someone would eventually have written a better post asking similar questions and generating a higher-quality discussion, so the outcome would likely have been a net improvement. Or alternatively, those who wanted to “stimulate discussion” could have just looked in the LW archives and found all the discussion they could ever hope for.
If almost everyone thought it was bad, I would expect it to have many more downvotes than upvotes, even given the few people who voted it up to “stimulate discussion”. But you probably know more about statistics than I do, so never mind.
Before or after the SIAI builds an FAI? I waited half a decade for any of those questions to be asked in the first place.
Right, I hadn’t thought about that! I’ll be right back after reading a few thousand comments to find some transparency.
This is true. You might also be able to think of jokes that aren’t worth making even though a group of people would find them genuinely funny.
I agree with Aleksei about the Facts article.
Can you please explain why you think those jokes shouldn’t have been made? I thought that making fun of authority figures is socially accepted in general, and in this case shows that we don’t take Eliezer too seriously. Do you disagree?
Hey, I said the same, why was he upvoted for it and I downvoted? Oh wait, it’s Wei_Dai, never mind.
Please downvote this comment as I’m adding noise while being hostile to someone who adds valuable insights to the discussion.
You seemed to seriously imply that Eliezer didn’t understand that the “facts” thread was a joke, while actually he was joking sarcastically by hinting at not getting the joke in the comment you replied to. I downvoted the comment to punish stupidity on LW (nothing personal, believe it or not; in other words, it’s a one-step decision based on the comment alone and not on the impression made by your other comments). Wei didn’t talk about that.
I guess after so many comments implying things I never meant to say I was a bit aggrieved. Never mind.
Making him the subject of a list like that looks plenty serious to me.
Beyond that, I don’t think there’s much that I can say. There’s a certain tone-deafness that’s rubbing me wrong in both the post and in this discussion, but exactly how that works is not something that I know how to convey with a couple of paragraphs of text.
Ok, I think I have an explanation for what’s going on here. Those of us “old hands” who went through the period where LW was OB, and Eliezer and Robin were the only main posters, saw Eliezer as initially having very high status, and considered the “facts” post as a fun way of taking him down a notch or two. Newcomers who arrived after LW became a community blog, on the other hand, don’t have the initial high status in mind, and instead see that post as itself assigning Eliezer a very high status, which they see as unjustified/weird/embarrassing. Makes sense, right?
(Voted parent up from −1, btw. That kind of report seems useful, even if the commenter couldn’t explain why he felt that way.)
I have a theory: all the jokes parse out to “Eliezer is brilliant, and we have a bunch of esoteric in-jokes to show how smart we are”. This isn’t making fun of an authority figure.
This doesn’t mean the article was a bad idea, or that I didn’t think it was funny. I also don’t think it’s strong evidence that LW and SIAI aren’t cults.
ETA: XiXiDu’s comment that this is the community making fun of itself seems correct.
Fact: Evaluating humor about Eliezer Yudkowsky always results in an interplay between levels of meta-humor such that the analysis itself is funny precisely when the original joke isn’t.
They are very good examples of the genre (Chuck Norris-style jokes). I for one could not contain my levity.
I was embarrassed by most of the facts. The one about my holding up a blank sheet of paper and saying “a blank map does not correspond to a blank territory” and thus creating the universe is one I still tell at parties.
That post was meant as playful mockery; it was a joke. It was not meant as a hostile attack. I’ve no idea how you and Aleksei can come to such conclusions about something many people thought was really funny, even outside of the community. That post actually helped to loosen the very stern sentiment of some people regarding you personally and the SIAI.
“Hey, those people are actually able to make fun of themselves, maybe they are not a cult after all...”
What, why are you talking about a hostile attack?
Of course I didn’t feel that it would be that. It’s quite the opposite, it felt to me like communicating an unhealthy air of hero worship.
Then I have been the one to completely misinterpret what you said. Apologies, I’m not good at this.
I said this before the OP but failed miserably:
I should quit now and stop participating on LW for some time. I have to continue with my studies. I was only drawn here by the deletion incident. The replies, and the fact that arguing is fun, have made me babble too much in the past few days.
Back to being a lurker. Thanks.
Wow, I thought it was one of the best. Using that post I actually got a philosopher (who teaches in Sweden), who had been skeptical about EY, to read up on the MWI sequence and afterwards agree that EY is right.
I like that post—of course, few of the jokes are funny, but you read such a thing for the few gems they do contain. I think of it as hanging a lampshade (warning, TV tropes) on one of the problems with this website.