All right, I’ll note that my perceptual system misclassified you completely and consider that a concrete reason to doubt it from now on.
Sorry.
If you are writing a post like that one, it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I’m dealing with “the sort of thing said by someone who actually helps” and not “the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them”, which is how my perceptual system classified that post.
The Summit is coming up and I’ve got lots of stuff to do right at this minute, but I’ll top-comment my very quick attempt at pointing to information sources for replies.
It was actually in the post.

What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making SIAI a prime priority. That is, I’m donating to SIAI, but I also spend considerable resources on maximizing utility in the present.
So you might suggest to your perceptual system that it read the post first (at least before issuing a strong reply).
I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.)
Also, I don’t like it when this internet website is overloaded with noise posts that don’t accomplish anything.
Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you’ve managed to accrue 890 karma. I think you’ve gotten a remarkably good reception so far.
I think we have different ideas of noise.
Though I would miss you as the LW mascot if you stopped adding this noise.
Depending on your expertise and assets, this site might provide some ways.
I’m pretty sure Clippy meant “make” in a very literal sense.
Yeah, I want to know how to either produce the notes that will be recognized as USD, or access the financial system in a way that I can believably tell it that I own a certain amount of USD. The latter method could involve root access to financial institutions.
All the other methods of getting USD are disproportionately hard.
I’ll donate again in the next few days and tell you the name and the amount. I don’t have much, but it will show you that I’m not just making this up. Maybe you can also check the previous donation then.
As for the promoting, anyone can Google it. I link people to your stuff almost every day. And there are people here who added me on Facebook; if you check my info you’ll see that some of my favorite quotations are actually yours.
And why is it that on my homepage, if you check the sidebar, your homepage and SIAI have been listed under favorite sites for many years now?
I’m the kind of person who has to be skeptical about everything, and if I’m bothered too much by questions I cannot resolve in time, I do stupid things. Maybe this post was stupid, I don’t know.
Sorry about this sounding impolite towards XiXiDu, but I’ll use this opportunity to note that it is a significant problem for SIAI that there are people out there like XiXiDu promoting SIAI even though they don’t understand it much at all.
I don’t know what the best attitude is for minimizing the problem this creates: many people will first run into SIAI by hearing about it from people who don’t seem very clueful or intelligent. (That’s real Bayesian evidence for SIAI being a cult or just crazy, and many people then won’t acquire sufficient additional evidence to update out of the misleading first impression, not to mention that the bias of getting stuck on first impressions is very common anyway.)
Personally, I’ve adopted the habit of not even trying to talk about singularity stuff to new people who aren’t very bright. (Of course, if they become interested despite this, then they can’t just be completely ignored.)
I thought about that too. But many people outside this community take me, as they often state, to be intelligent and educated. And I mainly try to talk to people in academia. You might not believe it, but even I am able to make them think that I’m one of them, to the point of correcting errors in their calculations (it has happened). Many haven’t even heard of Bayesian inference, by the way...
The way I introduce people to this is not by telling them about the risks of AGI but rather by linking them to specific articles on lesswrong.com or telling them about how SIAI tries to develop ethical decision making, etc.
I grew up in a family of Jehovah’s Witnesses; I know how to start selling bullshit. Not that SIAI is bullshit, but I’d never use words like ‘Singularity’ while promoting it to people I don’t know.
Many people already know about the transhumanist/singularity faction and think it is complete nonsense, so I often can only improve their opinion.
There are people teaching at the university level who have told me I convinced them that he (EY) is to be taken seriously.
What you state is good evidence that you are not one of the too-stupid people I was talking about (even though you have managed to not understand what SIAI is saying very well). Thanks for presenting the evidence, and for correcting my suspicion that someone at your level of non-comprehension would usually end up doing more harm than good.
Although I personally don’t care much if I’m called stupid when I think it is justified, I doubt this attitude is very appealing to most people.
Where do you draw the line between being stupid and simply uneducated or uninformed?
I’ve never read up on their program in the first place. When thinking about turning the comments the OP is based on into a top-level post, I pondered the title for much longer than the rest of what I said, until I became too lazy and simply picked SIAI as the punching bag to direct my questions at. I thought that would be enough to stir some emotions. And in the end that was most of what it accomplished, rather than getting me any answers.
What I was really on about was the attitude of many people here, especially regarding the posts related to the Roko deletion incident. I was struck by the apparent impact it had. Not only was it considered worth sacrificing freedom of speech over, but people, including some working for SIAI, actually had nightmares and suffered psychological trauma. I think I understood the posts and comments, as some people confirmed by private message after inquiring about my knowledge, but I couldn’t believe that something that far out would be considered well-enough evidenced to justify worrying to such an extent.
But inquiring about that would have turned attention back to the content in question. And after all, I wanted to find out whether such reactions were justified before deciding whether to spread the content anyway.
You admit you’ve never bothered to read up on what SIAI is about in the first place. Don’t be surprised if people don’t have the best possible attitude when, despite this, you want them to spend a significant amount of time personally explaining to you the very same content that is already available but that you just haven’t bothered to read.
Might as well link again to the one page that I recommend as the starting point for getting to know what exactly it is that SIAI argues:
http://singinst.org/riskintro/index.html
I also think it’s weird that you’ve actually donated money to SIAI despite not having really looked into what it is about and how credible the arguments are. I personally happen to think that SIAI is very much worth supporting, but there doesn’t seem to be any way you could have known that before making your donations, so it’s just luck that the organization your way of making decisions led you to give money to wasn’t actually a weird cult.
(And part of the reason I’m being this blunt with you is that I’ve formed the impression that you won’t take it in a very negative way, in the way that many people would. And on a personal level, I actually like you, and think we’d probably get along very well if we were to meet IRL.)
I’ve actually got this crazy little conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists into letting him live off their donations.
Why do I donate despite that? I’ve also donated money to Peter Watts when he got caught in the claws of the American justice system, and to Wikipedia, TrueCrypt, the Khan Academy, and many more organisations and people. Why? They make me happy. And there’s lots of cool stuff coming from EY, whether he’s a cult leader or not.
I’d probably be more excited if it turned out to be a cult, and donate even more. That would be hilarious. On the other hand, I suspect Scientology is not really a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I’d join.
SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.
I am a regular Less Wrong commenter, but I’m making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abuse of the legal system, and framing their targets for crimes they did not commit. When protests are conducted against Scientology, the organizers advise all attendees to wear masks for their own safety, and I believe they are right to do so.
If you reply to this comment or discuss Scientology anywhere on the internet, please protect your anonymity by using a throwaway account. To discourage people from being reckless, I will downvote any comment which mentions Scientology and which looks like it’s tied to a real identity.
You sound more like a Discordian than a Singularitarian.
Not that there’s anything wrong with that.
I had the same idea! It’s also interesting to consider whether some discriminating evidence could (realistically) exist in either direction.
I’m pretty sure there are easier ways to make a living off a charity than to invent a cause that’s nowhere near the mainstream and which is likely to be of interest to only a tiny minority.
Admittedly, doing it that way means you won’t have many competitors...
The basic hypothesis is that AI theorising was already (one of) his main interests, and founding SIAI was the easiest path for him to make a living doing the stuff he enjoys full-time.
Eliezer says that AI theorizing became as interesting to him as it has because it is the most effective way for him to help people. Having observed his career (mostly through the net) for ten years, I would assign a very high (.96) probability that the causality actually runs that way rather than his altruism’s being a rationalization for his interest in getting paid for AI theorizing.
Now as to the source of his altruism, I am much less confident, e.g., about which way he would choose if he found himself at a major decision point, with large amounts of personal and global expected utility on the line, where helping people would mean accepting indelible widespread infamy or even total obscurity.
Not really useful as evidence against the mighty conspiracy theory, though: one would make identical statements to that effect whether he was honest, consciously deceiving, or anywhere in between.
Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn’t see any reason for him to do so outside of an innate preference for honesty?
How would that constitute evidence against the “mighty conspiracy theory”? Surely Eliezer could have foreseen that someone would ask this question sooner or later, and made some embarrassing / self-damaging admission just to cover himself.
Good point. I didn’t think much about the question, and it should have been obvious that the hypothesis of him simulating honesty is not strictly falsifiable by relying solely on his words.
OK, new possibility for falsification: before SIAI was founded, a third party offered him a job in AI research that was just as interesting and brought at least as many assorted perks, but he refused because he genuinely thought FAI research was more important. Or, for that matter, any other scenario under which founding SIAI constituted a net sacrifice for Eliezer when not counting the benefit of potentially averting Armageddon.
Quite a bit harder to produce, but that’s par for the course with Xanatos-style conspiracy theories.
Actually, I was responding to your “AI theorising was already (one of) his main interest/s”, not your larger point.
I consider the possibility that Eliezer has intentionally deceived his donors all along as so unlikely as to not be worth discussing.
ADDED. Re-reading the parent for the second time, I notice your “whether he was honest, consciously deceiving, or anywhere in between” (emphasis mine). So, since you (I now realize) probably were entertaining the possibility that he is “unconsciously deceiving” (i.e., has conveniently fooled himself), let me extend my reply.
People can be scrupulously honest in almost all matters, NihilCredo, and still deceive themselves about their motivations for doing something. So I humbly suggest that even though Eliezer has shown himself willing to issue an image-damaging public recantation when he discovers that something he has published is wrong, that is not nearly enough evidence to trust his public statements about his motivations.
What one does instead is look at his decisions. And even more, one looks at what he is able to stay motivated to do over a long period of time. Consider, for example, the two years he spent blogging about rationality. This is educational writing, and it is extremely good educational writing. No matter how smart the person is, he cannot communicate or teach that effectively without doing a heck of a lot of hard work. And IMO no human being can work that hard for two whole years voluntarily (i.e., without fear of losing something he needs or loves and already has) unless he is deriving some sort of real human satisfaction from the work. (Even with a very strong “negative” motivation like fear, it is hard to work that hard for two years without making yourself sick, and E sure did not look or act sick when I chatted with him at a Sep 2009 meetup.) And this is where the explanation gets complicated, and I want to cut it short.
There are only so many kinds of real human motivation. Scientists, of course, are usually motivated by the pleasure of discovery, of extending their understanding of the world. Many, perhaps most, scientists are also motivated by reputation, by the good opinion of other scientists or the public at large. I find it unlikely, however, that any combination of those two motivations would have been enough for any human being to perform the way E did during his two years of “educating through blogging”.
So, to summarize, I have some strong or firm reasons to believe that while he was writing those excellent blog posts, E regularly found pleasure and consequently found motivation in the idea of producing understanding in his readers, and this pleasure is an example of a “friendly impulse” or “altruistic desire” in E (part of the implementation in the human mind of the human capacity for what the evolutionary psychologists call reciprocal altruism).
And I know enough psychology to know that if E is capable of being motivated to extremely hard work by “the friendly impulse” when he started his blogging at age 27, then he was also capable of being motivated in his daydreams and in his career planning by “the friendly impulse” when he was a teenager (which is when he says he saw that AI research is the best way to help people and when he began his interest in AI theorizing). (It is rare for a person to be able to learn (even if they really want to) how to find pleasure (and consequently long-term motivation) from altruism / friendliness if they lacked the capacity in their teens like I did.)
Now I am not saying that E does not derive a lot of pleasure from scientific theorizing (most scientists of his caliber do), but I am saying that I believe his statement that the reason most of his theorizing is about AI rather than string theory or population genetics is the one he gives.
This is all very condensed, and it relies on beliefs of mine that are definitely not settled science (e.g., the belief that the only way a person ever voluntarily works as hard as E must have for two years is if he finds pleasure in the work), but it does explain just a little of the basis for the probability assignment I made in the grandparent.
Definitely an interesting comment. Thanks.
I don’t think I find your psychological argument very relevant here. The conspiracy theory allows (indeed, it assumes as a cardinal point) that Eliezer loves doing what he does, i.e. discussing and spreading ideas about rationality and theorising about AI and futurology; the only proposed dissonance between his statements and his findings would be that he is (whether intentionally or not, see below) overblowing the danger of a near-omnipotent unfriendly AI. And of course, people can be untruthful in one field and still be highly altruistic in a hundred others.
Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job. While it would still be interesting if Eliezer were suffering from such a dramatic bias, if only because of his passion for rationality (and it would be downright hilarious if he were truly pulling a fast one), the more such a bias is unconscious and hard to spot, the closer it comes to being an honest mistake rather than negligence; and it’s not particularly interesting or amusing that someone could have made an honest mistake.
Yes, I am a little embarrassed that I took the thread on such a sharp and lengthy tangent. I don’t have time to move my comment, though.
Oh, I wouldn’t worry. To paraphrase something I once read about HP&MoR, overthinking stuff is pretty much the point of this site.
I can remember several such instances, and I haven’t been following things for as long as rhollerith. There are even a few of them in top-level posts.
Wow. That’s impressive. I think XiXiDu should get some bonus karma points for pulling that off.