Maybe it’s because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people. It’s almost kinda painful to watch, because even though I wish someone would come along and pwn Eliezer in an argument, it never ever happens because everyone is more wrong than him, and I have to sit there and listen to them fail in such predictably irrational ways. Seriously, Eliezer is smart, but there have to be some academics out there that can point to at least one piece of Eliezer’s fortress of beliefs and find a potentially weak spot. Right? Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right? That’s not supposed to happen! Grarghhhhhh. (Runs off to read the Two Cult Koans.) (Remembers Eliezer wrote those, too.) (God dammit.)
(And as long as I’m being cultish, HOW DARE PEOPLE CALL OUR FEARLESS LEADER ‘YUDKOWSKI’?!?!??!? IT COMPLETELY RUINS THE SYMMETRY OF THE ETERNAL DOUBLE ’Y’S! AHHH! But seriously, it kinda annoys me in a way that most trolling doesn’t.)
It reminds me of when Richard Dawkins was doing a bunch of interviews and discussions to promote his then-latest book The God Delusion. It was kind of irritating to hear the people he was talking with failing again and again in the same predictable ways, raising the same dumb points every time. And you could tell that Dawkins was sick of it, too. The few times when someone said something surprising, something that might force him to change his mind about something (even a minor point), his face lit up and his voice took on an excited tone. And when he was particularly uncertain about something, he said so.
People accused him of being arrogant and unwilling to change his mind; the problem is that the people he was arguing with were just so piteously wrong that of course he’s not going to change his mind from talking with them. It’s funny, because one of the things I really like about Dawkins is that he’s genuinely respectful in discussions with other people. Sometimes barbed, but always fundamentally respectful. When the other person says something, he won’t ignore it or talk past them, and he assumes (often wrongly) that whoever he’s speaking with is intelligent enough and sane enough to handle a lack of sugarcoating.
And of course, all this led to accusations of cultishness, for exactly the same reasons that are making you uncomfortable.
Start with a bit of LW’s own “specialized cult jargon” (I kid, really!)… specifically the idea of inferential distance.
Now imagine formalizing this concept more concretely than you get with story-based hand-waving, so that it was more quantitative—with parametrized shades of grey instead of simply being “relevant” or “not relevant” to a given situation. Perhaps it could work as a quantitative comparison between two people who could potentially Aumann update with each other, so that “ID(Alice,Bob) == 0 bits” when Alice knows everything Bob knows, they already believe exactly the same thing, and they can’t improve their maps by updating about anything with each other. If it’s 1 bit then perhaps a single “yes/no Q&A” will be sufficient to bring them into alignment. Larger and larger values imply that they have more evidence (and/or more surprising evidence) to share.
(A simple real world proxy for ID(P1,P2) might be words read or heard by P1 that P2 wrote or spoke. The naive conversion from words to bits would then be to multiply words by ~10 to get bits of information while crossing your fingers and hoping that every word was a novel report of evidence rather than a re-summarization of basically the same information that might let evidential double-counting sneak in the back door. So maybe “ID(Alice,Bob) == 50 bits” means there are five perfectly chosen words that Bob could say to let Alice sync with him?)
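Purely to pin down the arithmetic in that parenthetical, here is a minimal Python sketch of the naive proxy. The flat 10-bits-per-word figure and the function names are just the assumptions stated above made executable; this is a toy illustration, not a serious measure of inferential distance.

```python
BITS_PER_WORD = 10.0  # the "~10 bits per word" figure assumed above

def words_to_bits(words: int, bits_per_word: float = BITS_PER_WORD) -> float:
    """Naive conversion: treat every word as a novel, non-redundant
    report of evidence (the crossed-fingers assumption above)."""
    return words * bits_per_word

def words_needed(id_bits: float, bits_per_word: float = BITS_PER_WORD) -> float:
    """How many perfectly chosen words would close an inferential
    distance of `id_bits` under the same naive assumption."""
    return id_bits / bits_per_word

# The worked example from the comment: ID(Alice, Bob) == 50 bits
# corresponds to about five perfectly chosen words from Bob.
assert words_needed(50) == 5.0
```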
Now consider naively (i.e., imagine that everyone is a baseline human operating mostly on folk wisdom) that Alice and Bob are in a debate being judged by Jim, where Jim is forced to judge in favor of one or the other debater, but not both or neither. Given this background information, H, what do you think of the specific probability estimate:
PROB ( J judges for A | H and ID(J,A) < ID(J,B) )
If this is 0.5 then the concept of inferential distance gives no special predictive power about how Jim will judge. I think this is unlikely, however, given what I suspect about the kinds of mistakes Alice and Bob will make (assuming things intelligible to themselves are intelligible to everyone) and the kinds of mistakes that Jim will make (thinking that if something isn’t transparently obvious then whatever was said is just wrong). My guess would be that Jim would judge in favor of Alice more often, simply because he already deeply understands more of what she says in the course of the debate.
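If it helps to see what “no special predictive power” cashes out as, here is a throwaway Monte Carlo sketch. Everything inside it (uniform toy distances, a sigmoid judge who leans toward whichever debater he is inferentially closer to, the 200-bit scale) is a made-up model of mine, purely to show how the verbal claim could be turned into a number you compare against 0.5.

```python
import math
import random

def estimate_prob(scale_bits: float = 200.0, trials: int = 200_000,
                  seed: int = 0) -> float:
    """Estimate PROB(J judges for A | H and ID(J,A) < ID(J,B)) under a toy model.

    Made-up assumptions, purely for illustration:
      * ID(Jim, X) is drawn uniformly from [0, 1000] "bits" for each debater.
      * Jim sides with Alice with probability
        sigmoid((ID(J,B) - ID(J,A)) / scale_bits), i.e. he leans toward
        whoever he is inferentially closer to, more strongly for bigger gaps.
    A result of exactly 0.5 would mean inferential distance predicts nothing.
    """
    rng = random.Random(seed)
    conditioned = 0   # trials where ID(J,A) < ID(J,B) actually holds
    judged_for_a = 0  # ...and Jim judged for Alice
    for _ in range(trials):
        id_ja = rng.uniform(0, 1000)
        id_jb = rng.uniform(0, 1000)
        if id_ja >= id_jb:
            continue  # only count trials satisfying the condition
        conditioned += 1
        p_sides_with_a = 1.0 / (1.0 + math.exp(-(id_jb - id_ja) / scale_bits))
        if rng.random() < p_sides_with_a:
            judged_for_a += 1
    return judged_for_a / conditioned

print(estimate_prob())  # comes out comfortably above 0.5 under these assumptions
```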
So… I think the critical question to ask is what evidence from the world Robert Wright might have talked about if he hadn’t been wrong-footed when he was pulled into Eliezer’s unfamiliar frameworks for describing optimization processes and for doing expectation-based argumentation (frameworks that you’re already familiar with but that Robert presumably hasn’t read up on).
In point of fact, Robert has published several books with lots of evidence, even if he isn’t good at defending himself from Eliezer’s rhetorical jujitsu. Basically none of the contents of his books came out because, although Robert offered helpfully leading questions about Eliezer’s area of specialization (which Eliezer complimented him on—I think maybe mistaking his basic conversational generosity for agreement-and-hence-intelligence), Eliezer didn’t reciprocate, which meant that the video audience didn’t get to see Robert’s specialist knowledge.
Here is a bit from Amazon’s quote of the Publishers Weekly review of Robert’s book “Nonzero”, describing the kinds of things Robert could have been talking about if Eliezer had “played along for the sake of argument” before going into attack mode:

The non-zero-sum dynamic, Wright says, is the driving force that has shaped history from the very beginnings of life, giving rise to increasing social complexity, technological innovation and, eventually, the Internet. From Polynesian chiefdoms and North America’s Shoshone culture to the depths of the Mongol Empire, Wright plunders world history for evidence to show that the so-called Information Age is simply part of a long-term trend. Globalization, he points out, has been around since Assyrian traders opened for business in the second millennium B.C. Even the newfangled phenomenon of “narrowcasting” was anticipated, he claims, when the costs of print publishing dropped in the 15th century and spawned a flurry of niche-oriented publications. Occasionally, Wright’s use of modish terminology can seem glib: feudal societies benefited from a “fractal” structure of nested polities, world culture has always been “fault-tolerant” and today’s societies are like a “giant multicultural brain.” Despite the game-theory jargon, however, this book sends an important message that, as human beings make moral progress, history, in its broadest outlines, is getting better all the time.
This sounds to me like a lot of non-fictional evidence. My guess is that Wright is ultimately just more interested in the Invisible Hand than in Azathoth and sees the one “deity” as being more benevolent than the other. If I generously misinterpret him as claiming this, I notice that I’m already willing to believe this because Azathoth seems kind of scary and horrifying to me. If I imagine more evidence this way I’m more inclined to believe it…
So I expect that if the conversation in the video had been more about “cooperative truth seeking” than about “debate winning”, then Robert would have said something and justified it in a way that improved my thinking.
I think a lot of what’s scary about many real-world epistemic failure modes is not that they are full of gross logical fallacies, or involve wearing silly clothes, or get you to work on truly positive “public goods”, but that they deflect you from acquiring certain kinds of evidence without your even noticing it.
Why must you ruin my self-conscious countersignalling with good epistemology?!
But seriously… Ack! Jennifer, you’re brilliant. I dunno what they put in the water at that CCS place. Would you accept me as your apprentice? I hear tell you have a startup idea. I can’t code, but I live very cheaply and can cheerfully do lots of menial tasks and errands of all kinds, from in-the-field market research to buying donuts to washing dishes to answering customer questions and everything else. I’m versatile, energetic, and a wicked good rationalist. And I feel that working for you even for a short while would significantly build my understanding of social epistemology and epistemology generally, helping me in my quest to Save the World. Doesn’t that sound like a totally awesome idea? :D
Your compliments are appreciated but, I suspect, unwarranted :-P
I’m not saying “definitely no” and I think it would be cool to work with you. But also you should probably reconsider the offer because I think the right question (tragically?) is not so much “Can I work with you to somehow learn your wisdom by osmosis?” but “Where are the practice grounds for the insight just displayed?” My working theory of “intellectual efficacy” is that it mostly comes from practice.
Following this theory, if you’re simply aiming for educational efficiency of the sort that was applied here, you could do much worse than getting some practice at competitive inter-collegiate policy debate (sometimes called CEDA or NDT depending on the region of the US).
I would attribute my insight here not to “something in the water” at the CCS (the College of Creative Studies at UCSB, which, for the record, I just hung out at because that’s where my friends were), but to experiences before that on a college debate team, in a two-year program that included a debate tournament approximately every third weekend and about 10 hours per week in a college library doing research in preparation for said tournaments.
Here is a partial list of four-year colleges that have policy debate teams.
If you were going to go for the best possible debate experience in the U.S., I’d estimate that the best thing to do would be to find a school that was valuable for other reasons and where (1) the head coach’s favorite event is CEDA/NDT and (2) the ((debate program budget)/debater) value is high. The funding is important because practical things like a dedicated room for the debate team and travel/food/hotel subsidies matter for filling out a debate team and giving it a sense of community, and the size and quality of the team will be a large source of the value of the experience. You might also try to maximize the “tournaments per team member per year”, which might vary from school to school based on the costs of travel given the school’s location.
The only major warning with this suggestion is that a lot of the value of learning to debate rigorously is just that you’ll pick up library skills, policy debate theory, the ability to notice (and produce) debating tricks on the fly, and confidence speaking in front of an audience. Learning debate to practice rationality is kind of like learning to knife fight in order to practice saving people. The skill might have uses in the target domain, but they are definitely not the same thing.
(Though now that I spell out the warning, it might work as a vote for being paid to work in a startup where calculating semi-autonomy is encouraged rather than paying for school in pursuit of theoretically useful ideas? Hmmm...)
It’s less the insight just displayed and more a general tendency to see Pareto improvements in group rationality. But debate’s an interesting idea.
Bear in mind that, like many good works of pop science, the vast majority of what the Sequences present is other people’s ideas; I’m much more confident of the value of those ideas than of the parts that are original to Eliezer.
And who filtered that particular and exceptionally coherent set of “other people’s ideas” out of a vastly larger total set of ideas? Who stated them in (for the most part) clear anti-jargon? I would not even go into the neighborhood of being dismissive of such a feat.
Originality is the ultimate strawman.
I don’t mean to be dismissive at all—leaving aside original content like the FAI problem, the synthesis that the Sequences represent is a major achievement, and one that contributes to making the clarity of writing possible.
There’s not much he could be proven wrong about. What EY mainly accomplished was to take the right pieces, which were already out there before him, put them together, and create a coherent framework.
But since I’ve only read maybe 5% of LW I might be wrong. Is there something unique that stems from EY?
Another problem is that what EY is saying is sufficiently vague that you cannot argue with it unless you doubt some fundamental attributes of reality.
I’m not trying to discredit EY. I actually don’t know of any other person that comes even close to his mesh of beliefs. So much so that I’ve been much more relaxed since I learned about him, because if I were going to die, everything I ever came up with, and much more, is already contained inside EY’s mind :-)
Anyway, I can’t help but often muse about the possibility that EY is so much smarter that he actually created the biggest scam ever around the likelihood of uFAI, to live off donations from a bunch of nonconformists. - “Let’s do what the Raelians do! Let’s add some nonsense to this meme!”
Of course I’m joking, hail to the king! :-)
Do you know how epistemically distressing it is to have learned half the things you know from one person who keeps on getting proven right?

Yeah, huge red flag. I’ll also note that reading Eliezer’s stuff made me feel like I got to extend my beliefs in the same direction away from the mainstream that they were already skewed in, which is probably why I was extremely receptive to it.
Even though I’ve learned a lot, I don’t get to congratulate myself for a real Mind Change.
Or this.
Thanks! I guess there’s a good reason not to have a ‘cultishness’ tag, but still, it’d be kinda cool...
There are not very many seriously written-up position statements from Eliezer, so his work probably doesn’t represent a very attractive target for “academics” to attack.
There are a couple of papers about the possibility of THE END OF THE WORLD. That is an unconventional academic subject—partly because no instances of this have ever been observed.