Maybe it’s because his brain is so large that my mirror neurons have to fire three times faster to compensate, but I always get so frustrated when watching Eliezer discussing things with non-SIAI people.
Start with a bit of LW’s own “specialized cult jargon” (I kid, really!)… specifically the idea of inferential distance.
Now imagine formalizing this concept more concretely than you get with story-based hand waving, so that it becomes quantitative, with parametrized shades of grey instead of simply being “relevant” or “not relevant” to a given situation. Perhaps it could work as a quantitative comparison between two people who could potentially Aumann update with each other, so that “ID(Alice,Bob) == 0 bits” when Alice knows everything Bob knows, they already believe exactly the same thing, and they can’t improve their maps by updating about anything with each other. If it’s 1 bit then perhaps a single “yes/no Q&A” will be sufficient to bring them into alignment. Larger and larger values imply that they have more evidence (and/or more surprising evidence) to share.
(A simple real world proxy for ID(P1,P2) might be words read or heard by P1 that P2 wrote or spoke. The naive conversion from words to bits would then be to multiply words by ~10 to get bits of information while crossing your fingers and hoping that every word was a novel report of evidence rather than a re-summarization of basically the same information that might let evidential double-counting sneak in the back door. So maybe “ID(Alice,Bob) == 50 bits” means there are five perfectly chosen words that Bob could say to let Alice sync with him?)
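(To make that naive words-to-bits proxy a bit more concrete, here is a minimal Python sketch. The function names and the ~10 bits-per-word conversion factor are just the assumptions from this comment, crossed fingers included, not anything rigorous.)

```python
# Toy sketch of the naive "words heard/read" proxy for inferential distance.
# BITS_PER_WORD ~ 10 is the rough conversion assumed above; it optimistically
# treats every word as a novel, non-redundant report of evidence.

BITS_PER_WORD = 10

def naive_inferential_distance(novel_words_from_bob: int) -> int:
    """Estimate ID(Alice, Bob) in bits from the number of perfectly chosen,
    genuinely novel words Bob would have to say to sync Alice's map with his."""
    return novel_words_from_bob * BITS_PER_WORD

def words_needed_to_sync(id_bits: float) -> float:
    """Invert the proxy: ID(Alice, Bob) == 50 bits ~ five perfectly chosen words."""
    return id_bits / BITS_PER_WORD

if __name__ == "__main__":
    print(naive_inferential_distance(5))  # -> 50 (bits)
    print(words_needed_to_sync(50))       # -> 5.0 (words)
```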
Now consider naively (i.e., imagine that everyone is a baseline human operating mostly on folk wisdom) that Alice and Bob are in a debate being judged by Jim, where Jim is forced to judge in favor of one or the other debater, but not both or neither. Given this background information, H, what do you think of the specific probability estimate:
PROB ( J judges for A | H and ID(J,A) < ID(J,B) )
If this is 0.5 then the concept of inferential distance gives no special predictive power about how Jim will judge. I think this is unlikely, however, given what I suspect about the kinds of mistakes Alice and Bob will make (assuming things intelligible to themselves are intelligible to everyone) and the kinds of mistakes that Jim will make (thinking that if something isn’t transparently obvious then whatever was said is just wrong). My guess would be that Jim would judge in favor of Alice more often, simply because he already deeply understands more of what she says in the course of the debate.
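(As a quick sanity check on the intuition that this probability sits above 0.5, here is a small Monte Carlo sketch in Python. Every modelling choice in it is my own illustrative assumption rather than anything argued for above: exponential fall-off of understanding with inferential distance, uniformly distributed distances, and Jim siding with whichever debater he followed more of, with some noise thrown in.)

```python
import random

def understood_fraction(id_bits: float) -> float:
    """Assume the share of a debater's points Jim actually follows decays
    exponentially with the inferential distance between them (in bits)."""
    return 2 ** (-id_bits / 50.0)

def simulate(trials: int = 100_000, seed: int = 0) -> float:
    """Estimate PROB( J judges for A | ID(J,A) < ID(J,B) ) under the toy model."""
    rng = random.Random(seed)
    conditioned = 0
    judged_for_alice = 0
    for _ in range(trials):
        id_alice = rng.uniform(0, 200)  # bits of distance between Jim and Alice
        id_bob = rng.uniform(0, 200)    # bits of distance between Jim and Bob
        if not id_alice < id_bob:       # condition on Alice being "closer" to Jim
            continue
        conditioned += 1
        # Jim's impression of each debater: how much he followed, plus noise.
        score_alice = understood_fraction(id_alice) * rng.random()
        score_bob = understood_fraction(id_bob) * rng.random()
        if score_alice > score_bob:
            judged_for_alice += 1
    return judged_for_alice / conditioned

if __name__ == "__main__":
    print(f"P(Jim judges for Alice | Alice is closer) ~ {simulate():.2f}")
```

By construction this comes out above 0.5 (Jim follows more of whoever is closer), so the sketch doesn’t prove anything; its only point is that “inferential distance predicts the judge” is the kind of claim you could turn into a model and check against real debate outcomes.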
So… I think the critical question to ask is what evidence from the world Robert Wright might have talked about if he hadn’t been wrong-footed when he was pulled into Eliezer’s unfamiliar frameworks for describing optimization processes and for doing expectation-based argumentation (frameworks that you’re already familiar with but that Robert presumably hasn’t read up on).
In point of fact, Robert has published several books with lots of evidence, even if he isn’t good at defending himself from Eliezer’s rhetorical jujitsu. Basically none of the contents of his books came out because, although Robert offered helpfully leading questions about Eliezer’s area of specialization (which Eliezer complimented him on, perhaps mistaking basic conversational generosity for agreement and hence intelligence), Eliezer didn’t reciprocate, which meant that the video audience didn’t get to see Robert’s specialist knowledge. Here is a bit from Amazon’s quote of the Publishers Weekly review of Robert’s book “Nonzero”, describing the kinds of things Robert could have been talking about if Eliezer had “played along for the sake of argument” before going into attack mode:
The non-zero-sum dynamic, Wright says, is the driving force that has shaped history from the very beginnings of life, giving rise to increasing social complexity, technological innovation and, eventually, the Internet. From Polynesian chiefdoms and North America’s Shoshone culture to the depths of the Mongol Empire, Wright plunders world history for evidence to show that the so-called Information Age is simply part of a long-term trend. Globalization, he points out, has been around since Assyrian traders opened for business in the second millennium B.C. Even the newfangled phenomenon of “narrowcasting” was anticipated, he claims, when the costs of print publishing dropped in the 15th century and spawned a flurry of niche-oriented publications. Occasionally, Wright’s use of modish terminology can seem glib: feudal societies benefited from a “fractal” structure of nested polities, world culture has always been “fault-tolerant” and today’s societies are like a “giant multicultural brain.” Despite the game-theory jargon, however, this book sends an important message that, as human beings make moral progress, history, in its broadest outlines, is getting better all the time.
This sounds to me like a lot of non-fictional evidence. My guess is that Wright is ultimately just more interested in the Invisible Hand than in Azathoth and sees the one “deity” as being more benevolent than the other. If I generously misinterpret him as claiming this, I notice that I’m already willing to believe this because Azathoth seems kind of scary and horrifying to me. If I imagine more evidence this way I’m more inclined to believe it…
So I expect that if the conversation in the video had been more about “cooperative truth seeking” than about “debate winning”, then Robert would have said something and justified it in a way that improved my thinking.
I think a lot of what’s scary about many real-world epistemic failure modes is not that they are full of gross logical fallacies, or involve wearing silly clothes, or get you to work on truly positive “public goods”, but that they deflect you from acquiring certain kinds of evidence without your even noticing it.
Why must you ruin my self-conscious countersignalling with good epistemology?!
But seriously… Ack! Jennifer, you’re brilliant. I dunno what they put in the water at that CCS place. Would you accept me as your apprentice? I hear tell you have a startup idea. I can’t code, but I live very cheaply and can cheerfully do lots of menial tasks and errands of all kinds, from in-the-field market research to buying donuts to washing dishes to answering customer questions and everything else. I’m versatile, energetic, and a wicked good rationalist. And I feel that working for you even for a short while would significantly build my understanding of social epistemology, and epistemology generally, helping me in my quest to Save the World. Doesn’t that sound like a totally awesome idea? :D
Your compliments are appreciated but, I suspect, unwarranted :-P
I’m not saying “definitely no” and I think it would be cool to work with you. But also you should probably reconsider the offer because I think the right question (tragically?) is not so much “Can I work with you to somehow learn your wisdom by osmosis?” but “Where are the practice grounds for the insight just displayed?” My working theory of “intellectual efficacy” is that it mostly comes from practice.
Following this theory, if you’re simply aiming for educational efficiency of the sort applied here, you could do much worse than getting some practice at competitive intercollegiate policy debate (sometimes called CEDA or NDT depending on the region of the US).
I would attribute my insight here not to “something in the water” at CCS (the College of Creative Studies at UCSB, which, for the record, I just hung out at because that’s where my friends were), but to earlier experience on a college debate team: a two-year program that included a debate tournament approximately every third weekend and about 10 hours per week in a college library doing research in preparation for those tournaments.
Here is a partial list of four year colleges that have policy debate teams.
If you were going to go for the best possible debate experience in the U.S., I’d estimate the best thing to do would be to find a school that is valuable for other reasons and where (1) the head coach’s favorite event is CEDA/NDT and (2) the (debate program budget)/debater ratio is high. The funding is important because practical things like a dedicated room for the debate team and travel/food/hotel subsidies matter for filling out a debate team and giving it a sense of community, and the size and quality of the team will be a large source of the value of the experience. You might also try to maximize “tournaments per team member per year”, which might vary from school to school based on the costs of travel given the school’s location.
The only major warning with this suggestion is that a lot of the value of learning to debate rigorously is just that you’ll pick up library skills, policy debate theory, the ability to notice (and produce) debating tricks on the fly, and confidence speaking in front of an audience. Learning debate to practice rationality is kind of like learning to knife fight in order to practice saving people: the skill might have uses in the target domain, but they are definitely not the same thing.
(Though now that I spell out the warning, it might work as a vote for being paid to work in a startup where calculating semi-autonomy is encouraged rather than paying for school in pursuit of theoretically useful ideas? Hmmm...)
It’s less the insight just displayed and more a general tendency to see Pareto improvements in group rationality. But debate’s an interesting idea.