Mandatory Secret Identities
Previously in series: Whining-Based Communities
“But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy. I expected much of them, and they came to expect much of themselves.” —Jeffreyssai
Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...
...becoming a teacher and having their own martial arts dojo someday.
To see what’s wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything. Writers tend to look down on literary critics’ understanding of the art form itself, for just this reason. (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying “This wine has a great bouquet”, and goes off to tell their students “You’ve got to make sure your wine has a great bouquet”. When the student asks, “How? Does it have anything to do with grapes?” the critic replies disdainfully, “That’s for grape-growers! I teach wine.”)
Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn. You do that on Sundays, or full-time after you retire.
And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities. They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors. And to enforce this, I suggest the rule:
Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))
That is, you can’t respect someone as a rationality instructor more than you would respect them if they were not a rationality instructor.
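To make the rule concrete, here is a minimal sketch in Python (the instructors and all numbers are invented for illustration, not part of the proposal):

    def rationality_respect_1(rationality_respect_0, non_rationality_respect_0):
        # Respect for someone as a rationality instructor is capped by the
        # respect they would command if they were not one.
        return min(rationality_respect_0, non_rationality_respect_0)

    # Hypothetical instructors: (respect as rationality teacher, respect for secret identity)
    instructors = {"A": (9.0, 2.0), "B": (6.0, 8.0)}
    for name, (rat, non_rat) in instructors.items():
        print(name, rationality_respect_1(rat, non_rat))
    # A is capped at 2.0 despite blackboard brilliance; B keeps the full 6.0.

Note that the bound only ever lowers respect; it never raises it.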
Some notes:
• This doesn’t set Rationality_Respect1 equal to Non_Rationality_Respect0. It establishes an upper bound. This doesn’t mean you can find random awesome people and expect them to be able to teach you. Explicit, abstract, cross-domain understanding of rationality and the ability to teach it to others is, unfortunately, an additional discipline on top of domain-specific life success. Newton was a Christian etcetera. I’d rather hear what Laplace had to say about rationality—Laplace wasn’t as famous as Newton, but Laplace was a great mathematician, physicist, and astronomer in his own right, and he was the one who said “I have no need of that hypothesis” (when Napoleon asked why Laplace’s works on celestial mechanics did not mention God). So I would respect Laplace as a rationality instructor well above Newton, by the min() function given above.
• We should be generous about what counts as a secret identity outside the Bayesian Conspiracy. If it’s something that outsiders do in fact see as impressive, then it’s “outside” regardless of how much Bayesian content is in the job. An experimental psychologist who writes good papers on heuristics and biases, a successful trader who uses Bayesian algorithms, a well-selling author of a general-audiences popular book on atheism—all of these have worthy secret identities. None of this contradicts the spirit of being good at something besides rationality—no, not even the last, because writing books that sell is a further difficult skill! At the same time, you don’t want to be too lax and start respecting the instructor’s ability to put up probability-theory equations on the blackboard—it has to be visibly outside the walls of the dojo and nothing that could be systematized within the Conspiracy as a token requirement.
• Apart from this, I shall not try to specify what exactly is worthy of respect. A creative mind may have good reason to depart from any criterion I care to describe. I’ll just stick with the idea that “Nice rationality instructor” should be bounded above by “Nice secret identity”.
• But if the Bayesian Conspiracy is ever to populate itself with instructors, this criterion should not be too strict. A simple test to see whether you live inside an elite bubble is to ask yourself whether the percentage of PhD-bearers in your apparent world exceeds the 0.25% rate at which they are found in the general population. Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer—that’s nothing to sneeze at. The vast majority of people go through their whole lives without being that interesting. Any of these three would have some tales to tell of real-world use, on Sundays at the small rationality dojo where they were instructors. What I’m trying to say here is: don’t demand that everyone be Robin Hanson in their secret identity; that is setting the bar too high. Selective reporting makes it seem that fantastically high-achieving people occur far more frequently than they really do. So if you ask for your rationality instructor to be as interesting as the sort of people you read about in the newspapers—and a master rationalist on top of that—and a good teacher on top of that—then you’re going to have to join one of three famous dojos in New York, or something. But you don’t want to be too lax and start respecting things that others wouldn’t respect if they weren’t specially looking for reasons to praise the instructor. “Having a good secret identity” should require way more effort than anything that could become a token requirement.
Now I put to you: If the instructors all have real-world anecdotes to tell of using their knowledge, and all of the students know that the desirable career path can’t just be to become a rationality instructor, doesn’t that sound healthier?
Part of the sequence The Craft and the Community
Next post: “Beware of Other-Optimizing”
Previous post: “Whining-Based Communities”
What does this post even mean? I don’t have access to my own respect function, and I don’t know if I’d mess with it this way even if I did.
If you were to say tomorrow “I’ve been lying about the whole AI programmer thing; I actually live in my parents’ basement and have never done anything worthwhile in any non-rationality field in my entire life,” then would I have to revise my opinion that you’re a very good rationality teacher? Would I have to deny having learned really valuable things from you?
Or would I have to say, “Well, this guy named Eliezer taught me everything I know, he’s completely opened my mind to new domains of knowledge, and you should totally read everything he’s written—but he’s not all that great and I don’t have any respect for him and you shouldn’t either” when referring people to your writing?
Or to put it another way...let’s say there are two rationality instructors in my city. One, John, is a world famous physicist, businessman, and writer. The other, Mary, has no particular accomplishments outside her rationality instruction work. However, Mary’s students have been observed to do much better at their careers than John’s, and every time the two dojos go up against each other in Rationalist Debating or calibration tests or any other kind of measurement, Mary’s students do better. Wouldn’t it be, well, irrational for me to go to John’s dojo instead of Mary’s? Would the Bayesian Police have to surround Mary’s dojo and make sure her students don’t say nice things about her or pay her more money than John is making?
But the fact that reality doesn’t disentangle this way is, in a sense, the whole point—it’s not a coincidence that things are the way they are.
If we get far enough to have external real-world standards like those you’re describing, then yes we can toss the “secret identity” thing out the window, so long as we don’t have the problem of most good students wanting only to become rationality instructors themselves as opposed to going into other careers (but a teacher who raised their students this way would suffer on the ‘accomplished students’ metric, etc.). But on the other hand I still suspect that the instructors with secret identities would be revealed to do better.
I’ve never seen anything from Eliezer that proves that he’s done anything at all of value except be a rationality teacher. I know of two general criteria by which to judge someone’s output in a field that I am not a part of:
1) Academic prestige (degrees, publications, etc.) and 2) Economic output (making things that people will pay money for).
Eliezer’s institution doesn’t sell anything, so he’s a loss on part 2. He doesn’t have a Ph.D. or any academic papers I can find, so he’s a loss on part 1 as well. Can SIAI demonstrate that it’s done anything except beg for money, put up a nice-looking website, organize some symposiums, and write some very good essays?
To be honest, I’d say that his output matches the job description of “philosopher” more than “engineer” or “scientist”. Not that there’s anything wrong with that. Many works that fall broadly under the rubric of philosophy have been tremendously influential. For example, Adam Smith was a philosopher.
Eliezer seems to have talents both for seeing through confusion (and its cousin, bullshit) and for being able to explain complicated things in ways that people can understand. In other words, he’d be an amazing university professor. I just haven’t seen him prove that he can do anything else.
Yes—in fact, the only thing that leads me to suspect that EY and SIAI are doing anything worth doing is the quality of EY’s writings on rationality.
EY has a lengthy article in this volume if that counts as academic.
As has been said, being a theoretician seems distinct enough from teaching that it should count as a day job. I still view Eliezer as more of a teacher than a theoretician, but I don’t think Eliezer is saying teachers have to be completely divorced from their subject in their day job to avoid affective death spirals.
Right. Our difference of opinion here is clearly nontrivial. I’ll put it on the list of things to write posts about.
Are you saying that teachers who don’t externally practice the thing they’re teaching won’t make good teachers? Or that they’re not worthy of respect at all? If the former, I agree with Yvain and others that we have better metrics for determining teacher quality. If the latter, I’m not sure why this would be the case. The comparison to literary critics doesn’t answer that question; it just accesses our assumed cached thoughts about literary critics. What’s the problem with people wanting to be literary critics?
The post proposes a required formula for respect, but it never explains what quantity that formula intends to maximize. What’s the goal here?
Is the point about respect for instructors supposed to generalize to instructors of disciplines other than rationality?
If so, what do you make of Nadia Boulanger? Her accomplishments as a musician (or otherwise) are unimpressive relative to those of her students and peers, and yet she is regarded as one of the greatest music teachers ever, and is accorded correspondingly deep respect by music historians, composers, etc. Are they all wrong to respect her so much, or does it not apply to music or this case?
It seems to me that a better formula for determining respect would somehow reflect the respect given to her students which they say is significantly due to her influence as a teacher. For example, if Aaron Copland singles her out as an amazing teacher who profoundly affected his musical life & education, then she deserves some of the respect given to him. And likewise for her many other students who went on to do great things.
There seems to be an implicit underlying belief in this post that teaching is not (or should not be) an end in and of itself, or at least not a worthy one. I think Boulanger and teachers of her caliber show that that’s just not the case.
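A minimal sketch of the attribution rule I have in mind (all names, weights, and numbers here are hypothetical):

    # Credit a teacher with a share of each student's respect, weighted by how
    # much of their success the student publicly attributes to the teacher.
    def teacher_respect(students):
        # students: list of (student_respect, fraction_attributed_to_teacher)
        return sum(respect * attribution for respect, attribution in students)

    # E.g., a Copland-sized figure crediting the teacher with 30% of his development:
    print(teacher_respect([(100.0, 0.3), (80.0, 0.2), (40.0, 0.5)]))  # 66.0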
I was thinking about that—a clause for respecting teachers with great students, should they have them. It still gives people the right incentives.
You’ve got things the wrong way round. It is the quality of the teacher’s students that tells us whether we wish to study under her. The teacher’s own achievements are a proxy, which we resort to because we need to decide now; we cannot wait to see the longer-term effects on last year’s students.
Another proxy is the success of the teacher in getting her students through examinations. This is a proxy because we don’t really want the certificate; we want the achievement that we think it heralds. We can assess the strength of this proxy by checking whether success in the examinations really does herald success in real life.
I agree with the conclusion of the original post but find the argument for it defective. The key omission is that we don’t have a tradition of rationality dojos, so we do not yet have access to records of whose pupils went on to greatness. Nor do we have records that would validate an examination system.
Notice that the problems of timing are inherent. The first pupils, who went on to real-world success, prove their teacher’s skill in an obvious way, but how did they choose their teacher? Presumably they took a risk, relying on a proxy that was available in time for the forced choice they faced.
Yes, precisely. The issue isn’t how we can become a better teacher, or find one to study under. The first question, that MUST be asked before all others, is: what does it mean to be a good teacher, and how can we define the relevant differences between teachers?
Once that question has an answer, we can begin searching for ways to make ourselves better match that defined meaning, or for traits in others that signal they’re likely to match that definition well.
Concepts like “has students that will accomplish great things” aren’t useful for a variety of reasons. And once someone has developed a reputation for being a great teacher, they’re likely to attract students with a lot of potential (assuming there are working metrics for potential that are actually consulted, as opposed to rich people simply buying a place for their talentless children). The reputation alone would result in the teacher’s students doing better than most.
Evaluating the teacher requires that we have some way of determining, or at least guessing, what a student’s performance would have been without the teaching.
Isn’t teaching itself a skill? So what if she was a bad musician? She was obviously a first-rate teacher (independent of the subject that she taught).
As long as we can determine how much of their students’ success is attributable to the teacher, it seems reasonable. It seems we could make those sorts of judgments by:
• comparing the success of the students of a teacher with the success of students of other teachers having equally talented students (e.g., compare Boulanger’s students’ success with that of students of contemporaneous Fontainebleau teachers); or
• when successful people have typically studied with many different teachers, asking them how much of their success they attribute to the influence of their various teachers.
I do find cases like this surprising, though. What was it that she was able to teach to her students that she could not put to use herself?
She gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback, she functions more as a motivator than as a teacher. Her skill is teaching; it’s only happenstance that she teaches music. Had she taught shoe polishing or finger painting, she would have produced the best shoe polishers and the most skilled finger painters.
Perhaps she doesn’t have many complex skills but has strong fundamentals (think Tim Duncan of the NBA Spurs). She might make her students practice the fundamentals which will allow them to do more complex work as they get older.
Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it’s done. Imagine a five-foot-tall Jewish guy who loves basketball. He’s not gonna make the NBA. It’s simply not gonna happen. However, he might understand the game better than many NBA players. Likewise, he might be the best basketball coach in the world even though his athleticism (and hence his basketball-playing skill) is less than that of NBA players. Likewise, the teacher might have had a strong theoretical understanding but not the ability to put her theoretical knowledge into practice.
The first thing that comes to mind is maybe she’s able to teach students how to practice more in their youth than she did.
That’d work at least.
I was thinking of Nadia Boulanger, with Astor Piazzolla as the distinguished pupil. Piazzolla was trying to be a classical composer, but Boulanger said his classical music was lifeless; it was his tangos that had fire.
Perhaps the multi-talented young pupil faces perilous choices about where to focus his energies. Whether the older great teacher can warn effectively against the common errors probably depends on having a breadth of experience, perhaps having put in many years as a mediocre teacher, following up on pupils and noticing how things worked out. The teacher’s own youthful errors might be uninformative even if severe.
Yeah, definitely surprising, but genius in any form is surprising.
There is an essay here by a former student that gives a sense of how she taught. And Philip Glass describes her as the decisive influence on him in this article, which also talks about her teaching a little.
Here’s an interesting passage from Nadia Boulanger: A Life in Music:
It was Nadia’s manner rather than her materials that was unique. Her intensity, her emotional involvement with her students, her broad knowledge of music in general, and her ability to project her own passionate enthusiasm for each detail as well as its over-all form, were the qualities that made her extraordinary. Her electric personality brought a distinctiveness to everything that Nadia did. In this, lies what one reviewer called “the difference between good teaching and great teaching,” for in the latter “the student feels that the teaching enacts an extraordinarily intimate and demanding relation between the teacher and his subject, a relation such that the teacher’s sense of his subject is indistinguishable from his sense of life.”
Has this requirement been successfully implemented for CFAR instructors?
It’s not a formal requirement, but I’m personally impressed by the prior accomplishments of some of the CFAR staff and some of the careers of CFAR consultants. And there is at least one person who somehow has a non-CFAR career while also working full-time for CFAR.
EDIT: Read about them here.
In the boost phase of a newly launched idea, it’s actually a really good idea to train teachers. That gives you exponential growth.
It’s a fail if the discipline gets into a death spiral about teaching teachers to teach teachers, iff the recursion lacks a termination condition. (Suitable conditions left as an exercise.)
Even in the cruise phase, an idea needs a teacher replacement rate of >= 1.
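A back-of-the-envelope sketch of that replacement-rate point (all parameters invented for illustration):

    # Toy generational model: each teacher trains some students, a fraction of
    # whom become teachers themselves; the replacement rate is the product.
    def simulate(teachers, students_per_teacher, teach_fraction, generations):
        for _ in range(generations):
            students = teachers * students_per_teacher
            teachers = students * teach_fraction
        return teachers

    print(simulate(1, 10, 0.2, 5))    # rate 2.0: boost phase, 1 -> 32 teachers
    print(simulate(32, 10, 0.05, 5))  # rate 0.5: the art dwindles, 32 -> 1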
In cruise phase, it’s a fail if every student wants to teach. But I don’t see how it’s a fail if some students want to teach and proceed to do so. Nor do I see how it’s a fail if they end up being most of the teachers.
The intersection of two very rare categories of people is nobody.
Aren’t you the same guy who, just a few days ago, pointed out how much better a trained professional is at his job than some volunteer? Teaching is a nontrivial skill.
Most memes that grow exponentially do not manage to stay sane. What’s best for the exponential growth of a meme (as though it were a bacterium with no identity other than itself) may not be best for the cultivation of a cause. Exponential growth is good, I agree, but the fastest possible exponential growth… seems more doubtful.
Exponential growth around friends giving friends copies of a fixed book, seems safer and shallower; the book won’t change as it’s passed around. The building-up of rationality dojos is a longer, slower endeavor which should be more carefully gotten right.
Plus you can look for students who’ve already done something interesting with their lives and train them to be the teachers.
Hmm. I’ll concede a measured rate of startup, although I can’t offhand think of any meme that got deranged by fast growth (and was sane to begin with).
Perhaps adopt the martial-arts idea of not giving out too many certificates-to-teach, and use lineage to check accreditation?
No.
You’re wasting huge amounts of optimization power, here, in two different ways. Firstly, you’re saying that no one should focus his efforts on becoming a good rationality instructor, that any work he does on that is entirely meaningless unless he is at least as good at something else. Secondly, you’re saying that no one should focus his efforts on instructing people in rationality, that they should spend most of their time on whatever other thing it is that makes them impressive. If you have someone who is naturally better at instructing people in rationality than in anything else, you are wasting most of the surplus you could have gained from him in these two ways.
I’m sympathetic to your concern, but surely there must be a way we can avoid throwing out the baby with the bathwater?
Well… go ahead and suggest a way to avoid throwing out the baby with the bathwater? I mean, we’re talking about some pretty scary bathwater here.
Personally I suspect that the bathwater only really gets dirty when you are teaching something that is essentially useless in modern society, like martial arts or literary criticism. Most people who study, say, engineering don’t do so in the hopes of becoming teachers of engineering.
Now you might say that this is because teachers of engineering are expected to also do research, but firstly that doesn’t explain the disparity between fields, and secondly, I don’t think that the example of tertiary education is one to aspire to in this way. I seem to recall you are an autodidact, so you may not have the same trained gut reaction I do, but I have seen too many people who did not have the skill of teaching but were good researchers teach horribly, and I remember too well one heartbreaking example of an excellent teacher denied tenure because the administrators felt his research was not up to snuff, to want to optimize rationality teachers on any basis other than their ability to teach rationality.
Martial arts seem to get an unreasonably bad rep on LW. It’s at least as useful as painting or writing fiction, and I consider those to be fine personal development endeavours.
While I think martial arts are pretty useful by hobby standards (although their usefulness is broad enough that they might not be optimal for specialists in several fields), several historical and cultural factors in their practice have combined to create an unusually fertile environment for certain kinds of irrationality.
First, they’re hard to verify: what works in point sparring might not work in full-contact sparring, and neither one builds quite the same skillset that’s useful for, say, security work, or for street-level self-defense, or for warfare. It’s difficult to model most of the final applications, both because they entail an unacceptably high risk of serious injury in training and because they involve psychological factors that don’t generally kick in on the mat.
Second, they’re all facets of a field that’s too broad to master in its entirety in a human lifetime. A serious amateur student can, over several years, develop a good working knowledge of grappling, or of aikido-style body dynamics, or empty-hand striking, or one or two weapons. The same student cannot build all of the above up to an acceptable level of competence: even becoming sort of okay at the entire spectrum of combat is a full-time job. (Many martial arts claim to cover all aspects of fighting, but they’re wrong.)
Despite this, though, almost every martial art claims to do the best job of teaching practical fighting for some value of “practical”, and every martial art takes a lot of pride in its techniques. As a consequence, there’s a lot of posturing going on between nearly incommensurate systems. There have been various attempts at comparing them anyway (MMA is the most popular modern framework): they’re better than nothing, but in practice usually come out too context-dependent to be very useful from a research perspective.
On top of that, there’s a tradition of secrecy, especially in older systems (koryu, in Japanese martial arts parlance). Until well after WWII, it was uncommon for any system to open its doors to ethnic outsiders, often even to familial outsiders. Until the Eighties it was uncommon for systems to welcome cross-training in their students. Many still require instructors to have trained in only the system they teach. This is intended to prevent memetic cross-contamination but in practice serves to foster the wide range of biases that come with isolation and hierarchy: you can make almost anything work on your own students, as Eliezer’s memorable example about ki powers demonstrates. (If you’re feeling uncharitable, you could probably make an analogy here to the common cultic practice of isolation.)
Finally, a lot of selection pressure’s eased off the martial arts in the modern era. During the Sengoku era, for example, Japanese martial arts were clannish and highly secretive, but it didn’t matter too much: two hundred years of warfare made it very clear which taught viable techniques, if only by extinguishing poorer schools. Most other martial cultures were in a position to gain similar feedback, if less intensely. In the 20th century, though, martial arts grew more or less disconnected from martial applications: most militaries still teach simplified systems, but martial arts skill rarely decides engagements, and when it does it’s in a narrower range of situations. Same goes for all the civilian jobs where martial arts are useful: there’s feedback, but it’s narrow, uncommon, and hyperspecialized.
I think there are ways around all of these problems, but no arts that I know of have done a very good job of engaging them systematically (though at least the more modern intersectional martial arts are trying—JKD comes to mind). This actually wouldn’t be a bad exercise in large-scale instrumental optimization, except that it requires a pool of talent that at present doesn’t exist in any organized way.
(Disclaimer: as is probably obvious by now, I am a martial artist.)
Thanks for a thoughtful reply!
You could say much the same about painting/dancing/cooking/writing: There are many different sub-arts; it’s hard to master all of them; practitioners can become unduly wedded to a single style; there are examples of styles that have “gone bonkers”; there are many factors in place that hurt the rationality of practitioners.
These are all valid concerns, but I don’t think they’re particularly problematic within martial arts in comparison to other hobbies.
You could say point 2 about those, but points 1 and 3 stand.
If you are halfway decent at painting/dancing/cooking/writing and think you’re pretty good, you are unlikely to get your face stove in the first time you try it seriously. This leads to your getting feedback and improving. You can watch serious, nothing-held-back demonstrations as public performances (or take them home to study, in the case of writing) for a nominal fee.
Really? I’ve always thought the opposite: that there’s a general sense on this site that martial arts are a discipline worthy of taking seriously, and of investing far more attention in than I would have thought they merited with respect to their applications to rationality. I may be very interested in martial arts, but in most of my social outlets I don’t have nearly as much of a sense of it being a shared interest.
Painting and writing fiction produce items that can then be enjoyed by many other people who are neither writers nor painters. Martial arts produces almost nothing, aside from an occasional sports event.
Fair point. But this depends on things starting out healthy so that they stay healthy.
[mistake] How about RatRespect1 = min(RatRespect0, sqrt(RatRespect0)^2 + (NonRatRespect0)^2)?
[edit] Confound you, Pythagoras! What I meant to say was...
RatRespect1 = min(RatRespect0, sqrt(RatRespect0 x NonRatRespect0))
There’s no sudden ceiling, but you still get wiped for neglecting the real world.
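For concreteness, a quick comparison of the hard cap against this geometric-mean version on some invented numbers:

    import math

    def hard_cap(rat, non_rat):
        return min(rat, non_rat)  # the post's original rule

    def geo_cap(rat, non_rat):
        return min(rat, math.sqrt(rat * non_rat))  # the proposal above

    for rat, non_rat in [(9.0, 1.0), (9.0, 4.0), (4.0, 9.0)]:
        print(rat, non_rat, hard_cap(rat, non_rat), round(geo_cap(rat, non_rat), 2))
    # At (9, 1): hard cap gives 1.0, geometric mean gives 3.0. No sudden
    # ceiling, but still a heavy penalty for neglecting the real world.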
Do you people actually think in terms of equations like this? Once you begin throwing in exponents, I think the metaphorical/illustrative value of expressing things in math drops off quickly.
Not very well in my case, it seems, my apologies. Exponents now thrown out again.
That wasn’t meant as a criticism of you specifically. I’ve just noticed that people on this site like to use equations to describe thought processes, some of which might be better communicated using everyday language. I’d argue Eliezer’s post is an even worse example—why not just say “the lesser of the two quantities” or something?
To be fair, for people who are used to thinking in math, pseudo-mathematical notation is as readable as English, with advantages of brevity and precision.
“People used to thinking in math” currently describes a large portion of users on this site. Use of gratuitous mathematical notation is likely to help keep it that way.
“Use of gratuitous mathematical notation is likely to help keep it that way.”
Is that desirable? (Not saying you’re implying it is.) The community could probably benefit from some smart humanities types.
I was actually trying to imply that it isn’t desirable, so yes, I agree fully.
First post, so I’ll be brief on my opinion. I would say “it depends”. To communicate between people and even to clarify one’s own thoughts, a formal language, with an appropriate lexicon and symbols, is a key facilitator.
As for desirability of audience, the About page says “Less Wrong is an online community for discussion of rationality”, with nothing about exclusivity. I would suggest that if a topic is of the sort that newbies and lay people would read, then English is better; if more for the theorists, then math is fine.
I personally find “min(A,B)” clearer than “the lesser of A and B”, but I’m on the autistic spectrum.
Oh. I thought that the use of min( ) here was immediately readable and transparent to me. The meaning of “the lesser of the two quantities” is less obvious, and the phrase is longer to say.
Hmmm... it’s a little awkward reading the math without TeX, but assuming all variables are real, I think the original (pre-edit) formula simplifies to RatRespect1 = RatRespect0.
I agree with this comment vociferously.
The upper bound isn’t a terrible idea, but it would, for example, knock E.T. Jaynes out of the running as a desirable rationality instructor, as the only unrelated competent activity I can find for him is the Jaynes–Cummings model of atomic evolution, which I have absolutely zero knowledge of.
Dude, what on Earth are you talking about. E. T. Jaynes was a Big Damn Polymath. I seem to also recall that in his later years he was well-paid for teaching oil companies how to predict where to drill, though that’s not mentioned in the biography (and wouldn’t rank as one of his most significant accomplishments anyway).
Not something I was aware of, but good to know.
I wasn’t aware of anything from before his career as an academic, 1982 onward. His Wikipedia article doesn’t mention anything but the atom thing. But he certainly set out to be a professor of rationality-topics.
Regardless of the merits of E. T. Jaynes, we should place the activity of a rationality instructor in a separate mental bucket than a rationality theoretician. I would say that making a significant original intellectual advance counts as a real accomplishment.
Well if we develop rationality tests, then you should rely on the teachers who help their students do better on tests. And if you can’t develop tests, then I don’t see why you’d think you had evidence that any particular person was good at teaching rationality. Relying on their ability to do something useful as a predictor of their ability to teach rationality seems nearly as bad as relying on their publication record, or their IQ, or wealth, etc. I say focus on developing tests.
(Blinks.)
I wonder if this idea comes as a shock because everyone was planning on becoming rationality instructors, i.e., I should have warned everyone about this much earlier?
Is it off-putting on some other level?
But I must also consider that it might really be that stupid. Damn, now I wish I knew the actual number of upvotes and downvotes!
I don’t think too many people are actually considering “rationality instructor” as a career path at this point—which reminds me—what exactly are your plans for this rationality dojo thing anyway? Is it just something you like to talk about, or something you plan to one day set up? Are you hoping people from Less Wrong will start the first ones, or that people from Less Wrong will be students in ones set up in some other way?
When you (or anyone) says “rationality dojo”, how literally is it meant? Is it specifically a physical meeting, rather than a web community? More literally, is this meant as a meeting of equals or an instructor with pupils? How much of the formalism of the dojo would you import? How would you change the relationship of sensei and pupil? I’m not so sure about wearing robes, and I draw the line at getting thwacked on the head with a stick.
I am keen to increase the number of rational people, but there are a great many means by which a thing passes from mind to mind, and I’m not sure a dojo would be the first model I’d reach for—have I missed a post where this model is set out in more detail?
I don’t think I can afford to divert my attention into setting one up, but I’ve heard others already discussing it, so it’s worth placing some Go stones around it.
Really? If it’s not too private, who’s been discussing it?
I don’t know if I’m part of who Eliezer heard, but I’m planning on trying to start a rationality training group on Saturdays in the SF bay area, for middle and high school students with exceptional mathematical ability. I want to create a community that thinks about thinking, considers which kinds of thinking work for particular tasks (e.g., scientific progress; making friends), and learns to think in ways that work. The reason I’m focusing on kids with exceptional mathematical ability is that I’m hoping some of them will go on to do the kind of careful science humanity needs, with the rationality to actually see what actually helps. The aim is not so much to teach rationality knowledge, since AFAICT the “art of human rationality” is mostly a network of plausible guesswork at this point, but to get people aiming, experimenting, measuring, and practicing in a community, sharing results, trying to figure out what works and actually trying the best ideas (real practice; community resistance to akrasia). With some mundane math teaching mixed in.
As to “day job” credentials, I’ve had unusual success teaching mathematical thinking (does this count as “day job”? at least math-teaching success is measurable by, say, the students’ performance on calculus exams), bachelor’s degrees in math and “great books”, and two or three years’ experience doing scientific research in various contexts. I don’t know if this would put me above or below Eliezer’s suggested bar to a stranger.
How has this project been going?
You’re focusing on easy-to-verify credentials of the sort you’d list on a resume to be hired by some skeptical HR person. You have a secret identity.
My secret identity just says that some combination of you and Michael Vassar thought I was worth taking a chance on. I was trying to do some analog of cross-validation, where we ask whether someone who was basically following your procedure but who didn’t know me or have particular faith in your or Michael Vassar’s judgment, would think it okay for me to try teaching. I was figuring that your focus on day job impressiveness was an attempt to get away from handed-down lineages of legitimacy / “true to the Real Teachings”ness, which Objectivism or martial arts traditions or religions sometimes degenerate into.
More of an attempt to make sure that people write instead of just doing literary criticism.
Got it. Sorry; I think I rounded you to the nearest cliche, maybe because of the emotional reaction you suggested some of us might be having.
FWIW, part of my own emotional reaction to it did come from that, though I noticed and have my reaction tagged as an emotionally contaminated thing to be wary of.
I had hoped to become a rationality instructor of some stripe, but with an apprentice period as an experimental physicist, in order to give concreteness to my teaching.
So, no particular degree of shock here.
Speaking only for myself—I am here, consciously and explicitly, to learn rationality for its own benefits. I have no overwhelming interest in teaching others and, all else equal, have other things I would prefer to be doing with my life.
I didn’t vote either way on the post because I am ambivalent about it. It felt underdeveloped compared to your usual material, and to some extent it seems like you’re getting ahead of yourself on this “teaching rationality” thing—the current understanding of applied rationality in this community doesn’t seem to justify raising the concern yet.
Perhaps the idea would have been better presented in the context of one of your parables/short stories/&c.?
Huh? There is no way that knowledge of astronomy could possibly have told him about the olive crop. It seems more likely that his useful knowledge was of economics and business, but that he made up a story about astronomy to impress his peers.
This is a good example of what I meant over in the evolutionary psychology thread; coming up with evolutionary psychology explanations is good practice for avoiding succumbing to “arguments from incredulity”, as I like to call this sort of comment.
“Oh, I couldn’t think of how astronomy could possibly be useful in weather or crop forecasting, so I’ll just assume the stories about Thales are a lie.”
I’ll leave this here for you.
“Forecasting Andean rainfall and crop yield from the influence of El Niño on Pleiades visibility”, Nature 403, 68–71 (6 January 2000):
I read it as an injunction to focus on fixing my own rationality and making best use of it, and not to think about how to help other people be more rational. That runs entirely contrary to my own hopes for making the world a better place. If all you mean is “spread rationality, but keep the day job” then absolutely, I’m keeping the day job, it pays better.
The idea of a rationality pressure group has crossed my mind, but if I were to work for such a thing it would not be in the role of instructor, and I could probably do more for such an organisation by keeping the day job and giving it money in any case.
It’s an idea that is common among writers (with respect to writing instructors). Not the secret identity part, though.
Eliezer’s idea is a bit different, because success in any area of life should indicate rationality. I don’t understand the secret identity part. If one identity is secret, how are students supposed to know whether to respect the instructor for accomplishments under his/her non-instructor identity?
(If you’re a rationality instructor or practitioner, having a secret identity is probably a good idea anyway, so you’re not the first against the wall when the religious-Luddite anti-transhuman pogrom begins.)
He’s joking about the secret part—think “day job”
The idea never occurred to me—not when I was sincerely involved in martial arts, and not since becoming sincerely dedicated to rationality. I’d be quite surprised if it has occurred to more than a few people here.
Perhaps few readers are thinking about becoming rationality instructors, so they feel it doesn’t apply to them. That would likely diminish their estimation of its importance.
Some thoughts from my experience in a martial arts dojo:
We avoid lots of failure modes by making sure (as far as reasonably possible) that people are there to train first and everything else second. One consequence of this is that we don’t attach a whole lot of our progress to any particular instructor; we’re blessed with a number of people who are really good at aikido, and we learn from all of them, and from each other.
On setting the bar too high for instructors: Most martial arts rely on a hierarchy of instructors, where the average dojo head is a reasonably normal person who is expert but not necessarily elite at the discipline. The “famous” people in the art travel around and deliver seminars to everybody else. Dojo head type people will also travel to attend more seminars than the average junior student, for obvious reasons.
All sorts of human enterprises work this way (although the formality of the hierarchy varies widely); everything from yoga to religions to Linux Users Groups. It’s a good system.
How much of what you’re trying to do could be accomplished by largely tabooing the term “rationality” in rationality dojos, and having the community be really really attached to that tabooing? So that the dojos are for “finding ways of thinking that actually bring accurate beliefs” and “finding ways of thinking that actually help people reach their goals”, with mostly no mention of a term like “rationality” that’s easy to reify? If we talked like that, actual and prospective students and teachers might naturally look outward, to the evidence that various thinking processes were or weren’t helping. Such evidence would be found partly in terms of the actual “day job” accomplishments (or lack of accomplishments) of the teacher, and also in terms of “day job” accomplishments of the students after vs. before joining the group, and also in terms of any measures that a group of active, experimentally minded rationality students could think up of whether they were actually becoming better at forming accurate beliefs.
You can taboo a word, or even a concept, but you can’t taboo a meaningful regularity and pretend that it’s not there. The problem with belief-in-belief-in-rationality is the same as with other lost concepts: it’s one of the essential lessons to learn, not something to shoo away. If you can’t attain even this, what aspirations?
I’m not proposing we pretend there’s no regularity to “types of thinking that help us form accurate beliefs, across domains”. Not at all. I’m proposing we stay attentive to the evidence as to what those types of thinking actually are and aren’t, by spelling out our full goal as much as possible. If we use the term “rationality” as a shorthand instead of spelling out that we’re after “types of thinking that actually help us form accurate beliefs”, it’s easy for the term “rationality” to become un-glued from the goal. So that “rationality” gets glued to “that thing we do in the rationality dojo” or to “whatever the Great Teacher said” or to “anything that sets me apart from others and lets me feel superior, like using long sentences and being socially awkward”, instead of being a term for those meaningful regularities we’re actually trying to study (the meaningful regularities in thinking methods that actually work).
Well, yes, I agree that a rationality dojo should talk about lost purposes, about the trouble with belief in belief in general, and about what exactly goes wrong when people speak overmuch of “rationality” instead of keeping their eyes on the prize. Is this supposed to be in tension with the suggestion that we, as a community, build a strong norm against talking overmuch of “rationality” and for, instead, speaking of “kinds of thinking that help us form accurate beliefs / achieve our goals”? I’m imagining that it’s precisely by having a really clear view of the standard “lost purposes” failure modes, and of their application to “rationality” learning, that we can maintain such a norm.
But for some reason we are talking about a specific failure mode, one that is not necessarily the single best case to demonstrate the general principles, and one that by itself is clearly insufficient. Investing disproportionately in this single case must have additional purposes.
I can see two goals:
• Safeguarding the movement in its early stages, when it’s easy to start off in a wrong direction.
• Acting as a safety vent, compensating for the difficulty of certifying the sanity of the movement.
What work is the word “secret” doing in this post? It seems to me that you’re talking about public identities, ones visible to outsiders, ones that potential students (not yet enrolled in the Conspiracy) can look at to evaluate would-be instructors. Are you using the phrase “secret identities” merely because it sounds cool?
Ditto with “conspiracy.” I’d argue that giving LW the language and trappings of a 12-year-old boys’ club is ultimately detrimental to its mission, but it looks like I’m in the minority.
The business about the Bayesian Conspiracy is, I think, more an in-joke than anything else. Eliezer’s written various bits of fiction set in a future world featuring an actual “Bayesian Conspiracy”, and he’s on record as saying that there’s something to be said for turning things like science and rationality into quasi-mystery-religions (though I expect he’d hate that way of putting it) -- but he’s not suggesting that we actually should, nor trying to do so.
Dunno whether such things help or hinder the mission of LW. I think it would be difficult to tell.
It just seems at odds with the scientific ethos of cutting out the bullshit whenever possible. Instead, Eliezer seems bent on injecting bullshit back into the mix, which I’d argue comes at the expense of clarity, precision, and credibility. However, I do realize it’s a calculated decision intended to give normally dry ideas more memetic potential, and I’m not in a position to say the trade-off definitely isn’t worth it.
Deliberately so. The original OB posts started with it as a thought experiment, “what if we kept science secret, so people would appreciate its Awesome Mysteries?”
Despite that, I think that whole style is a tremendous mistake. It’s an interesting thought experiment, but we should be clear that it runs completely counter to the things that actually bring about accurate results.
Ironic, rather. I considered “Mandatory Alternate Lives” but “alternate life” simply doesn’t have the phrase-recognition impact of “secret identity”. There is no phrase that means exactly what I want; so I use “secret identity” in an obviously inappropriate way.
I think the phrase you want is “day job”.
If anything, it’s the teaching that ought to be under a nom de plume. I’ve heard more than once the complaint about universities that they care more about hiring an impressive name than about whether that person can teach.
How much of Objectivism’s failure was due to its teachers not having developed sufficient awesomeness elsewhere, and how much was due to the fact that it, say, tried to claim that it had the One True Method of Thought, instead of fostering an environment where all teachings were conjectural, teachers were facilitators of investigation instead of handers-down of The Answer, and everyone together tried to figure out what worked?
I mean, to what extent can we avoid similar failure modes by fostering a culture that doesn’t reify anyone’s teachings, but that instead tries to foster a culture of experimenting, thinking up new strategies, pooling data, and asking how we can tell what does and doesn’t work?
I do not think this analogy fits. Martial arts is a self-contained bubble. What else is there to do but teach? To use a variation on the analogy, if someone being trained in the United States Marine Corps were given the question of what a truly dedicated student of the USMC were to become, they would probably answer along the lines of someone who kills things and doesn’t die while doing it.
(Minor point) Martial arts do have tournaments and the like, so I suppose that is an alternate path. There is an inherent time-limit on those activities because of the human aging process, however, and after you retire from fighting, what else is there to do but teach what you learned?
This second analogy is in a world where there is something more to do than teach. Choosing not-writing over writing is a failure mode in the literary arts but not for literary criticism. Literary criticism is too specific: if you study literary criticism, it naturally follows that you will become a literary critic. (Minor point) I could be completely misunderstanding what you meant by “writing anything”.
Also, in a sense you digress from the pattern in the first analogy. Being a student of martial arts is to teaching martial arts as being a student of literary criticism is to teaching literary criticism.
In my opinion, the lesson to be learned here is to study something that can be applied to the real world in a form other than simply teaching other people what you learned. (Someone else mentioned something similar to this in the comments, but I cannot find it again via skimming.)
All that being said, this statement does loosely follow:
But I imagine that a lot of people who desire to eventually teach will do so after a full life of other activities. I also imagine this is a decent way to keep generational bias out of a system of education, since the next generation of students will likely spawn new variations and having those variations inserted back into the system by a “newer” instructor can encourage growth. Having the same ol’ instructors around limits that somewhat.
The computer science analogy of this would be classes taught by old fogies who still think of programming as playing with decks of punch cards. Even if they move beyond that in their knowledge, their mind is potentially tainted by what-once-was.
I think that academia is also subject to this mode of failure. As an exercise, try to think of great literary figures who were also professors of literature at major universities. Off the top of my head, I can think of exactly one: Vladimir Nabokov, and he was notably contemptuous of his colleagues. Can anyone think of any more?
Unsurprisingly, Paul Graham has some interesting thoughts on the subject (incidentally, I seem to be developing a reputation on another forum that I post on as the “obligatory Paul Graham link guy”):
In this case, what we should really worry about is developing good tests to distinguish good rationalists from the phonies, beyond just “runs a rationality dojo”. Obviously, being able to apply the principles of instrumental rationality in order to succeed at a field unrelated to rationality is one such test, but is it the only one? I think the issue warrants further study.
Obviously success in other realms is Bayesian evidence that someone would make a better rationality instructor. But as many others have argued, in this post Eliezer exaggerates the importance of this type of evidence.
I have a question: why are you panicking about this now? It’s not like we have a huge problem yet with too many teachers, or too many freshly founded schools.
So that, having written up my thoughts on the subject, I can vanish into an appropriately dark basement for 5 years and not find armies of deranged Objectivists when I peek out? I’m trying to write up now everything that needs to be written up, which includes a contingency in case The Book takes off (should it be written and sold).
The traditional fix is to anoint some disciples to teach in your stead.
Yes, and my impression has been that anointed disciples are generally the instigators of things going subtly wrong in self-reinforcing ways. People with big, novel ideas are not necessarily good judges of character.
Especially if the interview lasts 5 minutes and the jackass winds up writing half your scripture.
(no, I don’t hold a grudge against Paul, what are you talking about?)
I’ve been expecting a deliberately daft post from Eliezer Yudkowsky and/or Robin Hanson to see whether we vote them up just based upon status.
I think this is it.
Eliezer has been very clear on OB that he doesn’t write things with the intention of covertly testing or manipulating his audience. (Of course, anyone who did test or manipulate his audience might say the same thing...)
And, of course, they wouldn’t even admit to it afterwards, in all likelihood!
If I were going to do this, I would write something that flattered my audience—this does the very opposite.
Besides which, we know that EY is voted down based on status—there was a discussion of it in the March open thread.
Makes sense, though I will quibble with your opening line. What you say about martial arts dojos was probably true up until about twenty years ago, but today I suspect a sufficiently dedicated martial arts student is in fact dreaming of becoming a champion MMA fighter.
And you know, now that I think about it, even twenty years ago, I’m not sure anyone was dreaming of becoming a dojo owner. That was just what they could practically achieve. But they were dreaming of becoming a Dark Lord:
I guess the failure mode that you’re concerned with is a slow dilution because errors creep in with each successive generation and there’s no external correction.
I think that the way we currently prevent this in our scientific efforts is to have both a research and a teaching community. The research community is structured to maximise the chances of weeding out incorrect ideas. This community then trains the teachers.
The benefits of this are that you get the people who are best at communicating doing the teaching and the people who are the best at research doing research.
Is it possible that, having taught yourself, you haven’t so directly experienced that there’s not necessarily a correlation between a person’s understanding of a subject and their ability to teach it?
Hm. Arguably I should only be worried about fast dilution rather than slow dilution. But I’m also worried that the community will grow more slowly if it’s inward-looking, and hope for faster growth if it’s involved with the outside world.
Entirely possible. But I’m not sure I have so much faith in the system you describe, either. The most powerful textbooks and papers from which I get my oomph are usually not by people who are solely teachers—though I haven’t been on the lookout for exceptions, and I should be.
Er, I thought the difference between religious and scientific teachings was that scientific teachings didn’t have to worry about dilution? It seems like you put a high probability on this community disappearing into a death spiral of some sort without you—I would have thought we should worry more that we’re already in one which we haven’t picked up on.
More of a difference between things that are hard vs. easy to teach and measure. Businesses have the same problem with a great CEO trying to hire great employees, dilution of corporate culture, etc. - they have highly quantifiable output at the end of the day, but in the middle of the day and the middle steps of the process, it’s not as easy to measure.
I anticipate a beginning period extending for at least several years when we don’t have good metrics because we’re still trying to develop them.
I think that you can legitimately worry about both for good reasons.
Fast growth is something to strive for but I think it will require that our best communicators are out there. Are you concerned that rationality teachers without secret lives won’t be inspiring enough to convert people or that they’ll get things wrong and head into death spirals?
From a personal perspective, I don’t have that much interest in being a rationality teacher. I want to use rationality as a tool to make the greatest success of my life. But I also find it fascinating and, in an ideal world, would stay in touch with a ‘rational community’ as both a guard against veering off into a solo death spiral and as a subject of intellectual interest. I’m sure there must be other people like me who are more accomplished and could give inspiring lectures on how rationality helped them in their chosen profession. That would go some way to covering the inspiration angle.
As an aside, I appreciate why you care about this; I’m always a bit suspicious of self-help gurus whose only measurable success is in the self-help theory they promote. I wonder whether I’m selecting for people who effectively sell advice rather than effectively use advice.
The mini-intro to this post on the craft and community sequence page says that it was not well received. But the requirements that this write-up recommends really act as a beautiful safeguard against becoming pedantic. If I hadn’t read this page quite early (before I got past the 25% mark on the sequences), I doubt I would have stopped myself from falling into a happy death spiral (I honestly still really struggle with that one all the time).
It’s really hard for me even now to “not speak over much of the way” (though, I mostly think it to myself, not too many friends are into this kind of thing). But knowing how important that is, certainly helps.
Update: I’m over it now. :D
If you have “something to protect”, if your desire to be rational is driven by something outside of itself, what is the point of having a secret identity? If each student has that something, each student has a reason to learn to be rational—outside of having their own rationality dojo someday—and we manage to dodge that particular failure mode. Is having a secret identity a particular way we could guarantee that each rationality instructor has “something to protect”?
It’s very easy to believe that you’re being driven by something outside yourself, while primarily being driven by self-image. It’s also very easy to incorrectly believe this about someone else.
Sometimes I wonder if the only people who aren’t driven primarily by self-image/status-seeking are sociopaths (the closest human analogue of UFAI).
Sociopaths care a lot about status, and the most extreme sociopaths respond to attempts to reduce their status with violence. I strongly suggest Jon Ronson’s “The Psychopath Test” for a highly informative and amusing introduction to psychopathy/sociopathy and its symptoms.
My understanding of sociopaths makes this seem like approximately the opposite of true. It is the drives other than seeking self-image and status that are under-functioning in sociopaths.
What then do you call someone like the Joker from Batman—someone who cares not at all how they fit into or are perceived by human society, except as instrumental to gaining whatever (non-human-relationship-based) thrill or fix they are after?
Fictional?
Beat me to the exact one word reply I was about to make!
The reply is a non-sequitur, because even if one accepted the implied unlikely proposition that no such persons exist or ever have existed, the terminological question would remain.
I don’t think so: psychiatry has no need for terms that fail to refer. (On the other hand, psychiatry might have a term for something that doesn’t exist—because it once was thought to have existed.)
At the risk of stating the obvious: I did not intend to restrict the terminological question to psychiatry specifically.
But in any event: you could say the same thing about zoology. And yet we still have the word unicorn.
Unicorns were indeed once thought to have actually existed.
Your understanding of the “non-sequitur” fallacy is evidently flawed. You asked a question. The answer you got is not only a literally correct answer that follows from the question; it is, practically speaking, the most appropriate answer to a question that constitutes a rhetorical demand that the reader generalize from fictional evidence. It isn’t a non-sequitur.
But you want another answer as well? Let’s try:
This question does not make sense. The Joker isn’t someone who doesn’t care how they are perceived. He is obsessed with his perception to the extent that he, well, dresses up as the freaking Joker, and all of his schemes prioritize displaying the desired image over pragmatic achievement of whatever end he is seeking. No, he cares a hell of a lot about status and perception and chooses to seek infamy rather than adoration.
Thrill-seeking fix? That’s a symptom of psychiatric problems for sure, but not particularly of sociopathy.
Some labels that could be applied to The Joker: Bipolar, Schizophrenic, Antisocial Personality Disorder. Sociopath doesn’t really capture him but could be added as an adjunct to one (probably two) of those.
Charitable interpretation of komponisto’s comment: ‘If a human didn’t care about social status except instrumentally, what would be the psychiatric classification for them?’ (Charitable interpretation of nshepperd’s comment: ‘Outside of fiction, such people are so vanishingly rare that it’d be pointless to introduce a word for them.’)
I’m afraid the first interpretation is incompatible with this comment (because the Joker reference conveys significant information). Actually, this does qualify as a charitable interpretation of something kompo made elsewhere (a grand-niece comment or something). This distinction matters primarily inasmuch as it means you have given a highly uncharitable interpretation of nshepperd’s comment. By simple substitution it would mean you interpret him as saying:
Rather than being clearly correct nshepperd becomes probably incorrect. Many (or most) people with autism could fit that description for a start.
It was not intended to do so; army1987′s paraphrase is correct.
The thought in my original comment would have been better expressed as: “Sometimes I wonder if the only people who aren’t motivated by status are antisocial.”
This intent does not make the paraphrase correct, even within the scope of ‘charitable’. More to the point, it does not prevent the paraphrase of nshepperd’s comment from being uncharitable. Army1987 put words in nshepperd’s mouth that are probably wrong rather than the obviously correct statement he actually made. He described this process as ‘charitable’. It is the reverse.
I was talking about my comment only; I make no claim that army1987′s paraphrase of nshepperd’s comment is likewise accurate.
(I’m not sure if I’m mistaken about the following interpretation and you instead mean that this particular intent doesn’t make the paraphrase (of komponisto’s comment) correct; in that case I’m not following what you are saying at all.)
I expect the intended meaning of “correct” was correspondence with intended meaning. In this sense, the intent is relevant, and it seems that the paraphrase does correspond to the intended meaning as described by komponisto in grandparent.
The grandparent is talking only about army1987′s paraphrase of komponisto’s comment, not about the paraphrase of nshepperd’s comment (which I agree is better described as “uncharitable”), so I’m not seeing the relevance of this statement in a reply to grandparent. (Disagree with some connotations of “obviously correct” in the quote, as the case is not that clear overall, even as it is pretty clear in one sense.)
The statement he actually made—taken literally and ignoring the poor example komponisto had chosen, as the “someone like” makes clear that it was intended to be just an example—is that the word he would use for “someone who cares not at all how they fit into or are perceived by human society, except as instrumental to gaining whatever (non-human-relationship-based) thrill or fix they are after” is “fictional”. How is that “obviously correct”?
There was no demand to “generalize” from fictional evidence, except to recognize the theoretical possibility of a sociopathic character who is indifferent to status concerns.
The intended question is whether such characters can exist and, if so, what their diagnosis would be. Your response “fictional” would be reasonable if you went on to say, “that’s a fiction; such a pathology doesn’t exist in the real world.” Or at least, “it’s atypical” or “it’s rare”; “sociopaths usually go for status.” Or, to go with your revised approach, “psychopaths go for status as they perceive it, but it doesn’t necessarily conform to what other people consider status.” (This approach risks depriving “status” of any meaning beyond “narcissistic gratification.”)
The answer, anyway, is that psychopaths have an exaggerated need to feel superior. When they fail at traditional status seeking, they shift their criteria away from what other people think. They have a sense of grandiosity, but this can have little to do with ordinary social status. Psychopaths are apt to be at both ends of the distribution with regard to seeking the ordinary markers of status.
Objectionable personal psychological interpretation removed at 2:38 p.m.
Thank you.
That’s an untenable interpretation of the written words, and plain rude. (Claiming to have) mind-read negative beliefs and motives in others and then declaring them publicly tends to be frowned upon. Certainly it is frowned upon by me.
The simplest minimally charitable interpretation of the remark seems to be saying that in a slightly snarky fashion.
In my humble opinion, snarkiness is a form of rudeness, and we should dispense with it here.
Moreover, since we have a politeness norm, it isn’t so clear that the interpretation you offer is charitable!
His behavior is not consistent with what is generally described as sociopathy. Again, Ronson’s book may help here.
So again, what would be the term for the (apparently distinct) phenomenon that I mean to refer to? Is this covered in Ronson’s book as well (presumably for purposes of contrast)?
I’m not sure that your phenomenon exists to any substantial extent in the real world. Also, keep in mind that categorizing mental illness is in general difficult. It isn’t that uncommon to have issues where one psychologist will diagnose someone as schizophrenic while another will say the same person is bipolar, etc., even as everyone agrees there’s something deeply wrong with them. So even if the people in your like-the-Joker category exist in some form, it may be that there isn’t any term for them.
Apparently distinct? What do you mean by that? “A coherent concept that can be described as part of a counterfactual reality?” Sure, it just isn’t something that is instantiated in an actual human being. That’s what medical science deals with, and that’s where the term ‘sociopath’ is used and defined.
You’re after “literary criticism”. Or, given the subject matter, TVTropes. The best term among them is probably Chaotic Evil. The Joker even gives it the tagline. Laughably Evil also works. That trick with the pencil is one of Heath Ledger’s best moments.
If it does happen to be, that would be a remarkable coincidence. It would be similar in nature to, but less extreme than, Ronson happening to make comparisons to Yudkowskian “Baby Eaters”.
I’m afraid in this comment and in your other you are allowing your debating skills to obscure any substantive discussion that my original comment might have prompted.
And yes, I fully anticipate that your wit is sharp enough to offer a retort to the effect that the comment in question deserved no better response. Since I don’t at this precise moment regard the topic as sufficiently interesting to justify the level of effort I am having to put into this conversation, I will simply note my disagreement and move on.
How dare you! You accused me of employing one of the most basic (and in my opinion the most dire) logical fallacies—when I most certainly didn’t, either denotatively or connotatively. Of course I’m going to reply. It’s personally offensive to me as well as false.
As for substance, you were given plenty—even if you didn’t like it. Even the second of the two comments you are trying to frame as merely clever and insubstantial tried to analyse the question from multiple angles, including challenging your description of the psychological traits of the fictional character in question and giving a best effort attempt to give you the diagnosis you were seeking:
I’m not a psychiatrist and The Joker isn’t real, but if I were and he were, those really are the kind of labels that my colleagues and I would be likely to apply, in various combinations. We wouldn’t all agree—even with actual humans our diagnoses often differ, and the Joker, being the creation of cartoon writers not remotely trying to be realistic, is harder to fit into a distinct category than most humans.
JoshuaZ gave you substance too, including a reference to resources that explain what sociopathy is actually like.
I’m reminded of the recent discussion of Eliezer’s rumored fully general mind-hacks. Even his proof that such a thing is impossible can’t prove anything except that that’s what he wants people to think. Having that much wit would be rather handy!
Sure, I think I’m clever but I don’t think that is your problem here. I think the problem is that you were mistaken about an aspect of reality, clung to an untenable position instead of updating, aggressively defended generalization from fictionalized evidence despite the local norms that deprecate it and, most importantly, made false accusations of fallacy use.
And I will note that you have chosen to do so in a manner that I evaluate as a rather significant interpersonal defection, while showing what seems to be a complete disregard for the standards of reasoning lesswrong is associated with. (My model of how to note disagreement “simply” includes fewer backhanded status attacks.)
I asked “what is the term for X?” and you (or, strictly, another commenter, whose comment you endorsed) replied “Fictional!”. You know perfectly well that that was nothing but a wisecrack reply. To state the freaking obvious, the meaning of “fictional” is “not real” and is thus much, much, broader than what I was looking for. For one thing, the term includes heroes as well as villains! There are plenty, plenty of fictional characters who do not meet the description I provided (a description which was not even intended to be taken literally, but merely as a pointer to the closest empirical cluster—as is the standard convention in ordinary human conversation, which this was intended as an instance of, because [newsflash!] the original comment was an offhand remark!)
And no, I did not in that instance mean to accuse you of a fallacy. The “non sequitur fallacy” is only one of two commonly used senses of the term “non sequitur”. The other is a remark which is inappropriate in the context. For example, if I say “The moon is made of green cheese”, and you, instead of saying “What?! No it isn’t”, say instead, “I wonder whether my uncle Harry would like to buy a new car”, that could be described as a “non sequitur”—an utterance which isn’t an appropriate way to follow the previous one. That is what I meant to accuse you of. Maybe it was an ill-considered accusation; maybe there is a better, more precise term for a wisecrack remark that superficially appears to answer the question but actually doesn’t, and is merely a rhetorical way to dismiss the question and cause the asker to lose status... but I didn’t think of it in time—I was too busy acting quickly to fend off what I expected would be an onslaught of upvotes for you (or, rather, your confederate), maybe even accompanied by downvotes for me.
Anybody trying to be charitable would realize, would assume, that the fictional character was cited only for the sake of convenience. Now, evidently we have a substantive disagreement about whether the traits in question are actually possessed by any real humans, but the reference was made before that disagreement was revealed. Had I known your and JoshuaZ’s beliefs about the matter, I never would have used a fictional example.
I don’t actually care, in this context, about what sociopathy is “actually like” if the word refers to a phenomenon other than the one I intended to refer to. If you and JoshuaZ believe the phenomenon I had in mind doesn’t exist, that would have been enough of a nontrivial point to make without going into the tangential subject of the separate, unrelated phenomenon that (apparently) receives the label in standard clinical discourse.
Well, I’m sorry to hear that—but I felt under attack from your comments, which seemed rhetorically excessive and out of proportion to my own. I was merely seeking to “tap out” without conceding anything.
To be sure, I expressed disagreement regarding the inappropriateness too, but the difference in interpretation regarding whether the ‘fallacy’ sense applies is interesting (well, slightly, anyhow). By my reading both senses apply. The first (“WTF? That’s completely irrelevant.”) is obviously there. While your question and nshepperd’s reply constitute a simple question-and-answer pair, they also convey implied arguments. That is, a rhetorical question with an answer that invalidates the implied argument of that question. If the answer is a non-sequitur (“Well, that was random”) then the implied argument is, in fact, fallacious reasoning.
Note that even if the question is interpreted to be nothing more than an expression of curiosity, the answer still represents an argument. Something along the lines of “The Joker is fictional. Psychiatric diagnosis categories are created for real people. There doesn’t need to be any psychiatric label that applies to a category represented by a fictional entity.” That implied argument would certainly be fallacious if the answer were irrelevant.
The above said I can certainly see why you could legitimately interpret the fallacy as not applying and I am naturally willing to retroactively change my claimed offense to the charge that I was saying things that make no sense in the context. ;)
My original charitable interpretation was abandoned when “fictional” was challenged as non-sequitur and the Joker was maintained over a series of comments. The most significant benefit-of-the-doubt destroyer was actually a reply to this comment by JoshuaZ that doesn’t seem to exist any more.
For what it is worth if you had said “Lex Luthor” I would have agreed that he (approximately) represents real sociopaths and even agreed that such people are the closest thing that we have to UFAI. It is only the details of what a sociopath actually is that I disagreed with.
That much I wouldn’t object to.
Do you think, in retrospect, it might have been better to give an answer like “I doubt that there are enough people in reality who fit your description for there to be an established term for the category.” instead of “fictional”? It seems like that would have gotten your point across more clearly and helped avoid a lot of the subsequent side-track into whether “fictional” is a sensible answer or not.
Absolutely not. Nshepperd’s is perhaps the most salient comment in the entire thread, closely followed by genius’s follow-up. This site would be a worse place had it not been made. I would of course not have expressed my agreement with nshepperd if I had predicted that it would receive a hostile response, but would most certainly have defended nshepperd if the ‘non-sequitur’ accusations were then directly leveled at him instead of me.
(Your answer is a good one too, and I would have liked to see that comment made in addition to the ‘fictional’ comment.)
I note that nshepperd’s “fictional” answer remains at +5 at the time of this comment, and this despite its being subjected to a tantrum, which can usually be expected to significantly lower the rating. This indicates that my continued endorsement of his reply is actually in line with consensus.
There are other things I would of course write differently in retrospect, and participants who I have learned to interact with differently (if at all) in the future—but the ‘fictional’ comment is most definitely not the place at which I would intervene to counterfactually change the past if I could.
If you’ll pardon me while I reciprocate with a similar question, why did you think it was a good idea to ask me the quoted question? By my estimation even casually following my comments for a month would be enough to predict with significant confidence that that kind of reply to a rhetorical question is something that I would reflectively endorse myself making or upvote from others. Most people could probably predict that even just having read the context in this thread. Of course I am going to disagree.
The aforementioned entirely predictable disagreement doesn’t mean that you can’t assert your position but it does mean that if you ask a direct question then my possible responses are ignore or retort. (Or, of course, lie, obfuscate or fog but let’s focus on the direct responses.) I know you don’t like (or, I suppose, your past self didn’t like) ‘ignore’ and replying with disagreement just amounts to extending the exact same pointless side-track that you wanted to avoid.
So I ask you, is the problem that you didn’t think it through or that my preferences regarding how questions like that should be responded to are insufficiently transparent? And this is a surprisingly sincere question. One of the many posts that I’d like to write but only have the rudimentary notes prepared for actually is “On Rhetorical Questions and the Response Thereto” (although I think I’d come up with a better name). And yes, it would include a section endorsing the kind of response you suggest here, too.
It’s the latter. In fact even after reading your comment I still don’t understand why you think “fictional” is a good reply in addition to my suggestion. You said
But I don’t understand why this is true. Can you explain more?
I guess this explains why you didn’t explain more why you still endorse “fictional”. Let me clarify: my preferences are that the original discussion didn’t get side-tracked, but once we’re already side-tracked, I don’t think a shorter side-track is necessarily better than a longer one, if for example the longer one is more likely to resolve the disagreement in a way that would prevent future side-tracks like it.
I was hoping that either 1) once you considered my alternative answer and my reasons for why it’s better, you would agree with me that it would have been a good idea to use that instead of “fictional”, in which case we would be able to communicate better in the future and avoid similar side-tracks, or 2) you would disagree and explain why, in a way that makes me realize I’ve been having some false beliefs or behaving suboptimally.
I get the feeling from this that you don’t like rhetorical questions, but I’m not sure if that’s the case, or if it is, why. Do you prefer that I had phrased my comment like the following? (Or let me know if I should just wait for your post to explain this.)
I’m glad to hear this, I much prefer it to David’s interpretation.
Perhaps, but it would be unwise. I have done far more explaining than is optimal already, and my model of observed social behavior in this context is not one that predicts reason to change minds. I.e., in a context where this kind of disingenuity is above −3, supplying reasons would be an error similar in kind to bringing a knife to a gun fight.
Note that this isn’t to say you are too mind-killed to communicate with; rather it is to say that systematic voting and replying based on already entrenched political affiliations would overwhelm any signal regarding the actual subject matter, leaving you with an inaccurate perception of how the subject matter is perceived in general.
I don’t mind them, they are appropriate from time to time. I am aware, however, that they are often given privileged status such that answering them directly in a way that doesn’t support the implied argument is sometimes considered ‘missing the point’ rather than rejecting it. Rhetorical questions are a powerful dark arts technique and don’t need additional support and encouragement when they fail.
Absolutely. Or, rather, if you had believed as David did that the answer to the question was pretty damn obviously “No”, then your original comment would be a far more personal act of aggression than this one would have been. But I don’t think this is because it was a rhetorical question, but rather because it would be a form that is more personal, presumptive, condescending and disingenuous. The only general problem with ‘rhetorical questions’ that would be pertinent is that they are often just as socially effective at supporting bullshit as supporting coherent positions. (The ‘bullshit’ here refers to the counterfactually-known-to-be-false assumption that I would agree with you if I reflected. It does not apply if either you were sincerely in doubt or you used the revised argument form).
I disagree. I think you probably have a bias in how you interpret voting patterns, and the situation is not as politicized as you think. However, I am more curious about what your reasons are than how others judge your reasons, so if you continue to worry about giving me an inaccurate perception of how the subject matter is perceived in general, please send me a PM with your reasons.
It seems to me that rhetorical questions are more of a dark arts technique when you’re making a speech and can use them to lead your audience to a desired conclusion. In a debate or discussion on the other hand, it seems easy to counter a rhetorical question by laying out the implied argument and then pointing out whatever flaws might exist in it. I think I often use rhetorical questions for hedging:
which seems like a pretty reasonable use.
I gave a specific example near the context of this quote and that comment is actually representative of the specific subset of rhetorical questioning that I hold in contempt. If you are right and I am incorrect about the merits of such comments then I would consider myself so fundamentally confused when reasoning about the quality of comments like those that anything I have to say about that topic really is almost worthless. There is a corollary there as well.
Debates are roughly equivalent to (or a subset of) speeches when it comes to rhetoric use. Discussions are different. Note that if rhetorical questions of the kind David describes (where the speaker believes the recipient almost certainly disagrees with the implied answer but the speaker wants to persuade the audience) are used in the context of “discussion”, then the speaker is being disingenuous and it is really a debate or a speech to the audience.
Yes. You should note that most of the grandparent consisted of saying that rhetorical questions per se aren’t something I oppose.
(Note that I do believe it is unwise for me to continue this conversation. While I succumbed to the temptation to respond to textual stimulus with this comment you may consider me weakly-to-moderately precommitted to not responding further.)
You may well be right about the merits of comments like that, but wrong about the situation being very political. Maybe people are refraining from voting comments like it down because they do not recognize their low merit, rather than because of political affiliations. On the other hand, if you are wrong about the quality of those comments, saying what you have to say is still not worthless because by doing so you may be convinced that you are wrong (e.g., if you explained your reasons fully then someone could perhaps point out a flaw in them that you missed before), which would be a benefit to yourself as well as to the LW community.
So I don’t think this is a good reason for stopping. What would be a good reason is if there’s a good chance you’ll actually collect and organize what you have to say into a post, in which case I’ll be patient and look forward to it.
Yes, I believe that they don’t recognize the low merit.
An expected utility calculation applies and my estimation is that I have erred on the side of too much explaining, not too little.
Another good reason would be that I find arguing with you about what posts should be made to be both fruitless and unpleasant. I find that the difference in preferences, assumptions and beliefs constitutes an inferential distance that does not seem to be successfully crossed—I don’t find I learn anything from your exhortations and don’t expect to convince you of anything either. Note that I applied rudimentary tact and mentioned only the contextual reason because no matter how many caveats I include it is always going to come across as more personal and rude than I intend to be (where that intent would be the minimum possible given significant disagreement).
Since this is something of a pattern, you should note that a tendency to make it difficult to end conversations with you gracefully makes it less practical to engage in such conversations in the first place. Let’s assume that you are right and the reason expressed for withdrawing was a bad one—for emphasis, let’s even assume that for some reason me ending a particular conversation is both epistemically and instrumentally irrational as well as immoral. Even in such a case, you choosing to push a frame where I should continue a conversation or should explain myself to you or others would still give me incentive to avoid the conversation if my foresight allows, to avoid the awkwardness and anticipated social cost.
What I am saying is that there is a tradeoff to making comments like the parent. It may achieve some goals that you could have (persuasion of someone regarding the wrongness of ending a particular conversation, perhaps) but comes with the cost of reducing the likelihood of future engagement. Whether that tradeoff is worth it depends on your preferences and what you are trying to achieve.
Ok, I think I figured it out. It seems rather obvious in retrospect and I’m not sure what took me so long.
You have a very different view of the current state of LW than I do. Whereas I see mostly reasonable efforts at truth seeking with only occasional forays into politics, you see a lot more social aggression and political fights. Whereas I think komponisto’s comment was at worst making an honestly mistaken point or asking a badly phrased question, you interpret it as dark arts and/or social aggression, and think that the appropriate response is a counterattack/punishment, which is good for LW because it would deter such aggression/dark arts from him and others in the future. I guess that from your perspective, “fictional” serves as such a counterattack/punishment, whereas my suggested answer would only blunt his attack but not deliver a counter-punch.
If my guess is correct, I’m quite alarmed. Your view of LW has the potential to become a self-fulfilling prophecy, because if you are wrong about the current state of LW, by treating others as enemies when they are just honestly mistaken or phrasing things badly, you’re making them into enemies and politicizing discussions that weren’t political to begin with. Furthermore you’re a very prolific commenter and viewed as a role model by a significant number of other LWers who may adopt your assessment and imitate your behavior, thereby creating a downward spiral of LW culture.
I would urge you to reconsider, but since you don’t like my exhortations, I feel like I should at least indicate to others that there is significant disagreement about whether your assessment and behavior are normative.
Did the fictional Joker matter have something to do with politics? Am I missing something? Or do you mean politics in the sense of “Activities concerned with the acquisition or exercise of authority or status”?
Question: is it your sense that wedrifid views LessWrong as unusually ridden with social aggression, or views komponisto’s comment as demonstrating exceptional social aggression? Or merely that he views these things as containing social aggression, like most forums and exchanges?
As an answer to the slightly different question of what Wedrifid sees himself seeing: it would probably be less than in most forums and in general typical of human interactions. In fact, seeing a human community without any social aggression would just be creepy and probably poorly functioning, unless the humans were changed in all sorts of ways to compensate.
(nods) FWIW, I’m entirely unsurprised by this. What I’m not quite sure of is whether Wei Dai shares our view of what you believe in this space. I’m left with a niggling suspicion that you and he are not using certain key terms equivalently.
This is almost certainly the case, and one of the things that made conversation difficult.
I disagree with Wei Dai on all points in the parent and find his misrepresentation of me abhorrent (even though he is quite likely to be sincere). I hope that Wei Dai’s ability to persuade others of his particular mind-reading conclusion is limited. My most practical course of action—and the one I will choose to take—seems to be that of harm minimisation. I will not engage with—or, in particular, defend myself against—challenges by Wei Dai beyond a one sentence reply per thread if that happens to be necessary.
I have been making this point from the start. That which Wei Dai chooses to most actively and strongly defend tends to be things that are bad for the site (see the aggressive encouragement of certain kinds of ‘contrarians’ in particular). I also acknowledged that Wei Dai’s perspective would almost certainly be the reverse.
I’m confused. I expect saying “your interpretation of my model of LW is wrong, I’m not seeing that much of political fighting on LW” would be sufficient for changing Wei’s mind. As it is, your responses appear to be primarily about punishing the very voicing of (incorrect) guesses about your (and others’) beliefs or motives, as opposed to clarifying those beliefs and motives. (The effect it has on me is for example that I’ve just added the “appear to be” disclaimer in the preceding sentence, and I’m somewhat afraid of talking to you about your beliefs or motives.)
Why this tradeoff? I’d like the LW culture to be as much on the ask side as possible, and punishing for voicing hypotheses (when they are wrong) seems to push towards the covert uninformed guessing.
Sort of: punishing guessing also makes the “what are your goals here?” question more attractive relative to the “I think your goals are X. Am I right?” question.
That said, I agree that discouraging voicing hypotheses should be done carefully, because I agree that LW culture should be closer to ask than guess.
Thank you for adding the disclaimer. My motives in that comment were not primarily about punishing the public declaration of false, negative motives, but about following the practical incentives I spent three whole paragraphs patiently explaining in the preceding comment. It would have been worse to make an unqualified public declaration that my motives were that which they were not, in direct contradiction to my explicitly declared reasoning, than a qualified one. After all, “appear” is somewhat subjective, such that the mind of the observer is able to perceive whatever it happens to perceive, and your perceptions can constitute a true fact about the world regardless of whether they are accurate perceptions.
I would of course prefer it if people refrained from making declarations about people’s (negative) motives (for the purpose of shaming them) out of courtesy, rather than fear. Yet if you don’t believe courtesy to apply and fear happens to reduce the occurrence that is still a positive outcome.
Note that I take little to no offense at you telling people that I am motivated to punish instances of the act “mind read negative motives in others then publicly declare them” because I would endorse that motive in myself and others if they happen to have it. The only reason the grandparent wasn’t an instance of that (pro-social) kind of punishment was because there were higher priorities at the time.
I recently made the observation:
That is something I strongly endorse. It is a fairly general norm in the world at large (or, to be technical, there is a norm that such a thing is only to be done to enemies and is a defection against allies). I consider that to be a wise and practical norm. Thinking that it can be freely abandoned and that such actions wouldn’t result in negative side effects strikes me as naive.
I took it as a personal favor when the user I was replying to in the above removed the talk about some motives that I particularly didn’t want to be associated with (and couldn’t plausibly have been executing). (If I recall, the declared motives there implied weakness and stupidity, both of which are more objectionable to me than merely being called ‘evil’.)
People tend to hypothesise negative motives in those they are in conflict with. People also tend to believe what they are told. Communities are much better off when the participants don’t feel free to go around outright declaring (or even just ‘hypothesizing’) that others have motives that they should be shamed for—unless there is a particularly strong reason to make an exception. The ability to criticize actual external behavior is more than sufficient for most purposes.
From my perspective, what I did was to hypothesize that you had the motive to do good but wrong beliefs. The beliefs I attributed to you in my guess was that komponisto’s comment constituted social aggression and/or dark arts, and therefore countering/punishing it would be good for LW.
I do not understand in what sense I hypothesized “negative motives” in you or where I said or implied that you should be shamed (except in the sense that having systematically wrong beliefs might be considered shameful in a community that prides itself on its rationality, but I’m guessing that’s not what you mean).
You said you didn’t punish me in this instance but that you would endorse doing so, and I bet that many of the people you did punish are in the same bewildered position of wondering what they did to deserve it, and have little idea how they’re supposed to avoid such punishments, except by avoiding drawing your attention. None of the following helps:
• your having not just one pet peeve but a number of them,
• your frequent refusals to explain your beliefs and motives when asked,
• your tendency to further punish people for more perceived wrongs while they are trying to understand what they did wrong or trying to explain why you may be mistaken about their wrongness, and
• your apparent akrasia regarding making posts that might explain how others could avoid being punished by you.
And I note that since you like to defend people besides yourself against perceived wrongs, there is no reliable way to avoid drawing your attention except by not posting or commenting.
EDIT: This reply applies to a previous version of the parent. I’m not sure whether it applies to the current version since just a glance at the new bulleted list was too much.
Yes, were I to have actually objected in this manner to your comment, I clearly would have objected to the attribution of false beliefs based on untenable mind-reading, and not to “sinister motives”. You will note that Vladimir referred to both. As it happens I was not executing punishment of either kind, and so chose to discuss insinuation of false motives rather than insinuation of toxic beliefs, because objecting to the former was the stance I had already taken recently and is the one most significantly objectionable.
You will note that “punishment” here refers to nothing more than labeling a thing and saying it is undesirable. In recent context it refers to the following, in response to some rather… dramatic and inflammatory motives:
I do endorse such a response. It is a straightforward and rather clearly explained assertion of boundaries. Yes, a technical analysis of the social implications makes such boundary assertion and the labeling of behaviors as ‘rude’ entails a form of ‘punishment’.
This is an (arguably) nuanced and low-level analysis of how social behaviors work, and I note that by the same analysis your own comments tend to be heavily riddled with both punishments and threats. Since this is an area where you use words differently and tend to make objections in response to low-level analysis, I will note explicitly that under more typical definitions of ‘punishment’ (definitions that would not describe your behavior as frequently having the social implication of punishment) I would also reject that word as applying to most of what I do.
I assert that there is no instance where I have ‘punished’ people for accusing me of believing things or having motives that I do not have where I have not been abundantly clear about what I am objecting to. Not only is this not something that comes up frequently; the punishment consists of nothing more than the explanation itself. This can plausibly be described as ‘punishment’ inasmuch as it entails providing potentially negative utility in response to undesired stimulus, but if that punishment is recognized as punishment then the meaning is already clear.
No, Wei. I give an excessive amount of explanation of motives. In fact it wouldn’t surprise me if I provide more, and more detailed, explanations of this kind than anyone on the site—partly because I comment frequently but mostly because such things happen to be of abstract decision-theoretical interest to me. Once again, I don’t like being forced into a corner where I have to speak plainly about something some would take personally, but you really seem set on pushing the issue here. I have already explained in this thread:
“The definition of insanity” may be hyperbole, but it remains the case that doing the same thing again and again while expecting different results is foolish. I sincerely believe that explanations to you specifically have next to no chance of achieving a desired goal and that giving them to you will continue to be detrimental to me, as I have found it to be in the past. For example, the parent primes people to apply interpretations to my comments that I consider ridiculous. All your other comments in this thread can be presumed to have some influence in that direction as well, making it more difficult for me to make people correctly interpret my words in the future and generally interfering with my ability to communicate successfully. If I hadn’t replied to you I would not have given you a platform from which to speak and influence others. You would have just been left with your initial comment, and if you had kept making comments like “Non-explanatory punisher!” without me engaging you would have just looked like a stalker.
Anyhow it would seem that my unfortunate bias to explain myself when it would be more rational to ignore has struck again.
You do explain things, but simultaneously you express judgment about the error, which distracts (and thereby detracts) from the explanation. It doesn’t seem to be the case that the punishment consists only of the explanation. An explanation would be stating things like “I don’t actually believe this”, while statements like “Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up.” communicate your judgment about the error, which is additional information that is not particularly useful as part of the explanation of the error. Also, discussing the nature of the error would be even more helpful than stating what it is, for example in the same thread Wei still didn’t understand his error after reading your comment, while Vaniver’s follow-up clarified it nicely: “his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences” (with some flaws, like saying “best”/”worst”, but this is beside the point).
(I didn’t refer to either, I was speaking more generally than this particular conversation. Note how this is an explanation of the way in which your guess happens to be wrong, which is distinct from saying things like “your claims to having mind-reading abilities are abhorrent” etc.)
Are significant. It does matter whether or not actual words expressed are being ignored or overwhelmed by insinuations and ‘hypotheses’ that the speaker believes and would have others believe. It is not-OK to say that people believe things that their words right there in the context say something completely different.
Yes, that is intended. The error is a social one for which it is legitimate to claim offense. That is, to judge that the thing should not be done and to suggest that observers also consider that said thing should not be done. Please see my earlier explanation regarding why outlawing the claiming of offense for this type of norm violation is considered detrimental (by me and, implicitly, by most civilised social groups). The precise details of how best to claim offense can and should be optimised for best effect. I of course agree that there is much that I could do to convey my intended point in such a way that I am most likely to get my most desired outcomes. Yet this remains an optimisation of how to most effectively convey “No, incompatible, offense”.
So was I, with the statement this replies to.
So no, it isn’t.
I understand that, my point is that this is the part of the punishment that explains something other than the object-level error in question, which is the distinction Wei was also trying to make.
(I guess my position on offense is that one should deliberately avoid taking or expressing offense in all situations. There are other modes of social enforcement that don’t have offense’s mind-killing properties.)
Okay.
That doesn’t seem right, although perhaps you define “offence claiming” more narrowly than I do. I’m talking about anything from the simple statement “this shouldn’t be done” on up. Basically the least invasive sort of social intervention I can imagine, apart from downvoting and body-language indications—but even then my understanding is that that is where most communication along the lines of ‘offense taking’ actually happens.
I highly value LessWrong and can’t think of any reasons why I would want to do it harm. My past attempts to improve it seem to have met with wide approval (judging from the votes, which are generally much higher than those on my non-community-related posts), which has caused me to update further in the direction of thinking that my efforts have been helpful rather than harmful.
I understand you don’t want to continue this conversation any further, so I’ll direct the question to others who may be watching this. Does anyone else agree with Wedrifid’s assessment, and if so can you tell me why? If it seems too hard to convince me with object-level arguments, I would also welcome a psychological explanation of why I have this tendency to defend things that are bad for LW. I promise to do my best not to be offended by any proposed explanations.
Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up. Once again, to be even more clear, Wei Dai’s sincerity and pro-social intent have never been questioned. Indeed, I riddled the entire preceding conversation from my first reply onward with constant disclaimers to that effect to the extent that I would have considered any more to be outright spamming.
I’m saying that I can’t think of any reasons, including subconscious reasons, why I might want to do it harm. It seems compatible with your words that I have no conscious reasons but do have subconscious reasons.
I suspect his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences.
Or, far more likely, having the best motives and getting slightly bad consequences. Having the worst consequences is like getting 0 on a multiple-choice test or systematically losing to an efficient market. Potentially as hard as getting the best consequences and a rather impressive achievement in itself.
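For what it’s worth, the “getting 0 on a multiple-choice test” point is easy to quantify. Here is a quick sketch with made-up test parameters: a pure guesser almost never scores zero, so reliably doing so requires roughly as much knowledge as acing the test.

```python
# Chance of scoring zero by pure guessing on a multiple-choice test.
# Test parameters are made up for illustration.
n_questions, n_choices = 20, 4
p_all_wrong = ((n_choices - 1) / n_choices) ** n_questions
print(f"P(score 0 by guessing) = {p_all_wrong:.4f}")  # ~0.0032
# Reliably scoring zero means knowing the right answer to every question
# and avoiding it -- roughly as much information as a perfect score requires.
```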
Ok, so does anyone agree that he is right (that I misunderstand the dynamics of the system), and if so, tell me why?
(sigh) OK, my two cents.
I honestly lost track of what you and wedrifid were arguing about way back when. It had something to do with whether “fictional” was a useful response to someone asking about how to categorize characters like the Joker when it comes to the specifics of their psychological quirks, IIRC, although I may be mistaking the salient disagreement for some other earlier disagreement (or perhaps a later one).
Somewhere along the line I got the impression that you believe wedrifid’s behavior drags down the general quality of discourse on the site (either on net, or relative to some level of positive contribution you think he would be capable of if he changed his behavior, I’m not sure which) by placing an undue emphasis on describing on-site social patterns in game-theoretical terms. I agree that wedrifid consistently does this but I don’t consider it a negative thing, personally.
[EDIT: To clarify, I agree that wedrifid consistently describes on-site social patterns in game-theoretical terms; I don’t agree with “undue emphasis”]
I do think he’s more abrupt and sometimes rude (in conventional social terms) in his treatment of some folks on this site than I’d prefer, and that a little more consistent kindness would make me more comfortable. Then again, I think the same thing of a lot of people, including most noticeably Eliezer; if the concern is that he’s acting as some kind of poor role model in so doing, I think that ship sailed with or without wedrifid.
I’m less clear on what wedrifid’s objection to your behavior is, exactly, or how he thinks it damages the site. I do think that Vaniver’s characterization of what his objection is, is more accurate than your earlier one was.
[EDIT: Reading this comment, it seems one of the things he objects to is you opposing his opposition to engaging with Dmytry. For my own part, I think engaging with Dmytry was a net negative for the site. Whether opposing opposition to Dmytry is also a net negative, I don’t really know, but it’s certainly plausible.]
I realize this isn’t really an answer to your question, but it’s the mental model I’ve got, and since you seem rather insistent on getting some sort of input on this I figured I’d give you what I have. Feel free to ask followup questions if you like. (Or not.)
The difference between Eliezer and wedrifid is that wedrifid endorses his behavior much more strongly and frequently. With Eliezer, one might think it’s just a personality quirk, or an irrational behavioral tendency that’s an unfortunate side effect of having high status, and hence not worthy of imitation.
I didn’t mean to sound very confident (if I did) about my guess of his objection. My first guess was that he and I had a disagreement over how LW currently works, but then he said “I disagree with Wei Dai on all points in the parent” which made me update towards this alternative explanation, which he has also denied, so now I guess the reason is a disagreement over how LW works, but not the one that I specifically gave. (In case someone is wondering why I keep guessing instead of asking, it’s because I already asked and wedrifid didn’t want to answer, even privately.)
Thanks! What I’m most anxious to know at this point is whether I have some sort of misconception about the social dynamics on LW that causes me to consistently act in ways that are harmful to LW. Do you have any thoughts on that?
I certainly agree with you about frequently. I have to think more about strongly, but off hand I’m inclined to disagree. I would agree that wedrifid does it more explicitly, but that isn’t the same thing at all.
Haven’t a clue. I’m not really sure what “harmful to LW” even means.
Perhaps unpacking that phrase is a place to start. What do you think harms the site? What do you think benefits it?
The difference needn’t lie in your motives, conscious or unconscious. You might simply have bad theories about how groups develop. (A possibility: your tendency to understate the role of social signaling in what sometimes pretends to be an objective search for truth.)
But your blindness to potential motives is also problematic—and not just because of the motives themselves, if they exist. For an example of a motive, you might have an anti-E.Y. motive because he hasn’t taken your ideas on the Singularity as seriously as you think they deserve—giving much more attention to a hack job from GiveWell.
Well, you wanted a possible example. There are always possible examples.
Let it be known that I, Wedrifid, at this time and at this electronic location do declare that I do not believe that Wei Dai has conscious or unconscious motives to sabotage lesswrong. Indeed the thought is so bizarre and improbable that it was never even considered as a possibility by my search algorithm until Wei brought it up.
It really seems much more likely to me that Wei really did think that chastising those who tried to prevent the feeding of Dmytry was going to help the website rather than damage it. I also believe that Wei Dai declaring war on “Fictional” as a response to “What do you call the Joker?” is based on a true, sincere and evidently heartfelt belief that the world would be a better place without “fictional” (or analogous answers) as a reply in similar contexts.
Enemies are almost never innately evil. (Another probably necessary caveat: That word selection is merely a reference to a post that contains the relevant insight. Actual enemy status is not something to be granted so frivolously. Actively considering agents enemies rather than merely obstacles involves a potentially significant trade-off when it comes to optimization and resource allocation and so is best reserved for things that really matter.)
It is not clear to me that the distinction between a discussion that takes place in public, and speech to an audience, is as crisp as you seem to suggest here.
I did not intend to suggest any crisp distinction. Indeed, I was trying to weaken the ‘crispness of distinction’ from the preceding comment.
Then I completely misunderstood “Debates are roughly equivalent to (or a subset of) speeches when it comes to rhetoric use. Discussions are different. ”
If your precommitment to not respond further doesn’t extend to include spinoff discussions like the one I’m implicitly starting here, then I encourage you to clarify my understanding if possible. But if it does, that’s OK too.
Something like a spectrum, with some things being more clearly debate like and some things being more clearly discussion like. Also assume an “I’ll concede that” before “discussions are different”.
If a third-party observer’s perspective helps: your preferences seemed sufficiently predictable to me that I’d tentatively understood Wei Dai’s question as primarily a rhetorical one, intended to indirectly convey the suggestion that it would have been better to give such a response.
I was wary of making that suggestion because that would mean the whole “avoid a lot of the subsequent side-track into whether ‘fictional’ is a sensible answer or not” was more overtly insincere and hypocritical than I expect wei_dai to be. If I hadn’t given Wei this benefit of the doubt I would not have answered straightforwardly as I did, and would instead have had to evaluate how best to mitigate the damage from unwelcome social aggression.
Failure mode: My “something to protect” is to spread rationality throughout the world and to raise the sanity waterline, which is best achieved by having my own rationality dojo.
Beware the meta.
I agree. I think that failure mode might then be better avoided by restricting possible “somethings”, as opposed to adding another requirement on to one’s reasons for wanting to be rational.
Yes, but that’s an exercise implicitly left to the reader. Formulating it this way is somewhat intuitively easier to understand, and if you’ve read the other sequences this should be simple enough to reduce to something that pretty much fits (restriction of “things to protect”) in beliefspace.
Essentially, this article, the way I understand it, mostly points at an “empirical cluster in conceptspace” of possible failure modes, and proposes possible solutions to some of them, so that the reader can deduce and infer the empirical cluster of solutions to those failure modes.
The general rule could be put as “Make rationality your best means, but never let it become an end in any way.”—though I suspect that I’m making a generalization that’s a bit too simplistic here. I’ve been reading the sequences in jumbled order, and I’m particularly bad at reduction, which is one of the Sequences I haven’t finished reading yet.