a harder time grasping a given technique if they’ve already anchored themselves on an incomplete understanding
This is certainly theoretically possible, but I’m very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn’t less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)
I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they “should” be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn’t make with the help of a sufficiently attentive personal tutor (which, realistically, you’re not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I’ll take my chances.
Note that I’m specifically reacting to the suggestion that people not read things for their own alleged benefit. If the handbook had just said, “Fair warning, this isn’t a substitute for the workshop because there’s a lot of stuff we don’t know how to teach in writing,” then fine; that seems probably true. What I’m skeptical of is hypothesized non-monotonicity whereby additional lower-quality study allegedly damages later higher-quality study. First, because I just don’t think it’s true on the merits: I falsifiably predict that, e.g., math students who read the course textbook on their own beforehand will do much better in the course than controls who haven’t. (Although the pre-readers might annoy teachers whose jobs are easier if everyone in the class is obedient and equally ignorant.) And second, because the general cognitive strategy of waiting for the designated teacher to spoonfeed you the “correct” version carries massive opportunity costs when iterated (even if spoonfeeding is generally higher-quality than autodidactism, and could be much higher-quality in some specific cases).
You use math as an example, but that’s highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.
There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I’d consider most apt, but it won’t mean much to people who haven’t taken a good circling workshop.
A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn’t any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical patterns).
The CFAR handbook is more like the syllabus for an economics course than it is like an economics textbook, and a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers. (This analogy is imperfect because economics textbooks have been written, unlike a CFAR textbook.)
Maybe CFAR is making a mistake, but the people who are most confident that it is usually seem to be confused about what it is that CFAR is trying to teach.
Reading the Sequences, or reading about the reversal test, is unlikely to have much relevance to what CFAR teaches. Just be careful not to imagine that they’re good examples of what CFAR is about.
Sometimes, we don’t know how to teach a subject in writing because the subject matter is inherently about action (rather than concepts, analysis, explanation, prediction, numbers, words, etc.).
But sometimes, we don’t know how to teach a subject in writing because there is, in fact, nothing (or, at best, nothing much) to be taught. Sometimes, a subject is actually empty (or mostly empty) of content.
In the latter case, attempting to write it down reveals this (and opens the alleged “content” to criticism)—whereas in person, the charisma of the instructors, the social pressure of being in a group of others who are there to receive the instruction, possibly the various biases associated with having made some costly sacrifice (time, money, etc.) to be there, possibly the various biases associated with the status dynamics at play (e.g. if the instructors are respected, or at least if those around you act as if they are), all serve to mask the fundamental emptiness of what is being “taught”.
I leave it to the reader to discern which of the given examples fall into which category. I will only note that while the subjects found in the former category are often difficult to teach, nevertheless one’s mastery of them, and their effectiveness, is usually quite easy to verify—because action can be demonstrated.
Meditation is action, in some important sense, and mostly can’t be demonstrated.
It is hard to reliably distinguish between the results of peer pressure and actual learning. I think CFAR’s best reply to this has been its refund policy: last I knew, they offered full refunds to anyone who requested one within a year (although I can’t find any online mention of their current policy).
Meditation is action, in some important sense, and mostly can’t be demonstrated.
Everything is “action” in “some sense”. (Whether that sense is “important”, in any given case, is a matter of perspective.)
As far as I am concerned—for the purposes of this topic—if it can’t be demonstrated, it ain’t action.
It is hard to reliably distinguish between the results of peer pressure and actual learning.
I submit to you that if this is true of any given case, then that is an excellent signal that no actual learning has taken place. (And the more true it is—the harder it is to distinguish between actual learning and the results of various biases, social pressure included—the stronger the signal is.)
There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc.
Yes, I agree: for these subjects, the “there’s a lot of stuff we don’t know how to teach in writing” disclaimer I suggested in the grandparent would be a big understatement.
a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers
Useless, I can believe. (The extreme limiting case of “there’s a lot of stuff we don’t know how to teach in this format” is “there is literally nothing we know how to teach in this format.”) But harmful? How? Won’t the unexpected syllabus section titles at least disabuse them of their bad assumptions?
Reading the Sequences [...] is unlikely to have much relevance to what CFAR teaches.
Really? The tagline on the website says, “Developing clear thinking for the sake of humanity’s future.” I guess I’m having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity’s-future curriculum for which the things we write about on this website would be irrelevant. The “comfort zone expansion” exercises I’ve heard about would qualify, but Sequences-knowledge seems totally relevant to something like, say, double crux.
(It’s actually pretty weird/surprising that I’ve never personally been to a CfAR workshop! I think I’ve been assuming that my entire social world has already been so anchored on the so-called “rationalist” community for so long, that the workshop proper would be superfluous.)
The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn’t managed to make very legible.)
I think meditation provides the best example for illustrating the harm. It’s fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It’s fairly easy to underestimate the additional goals of meditation, because they’re hard to observe and don’t fit well with more widely accepted worldviews.
My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I’m trying something new, and there were times when I wasn’t able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.
The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can’t see any signs that the new instructions or new environment were inherently better at teaching meditation. It seems to have been mostly that any source of novelty about the meditation makes me more alert to learning from it.
My understanding is that CFAR is largely concerned that participants will mistakenly believe that they’ve already learned something that CFAR is teaching, and that will sometimes be half-true—participants may know it at a system 2 level, when CFAR is trying to teach other parts of their minds that still reject it.
I think I experienced that a bit, due to having experience with half-baked versions of early CFAR before I took a well-designed version of their workshop. E.g. different parts of my mind have different attitudes to acknowledging my actual motivations when they’re less virtuous than the motivations that my system 2 endorses. I understood that pretty well at some level before CFAR existed, yet there are still important parts of my mind that cling to self-deceptive beliefs about my motives.
CFAR likely can’t teach a class that’s explicitly aimed at that without having lots of participants feel defensive about their motives, in a way that makes them less open to learning. So they approach it via instruction that is partly focused on teaching other things that look more mundane and practical. Those other things often felt familiar enough to me that I reacted by saying: I’ll relax now and conserve my mental energy for some future part of the curriculum that’s more novel. That might have led me to do the equivalent of what I did when I was meditating the same way repeatedly without learning anything new. How can I tell whether that caused me to miss something important?
A key problem is moving from knowing about a technique to action. When you know 20 techniques and use none of them, it’s harder to get you to actually use the 21st technique that you are taught than if you start out with fewer techniques in your head and no established pattern of not doing any of the exercises that you were taught.
There’s less that needs unlearning if you haven’t been exposed to material beforehand.
I would still err on the side of being more public with information, but I do understand that there is a tradeoff.
I can offer an explanation that might fit. Rationalists tend toward expertise-mode thinking (“Expert” in Torbert’s action logic framework). Behaviour like reading the book is in line with Expert behaviour.
CFAR techniques and related in-person methods are not always about being the expert; they are about doing the best thing. Being a better expert is not always the same as being a better munchkin, a better person, or a person who can step outside of their knowledge and beliefs.
In theory, the expert thing is the best thing. In theory there’s no difference between theory and practice, in practice, there’s a big difference between theory and practice.
Having said that, I’ve never done CFAR, but I teach workshops monthly in Sydney, and I think they are wrong to discourage sharing of their resources. At the same time, I accept the idea of intellectual property being protected, even if that’s not the case they are claiming.
(I’m in the process of writing up my resources into a collection)
At the same time, I accept the idea of intellectual property being protected, even if that’s not the case they are claiming.
I suspect that this is the real reason. Although, given that the much vaster Sequences by Yudkowsky are freely available, I don’t see it as a good justification for not making the CFAR handbook available.
It’s pretty uncharitable of you to just accuse CfAR of lying like that! If the actual reason were “Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don’t want to undercut our core product,” then the handbook would have just said that.
Wait, are you invoking the principle of charity as an epistemic axiom (“assume people don’t lie”)? Why would that be truth-aligned at all?
If you didn’t mean to invoke the principle of charity, why not just say it’s likely to be incorrect based on priors, CFAR’s reputation, etc, instead of using the word “uncharitable” as an insult?
You caught me—introspecting, I think the grandparent was written in a spirit of semi-deliberate irony. (“Semi” because it just felt like the “right” thing to say there; I don’t think I put a lot of effort into modeling how various readers would interpret it.)
Roland is speculating that the real reason for intentionally incomplete explanations in the handbook is different from the stated reason, and I offered a particularly blunt phrasing (“we don’t want to undercut our core product”) of the hypothesized true reason, and suggested that that’s what the handbook would have said in that case. I think I anticipated that a lot of readers would find my proposal intuitively preposterous: “everyone knows” that no one would matter-of-factly report such a self-interested rationale (especially when writing on behalf of an organization, rather than admitting a vice among friends). That’s why the earlier scenes in the 2009 film The Invention of Lying, or your post “Act of Charity”, are (typically) experienced as absurdist comedy rather than an inspiring and heartwarming portrayal of a more truthful world.
But it shouldn’t be absurd for the stated reason and the real reason to be the same! Particularly for an organization like CfAR which is specifically about advancing the art of rationality. And, I don’t know—I think sometimes I talk in a way that makes me seem more politically naïve than I actually am, because I feel as if the “naïve” attitude is in some way normative? (“You really think someone would do that? Just go on the internet and tell lies?”) Arguably this is somewhat ironic (being deceptive about your ability to detect deception is probably not actually the same thing as honesty), but I haven’t heretofore analyzed this behavioral pattern of mine in enough detail to potentially decide to stop doing it??
I think another factor might be that I feel guilty about being “mean” to CfAR in the great-great-great grandparent comment? (CfAR isn’t a person and doesn’t have feelings, but my friend who works there is and does.) Such that maybe the emotional need to signal that I’m still fundamentally loyal to the “mainstream rationality” tribe (despite the underlying background situation where I’ve been collaborating with you and Ben and Michael to discuss what you see as fatal deficits of integrity in “the community” as presently organized) interacted with my preëxisting tendency towards semi-performative naiveté in a way that resulted in me writing a bad blog comment? It’s a good thing you were here to hold me to account for it!
I thought your comment was fine and the irony was obvious, but this kind of misunderstanding can be easily avoided by making the straightforward reading more boring, like so:
Given that CfAR is an organization which is specifically about seeking truth, one could safely assume that if the actual reason were “Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don’t want to undercut our core product,” then the handbook would have just said that. To do otherwise would be to call the whole premise into question!
Yeah, I would have expected Jessica to get it, except that I suspect she’s also executing a strategy of habitual Socratic irony (but without my additional innovation of immediately backing down and unpacking the intent when challenged), which doesn’t work when both sides of a conversation are doing it.
I actually didn’t get it. I was confused but I didn’t consciously generate the hypothesis that it was ironic.
I think I don’t share the background assumption that it is overwhelmingly obvious that CFAR wouldn’t tell the truth about this in their handbook. I also reflectively endorse a policy of calling out things that could easily be mistaken for sincere (though not obvious sarcasm), in order to ensure common knowledge.
Quick note for your model of how people interpret various kinds of writing, my initial read of your comment was to put a 60% probability on “Zack is currently undergoing a pendulum swing in the direction away from calling people out on lying, and overcompensating.” (which was wrong and/or overconfident on my part)
In theory, the expert thing is the best thing. In theory there’s no difference between theory and practice, in practice, there’s a big difference between theory and practice.
We’ve all heard this sort of thing many times, of course. The best response is probably Schopenhauer’s:
“That’s all very well in theory, but it won’t do in practice.” In this sophism you admit the premisses but deny the conclusion, in contradiction with a well-known rule of logic. The assertion is based upon an impossibility: what is right in theory must work in practice; and if it does not, there is a mistake in the theory; something has been overlooked and not allowed for; and, consequently, what is wrong in practice is wrong in theory too.
Torbert’s action logic framework
You are, I assume, referring to the ideas of this person? He appears to be some variety of management consultant. Is there any reason to take this “action logic” of his seriously? It seems to be yet another among the many, many self-help / management consulting / etc. “frameworks” or “systems” etc. Do any of his ideas have any empirical verification, or… well, anything, really?
That’s up to you. I’ve gotten a lot of value from the structure he outlines. It’s a lot more reasoned than some of the other odd, mysterious things I read.
If there is something wrong with the theory and the way it maps to practice, is it better to read more theory, or to do more practice and form new theories? I would suggest it depends on the person and what they have found to work in the past—and also on an awareness of the loops of bad habits, “sharpen the saw”-type problems. Sometimes it’s more valuable to stop sharpening the saw and start cutting down the tree. (The rationality frame of mind loves to sharpen more and cut less.)
This is certainly theoretically possible, but I’m very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn’t less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)
I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they “should” be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn’t make with the help of a sufficiently attentive personal tutor (which, realistically, you’re not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I’ll take my chances.
Note that I’m specifically reacting to the suggestion that people not read things for their own alleged benefit. If the handbook had just said, “Fair warning, this isn’t a substitute for the workshop because there’s a lot of stuff we don’t know how to teach in writing,” then fine; that seems probably true. What I’m skeptical of is hypothesized non-monotonicity whereby additional lower-quality study allegedly damages later higher-quality study. First, because I just don’t think it’s true on the merits: I falsifiably predict that, e.g., math students who read the course textbook on their own beforehand will do much better in the course than controls who haven’t. (Although the pre-readers might annoy teachers whose jobs are easier if everyone in the class is obedient and equally ignorant.) And second, because the general cognitive strategy of waiting for the designated teacher to spoonfeed you the “correct” version carries massive opportunity costs when iterated (even if spoonfeeding is generally higher-quality than autodidactism, and could be much higher-quality in some specific cases).
You use math as an example, but that’s highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.
There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I’d consider most apt, but it won’t mean much to people who haven’t taken a good circling workshop.
A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn’t any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical patterns).
The CFAR handbook is more like the syllabus for an economics course than it is like an economics textbook, and a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers. (This analogy is imperfect because economics textbooks have been written, unlike a CFAR textbook.)
Maybe CFAR is making a mistake, but it appears that the people who seem most confident about that usually seem to be confused about what it is that CFAR is trying to teach.
Reading the sequences, or reading about the reversal test, are unlikely to have much relevance to what CFAR teaches. Just be careful not to imagine that they’re good examples of what CFAR is about.
Sometimes, we don’t know how to teach a subject in writing because the subject matter is inherently about action (rather than concepts, analysis, explanation, prediction, numbers, words, etc.).
But sometimes, we don’t know how to teach a subject in writing because there is, in fact, nothing (or, at best, nothing much) to be taught. Sometimes, a subject is actually empty (or mostly empty) of content.
In the latter case, attempting to write it down reveals this (and opens the alleged “content” to criticism)—whereas in person, the charisma of the instructors, the social pressure of being in a group of others who are there to receive the instruction, possibly the various biases associated with having made some costly sacrifice (time, money, etc.) to be there, possibly the various biases associated with the status dynamics at play (e.g. if the instructors are respected, or at least if those around you act as if they are), all serve to mask the fundamental emptiness of what is being “taught”.
I leave it to the reader to discern which of the given examples fall into which category. I will only note that while the subjects found in the former category are often difficult to teach, nevertheless one’s mastery of them, and their effectiveness, is usually quite easy to verify—because action can be demonstrated.
Meditation is action, in some important sense, and mostly can’t be demonstrated.
It is hard to reliably distinguish between the results of peer pressure and actual learning. I think CFAR’s best reply to this has been it’s refund policy: last I knew they offered full refunds to anyone who requested it within one year (although I can’t find any online mention of their current policy).
Everything is “action” in “some sense”. (Whether that sense is “important”, in any given case, is a matter of perspective.)
As far as I am concerned—for the purposes of this topic—if it can’t be demonstrated, it ain’t action.
I submit to you that if this is true of any given case, then that is an excellent signal that no actual learning has taken place. (And the more true it is—the harder it is to distinguish between actual learning and the results of various biases, social pressure included—the stronger the signal is.)
Yes, I agree: for these subjects, the “there’s a lot of stuff we don’t know how to teach in writing” disclaimer I suggested in the grandparent would be a big understatement.
Useless, I can believe. (The extreme limiting case of “there’s a lot of stuff we don’t know how to teach in this format” is “there is literally nothing we know how to teach in this format.”) But harmful? How? Won’t the unexpected syllabus section titles at least disabuse them of their bad assumptions?
Really? The tagline on the website says, “Developing clear thinking for the sake of humanity’s future.” I guess I’m having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity’s-future curriculum for which the things we write about on this website would be irrelevant. The “comfort zone expansion” exercises I’ve heard about would qualify, but Sequences-knowledge seems totally relevant to something like, say, double crux.
(It’s actually pretty weird/surprising that I’ve never personally been to a CfAR workshop! I think I’ve been assuming that my entire social world has already been so anchored on the so-called “rationalist” community for so long, that the workshop proper would be superfluous.)
The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn’t managed to make very legible.)
I think meditation provides the best example for illustrating the harm. It’s fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It’s fairly easy to underestimate the additional goals of meditation, because they’re hard to observe and don’t fit well with more widely accepted worldviews.
My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I’m trying something new, and there were times when I wasn’t able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.
The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can’t see any signs that the new instructions or new environment were inherently better at teaching meditation. It seems to have been mostly that any source of novelty about the meditation makes me more alert to learning from it.
My understanding is that CFAR is largely concerned that participants will mistakenly believe that they’ve already learned something that CFAR is teaching, and that will sometimes be half-true—participants may know it at a system 2 level, when CFAR is trying to teach other parts of their minds that still reject it.
I think I experienced that a bit, due to having experience with half-baked versions of early CFAR before I took a well-designed version of their workshop. E.g. different parts of my mind have different attitudes to acknowledging my actual motivations when they’re less virtuous than the motivations that my system 2 endorses. I understood that pretty well at some level before CFAR existed, yet there are still important parts of my mind that cling to self-deceptive beliefs about my motives.
CFAR likely can’t teach a class that’s explicitly aimed at that without having lots of participants feel defensive about their motives, in a way that makes them less open to learning. So they approach it via instruction that is partly focused on teaching other things that look more mundane and practical. Those other things often felt familiar enough to me that I reacted by saying: I’ll relax now and conserve my mental energy for some future part of the curriculum that’s more novel. That might have led me to do the equivalent of what I did when I was meditating the same way repeatedly without learning anything new. How can I tell whether that caused me to miss something important?
A key problem is moving from knowing about a technique to action. When you know 20 techniques and use no one of them it’s harder to get you to actually use the 21st technique that you are taught then if you start out with less techniques in your head and have no established pattern of not doing any of the exercises that you were taught.
There’s less that needs unlearning if you haven’t been exposed to material beforehand.
I would still err on the side of being more public with information, but I do understand that there is a tradeoff.
I can offer an explanation that might fit. Rationalists tend toward expertise mode thinking (expert from the torbert action logic framework). Behaviour like reading the book is in line with the expert behaviour.
Cfar techniques and related in-person methods are not always about being the expert, they are about doing the best thing. Being a better expert is not always the same as being the better munchkin, the better person or the person who can step out of their knowledge beliefs.
In theory, the expert thing is the best thing. In theory there’s no difference between theory and practice, in practice, there’s a big difference between theory and practice.
Having said that, I’ve never done cfar, and I teach workshops monthly in Sydney and I think they are wrong to discourage sharing of their resources. As the same time I accept the idea of intellectual property being protected even if that’s not the case they are claiming.
(I’m in the process of writing up my resources into a collection)
I suspect that this is the real reason. Although if the much vaster sequences by Yudkowsky are freely available I don’t see it as a good justification for not making the CFAR handbook available.
It’s pretty uncharitable of you to just accuse CfAR of lying like that! If the actual reason were “Many of the explanations here are intentionally approximate or incomplete because we predict that this handbook will be leaked and we don’t want to undercut our core product,” then the handbook would have just said that.
Wait, are you invoking the principle of charity as an epistemic axiom (“assume people don’t lie”)? Why would that be truth-aligned at all?
If you didn't mean to invoke the principle of charity, why not just say the accusation is likely to be incorrect based on priors, CFAR's reputation, etc., instead of using the word "uncharitable" as an insult?
You caught me—introspecting, I think the grandparent was written in a spirit of semi-deliberate irony. (“Semi” because it just felt like the “right” thing to say there; I don’t think I put a lot of effort into modeling how various readers would interpret it.)
Roland is speculating that the real reason for intentionally incomplete explanations in the handbook is different from the stated reason, and I offered a particularly blunt phrasing (“we don’t want to undercut our core product”) of the hypothesized true reason, and suggested that that’s what the handbook would have said in that case. I think I anticipated that a lot of readers would find my proposal intuitively preposterous: “everyone knows” that no one would matter-of-factly report such a self-interested rationale (especially when writing on behalf of an organization, rather than admitting a vice among friends). That’s why the earlier scenes in the 2009 film The Invention of Lying, or your post “Act of Charity”, are (typically) experienced as absurdist comedy rather than an inspiring and heartwarming portrayal of a more truthful world.
But it shouldn’t be absurd for the stated reason and the real reason to be the same! Particularly for an organization like CfAR which is specifically about advancing the art of rationality. And, I don’t know—I think sometimes I talk in a way that makes me seem more politically naïve than I actually am, because I feel as if the “naïve” attitude is in some way normative? (“You really think someone would do that? Just go on the internet and tell lies?”) Arguably this is somewhat ironic (being deceptive about your ability to detect deception is probably not actually the same thing as honesty), but I haven’t heretofore analyzed this behavioral pattern of mine in enough detail to potentially decide to stop doing it??
I think another factor might be that I feel guilty about being “mean” to CfAR in the great-great-great grandparent comment? (CfAR isn’t a person and doesn’t have feelings, but my friend who works there is and does.) Such that maybe the emotional need to signal that I’m still fundamentally loyal to the “mainstream rationality” tribe (despite the underlying background situation where I’ve been collaborating with you and Ben and Michael to discuss what you see as fatal deficits of integrity in “the community” as presently organized) interacted with my preëxisting tendency towards semi-performative naiveté in a way that resulted in me writing a bad blog comment? It’s a good thing you were here to hold me to account for it!
I thought your comment was fine and the irony was obvious, but this kind of misunderstanding can be easily avoided by making the straightforward reading more boring, like so:
Yeah, I would have expected Jessica to get it, except that I suspect she’s also executing a strategy of habitual Socratic irony (but without my additional innovation of immediately backing down and unpacking the intent when challenged), which doesn’t work when both sides of a conversation are doing it.
I actually didn’t get it. I was confused but I didn’t consciously generate the hypothesis that it was ironic.
I think I don't share the background assumption that it is overwhelmingly obvious that CFAR wouldn't tell the truth about this in their handbook. I also reflectively endorse a policy of calling out statements that could easily be mistaken for sincere (i.e., non-obvious sarcasm), in order to ensure common knowledge.
Quick note for your model of how people interpret various kinds of writing: my initial read of your comment was to put a 60% probability on "Zack is currently undergoing a pendulum swing away from calling people out on lying, and overcompensating" (which was wrong and/or overconfident on my part).
I wasn't accusing them of lying; I rather suspect them of being biased.
We’ve all heard this sort of thing many times, of course. The best response is probably Schopenhauer’s:
You are, I assume, referring to the ideas of this person? He appears to be some variety of management consultant. Is there any reason to take this "action logic" of his seriously? It seems to be yet another among the many, many self-help / management-consulting "frameworks" or "systems". Do any of his ideas have any empirical verification, or… well, anything, really?
That's up to you. I've gotten a lot of value from the structure he outlines. It's a lot more reasoned than some of the other mysterious, odd things I read.
If there is something wrong with the theory and the way it maps to practice, is it better to read more theory, or to do more practice and form new theories? I would suggest it depends on the person and what they have found to work in the past—and also on an awareness of loops of bad habits, the "sharpen the saw" type of problem. Sometimes it's more valuable to stop sharpening the saw and start cutting down the tree. (The rationality frame of thinking loves to sharpen more and cut less.)