I was preparing to write a reply to the effect of “this is the most useful comment about what CFAR is doing and why that’s been posted on this thread yet” (it might still be, even)—but then I got to the part where your explanation takes a very odd sort of leap.
But we looked around, and noticed that lots of the promising people around us seemed particularly bad at extrospection—i.e., at simulating the felt senses of their conversational partners in their own minds.
It’s entirely unclear to me what this means, or why it is necessary / desirable. (Also, it seems like you’re using the term ‘extrospection’ in a quite unusual way; a quick search turns up no hits for anything like the definition you just gave. What’s up with that?)
This seemed worrying, among other reasons because early-stage research intuitions (e.g. about which lines of inquiry feel exciting to pursue) often seem to be stored sub-verbally.
There… seems to be quite a substantial line of reasoning hidden here, but I can’t guess what it is. Could you elaborate?
So we looked to specialists in extrospection for a patch.
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
In short: I’m an engineer (my background is in computer science), and I’ve also studied philosophy. It’s clear enough to me why certain things that look like ‘philosophy’ can be, and are, useful in practice (despite agreeing with you that philosophy as a whole is, indeed, “an unusually unproductive field”). And we do, after all, have the Sequences, which is certainly philosophy if it’s anything (whatever else it may also be).
But I don’t at all see what the case for the usefulness of ‘circling’ and similar woo might be. Your comment makes me more worried, not less, on the whole; no doubt I am not alone. Perhaps elaborating a bit on my questions above might help shed some light on these matters.
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here? What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
I’m speaking for myself here, not any institutional view at CFAR.
When I’m looking at maybe-experts, woo-y or otherwise, one of the main things that I’m looking at is the nature and quality of their feedback loops.
When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)
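The predict-then-check loop described here is structurally the same as calibration training. As a toy illustration (all numbers are hypothetical, not data from any actual Circling practice), one could score such predictions with a Brier score:

```python
# Toy sketch of the predict-then-check feedback loop described above,
# scored with the Brier score: the mean squared error of probabilistic
# predictions against 0/1 outcomes. All numbers here are made up.

def brier_score(predictions, outcomes):
    """Lower is better; always guessing 50% earns a score of 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# e.g. "I have a story that you're feeling defensive," stated with 80%
# confidence, then confirmed (1) or disconfirmed (0) by the partner.
predictions = [0.8, 0.6, 0.9, 0.3]
outcomes    = [1,   0,   1,   0]

print(round(brier_score(predictions, outcomes), 3))  # 0.125
```

The point of the sketch is only that the training signal is well-defined *if* the partner's feedback is honest; whether it is honest is exactly the question raised below.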
Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.
This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear. And I might find out that some parts of the practice which seemed off the mark from my naive projection of how I would design a training environment, are actually features, not bugs.
This is in contrast to say, “energy healing”. Most forms of energy healing do not have the kind of feedback loop that would lead to a person acquiring skill along a particular axis, and so I would expect them to be “pure woo.”
For that matter, I think a lot of “Authentic Relating” seems like a much worse training regime than Circling, for a number of reasons, including that AR (ironically) seems to more often incentivize people to share warm and nice-sounding, but less-than-true, sentiments than Circling does.
When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)
Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:
This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear.
But are they masters?
Note the structure of your argument (which structure I have seen repeated quite a few times, in discussions of this and related topics, including in other sub-threads on this post). It goes like this:
There is a process P which purports to output X.
On the basis of various considerations, I expect that process P does indeed output X, and indeed that process P is very good at outputting X.
…
I now conclude that process P does output X, and does so quite well.
Having thus concluded, I will now adopt process P (since I want X).
But there’s a step missing, you see. Step 3 should be:
Let me actually check to see whether process P in fact does output X (and how well it does so).
So, in this case, you have marshaled certain considerations—
When I think about how, in principle, one would train good intuitions … I reason … if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that …
—and on the basis of this thinking, reasoning, imagining, and seeming, have concluded, apparently, that people who’ve done a lot of Circling are “masters” in the domain of having “good intuitions about what other people are feeling at any given moment”.
I’m going to make a general point first, and then respond to some of your specific objections.
General point:
One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.
But there’s a lot of that in the world, and most of it is not very reliable. My response only points at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.
And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)
Specific points:
Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
On the face of it, I wouldn’t assume that it is reliable, but I don’t have that strong a reason to assume, a priori, that it isn’t.
A posteriori, my experience of being in Circles is that there is sometimes an incentive to obscure what’s happening for you in a circle, but that, at least with skilled facilitation, there is usually enough trust in the process that that doesn’t happen. This is helped by the fact that there are many degrees of freedom in one’s response: I might say, “I don’t want to share what’s happening for me” or “I notice that I don’t want to engage with that.”
I could be typical minding, but I don’t expect most people to lie outright in this context.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
That seems like a reasonable hypothesis.
Not sure if it’s a crux, insofar as if something works well in circling, you can intentionally import the circling context. That is, if you find that you can in fact transfer intuitions, process fears, track what’s motivating a person, etc., effectively in the circling context, an obvious next step might be to try to do this on topics that you care about, in the circling context, e.g. Circles on X-risk.
In practice it seems to be a little bit of both: I’ve observed people build skills in circling, that they apply in other contexts, and also their other contexts do become more circling-y.
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address.
Sorry, I wasn’t really trying to give a full response to your question, just dropping in with a little “here’s how I do things.”
You’re referring to this question?
What are some examples of their output, that is relevant to … research intuitions? (Or anything related?)
I expect there’s some talking past each other going on, because this question seems surprising to me.
Um. I don’t think there are examples of their output with regard to research or research intuitions. The Circlers aren’t trying to do that, even a little. They’re a funny subculture that engages a lot with an interpersonal practice, with the goals of fuller understanding of self and deeper connection with others (roughly speaking; I’m not sure that they would agree that those are the goals).
But they do pass some of my heuristic checks for “something interesting might be happening here.” So I might go investigate and see what skill there is over there, and how I might be able to re-purpose that skill for other goals that I care about.
Sort of like (I don’t know) if I were a biologist in an alternate world, and I had an inkling that I could do population simulations on a computer, but didn’t know anything about computers. So I go look around and see who does seem to know about computers. And I find a bunch of hobbyists who are playing with circuits and making very simple video games, and have never had a thought about biology in their lives. I might hang out with these hobbyists and learn about circuits and making simple computer games, so that I can learn skills for making population simulations.
This analogy doesn’t quite hold up, because it’s easier to verify that the hobbyists are actually successfully making computer games, and to verify that their understanding of circuits reflects standard physics. The case of the Circlers is less clear-cut, because it is less obvious that they are doing anything real, and because their own models of what they are doing and how are a lot less grounded.
But I think the basic relationship holds up, noting that figuring out which groups of hobbyists are doing real things is much trickier.
Maybe to say it clearly: I don’t think it is obvious, or a slam dunk, or definitely the case (such that if you don’t think so, you must be stupid or misinformed) that “Circling is doing something real.” But also, I have heuristics that suggest that Circling is more interesting than a lot of woo.
In terms of evidence that make me think Circling is interesting (which again, I don’t expect to be compelling to everyone):
Having decent feedback loops.
Social evidence: A lot of people around me, including Anna, think it is really good.
Something like “universality”. (This is hand-wavy) Circling is about “what’s true”, and has enough reach to express or to absorb any way of being or any way the world might be. This is in contrast to many forms of woo, which have an ideology baked into them that reject ways the world could be a priori, for instance that “everything happens for a reason”. (This is not to say that Circling doesn’t have an ideology, or a metaphysics, but it is capable of holding more than just that ideology.)
Circling is concerned with truth, and getting to the truth. It doesn’t reject what’s actually happening in favor of a nicer story.
I can point to places where some people seem much more socially skilled, in ways that relate to circling skill.
Pete is supposedly good at detecting lying.
The thing I said about picking out people who “seemed to be doing something”, who turned out to be circlers.
Somehow people do seem to cut past their own bullshit in circles, in a way that seems relevant to human rationality.
I’ve personally had some (few) meaningful realizations in Circles.
I think all of the above are much weaker evidence than...
“I did x procedure, and got y, large, externally verifiable result”,
or even,
“I did v procedure, and got u, specific, good (but hard to verify externally) result.”
These days, I generally tend to stick to doing things that are concretely and fairly obviously (if only to me) having good immediate effects. If there aren’t pretty immediate, obvious effects, then I won’t bother much with it. And I don’t think circling passes that bar (for me at least). But I do think there are plenty of reasons to be interested in circling, for someone who isn’t following that heuristic strongly.
I also want to say, while I’m giving a sort-of-defense of being interested in circling, that I’m, personally, only a little interested.
I’ve done some ~1000 hours of Circling retreats, for personal reasons rather than research reasons (though admittedly the two are often entangled). I think I learned a few skills, which I could have learned faster, if I knew what I was aiming for. My ability to connect / be present with (some) others, improved a lot. I think I also damaged something psychologically, which took 6 months to repair.
Overall, I concluded it was fine, but I would have done better to train more specific and goal-directed skills like NVC. Personally, I’m more interested in other topics, and other sources of knowledge.
Reading other materials from the Focusing Institute, etc.
Ego and what to do about it
Byron Katie’s The Work (I’m familiar with this from years ago; it has an epistemic core (one key question is “Is this true?”), and PJ Eby mentioned using this process with clients.)
I might check out Eckhart Tolle’s work again (which I read as a teenager)
Learning
Mostly iteration as I learn things on the object level, right now, but I’ve read a lot on deliberate practice, and study methodology, as well as learned general learning methods from mentors, in the past.
Talking with Brienne.
Part of this project will probably include a lit review on spacing effects and consolidation.
General rationality and stuff:
reading Artificial Intelligence: a Modern Approach
reading David Deutsch’s The Beginning of Infinity
rereading IQ and Human Intelligence
The Act of Creation
Old Michael Vassar talks on YouTube
Thinking about the different kinds of knowledge creation, and how rigorous arguments (mathematical proofs, engineering schematics) work.
I mostly read a lot of stuff, without a strong expectation that it will be right.
I think I also damaged something psychologically, which took 6 months to repair.
I’ve been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided, I’d be interested.
I expect, though, that this is too sensitive/personal so please feel free to ignore.
It’s not sensitive so much as context-heavy, and I don’t think I can easily go into it in brief. I do think it would be good if we had a way to better propagate different people’s experiences of things like Circling.
Oh, and as a side note: I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (but not having any idea what), and then found out subsequently that the person I was talking with had done a lot of circling.
The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills.
I would be hard pressed to say exactly what was interesting about those conversations: something like “the way they were asking questions was... something. Probing? Intentional? Alive?” Those words really don’t capture it, but whatever was happening, I had a detector that pinged “something about this situation is unusual.”
Said, I appreciate your point that I used the term “extrospection” in a non-standard way—I think you’re right. The way I’ve heard it used, which is probably idiosyncratic local jargon, is to reference the theory of mind analog of introspection: “feeling, yourself, something of what the person you’re talking with is feeling.” You obviously can’t do this perfectly, but I think many people find that e.g. it’s easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about “mirror neurons,” to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.
Similarly, I think it’s often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one’s own brain. Personally, I’ve found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people’s vague, sub-verbal curiosities and intuitions about e.g. “which questions are most worth asking.”
Circlers don’t generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.
… something like the theory of mind analog of introspection: something like “feeling, yourself, something of what the person you’re talking with is feeling.” You obviously can’t do this perfectly, but I think many people find that e.g. it’s easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about “mirror neurons,” to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.
Hmm. I see, thanks.
Now, you say “You obviously can’t do this perfectly”, but it seems to me a dubious proposition even to suggest that anyone (to a first approximation) can do this at all. Even introspection is famously unreliable; the impression I have is that many people think that they can do the thing that you call ‘extrospection’[1], but in fact they can do no such thing, and are deluding themselves. Perhaps there are exceptions—but however uncommon you might intuitively think such exceptions are, they are (it seems to me) probably a couple of orders of magnitude less common than that.
Similarly, I think it’s often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one’s own brain. Personally, I’ve found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people’s vague, sub-verbal curiosities and intuitions about e.g. “which questions are most worth asking.”
Do you have any data (other than personal impressions, etc.) that would show or even suggest that this has any practical effect? (Perhaps, examples / case studies?)
Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper, that might be a good time to dive in further.
To do good UX you need to understand the mental models that your users have of your software. You can do that by doing a bunch of explicit A/B tests or you can do that by doing skilled user interviews.
A person who doesn’t do skilled user interviews will project a lot of their own mental models of how the software is supposed to work on the users that might have other mental models.
There are a lot of things about how humans relate to the world around them, that they normally don’t share with other people. People with a decent amount of self-awareness know how they reason, but they don’t know how other people reason at the same level.
Circling is about creating an environment where things can be shared that normally aren’t. While it would be theoretically possible that people lie, it feels good to share about one’s intimate experience in a safe environment and be understood.
At one LWCW where I led two circles, there was a person who was in both and who afterwards said “I thought I was the only person who does X in two cases where I now know that other people also do X”.
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.
As far as the claim of getting better at UX design goes: this applies to the UX of things where mental habits matter a lot. It’s not as relevant to where you place your buttons, but it’s very relevant to designing mental interventions in the style that CFAR does.
Evidence is great, but we have few controlled studies of Circling.
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.
This is not an interesting claim. Ok, it’s ‘very similar’. And what of it? What follows from this similarity? What can we expect to be the case, given this? Does skill at Circling transfer to skill at conducting user interviews? How, precisely? What specific things do you expect we will observe?
Evidence is great, but we have few controlled studies of Circling.
So… we don’t have any evidence for any of these claims, in other words?
As far as the claim of getting better at UX design goes: this applies to the UX of things where mental habits matter a lot. It’s not as relevant to where you place your buttons, but it’s very relevant to designing mental interventions in the style that CFAR does.
I don’t think I quite understand what you’re saying here. What does the term ‘UX’ even mean, as you are using it? What does “designing mental interventions” have to do with UX?
Not a CFAR staff member, but particularly interested in this comment.
It’s entirely unclear to me what this means, or why it is necessary / desirable.
One way to frame this would be getting really good at learning tacit knowledge.
Is there some reason to consider the folks who purvey (as you say) “woo-laden authentic relating games” to be ‘specialists’ here?
One way would be to interact with them, notice “hey, this person is really good at this” and then inquire as to how they got so good. This is my experience with seasoned authentic relaters.
Another way would be to realize there’s a hole in understanding related to intuitions, and then start searching around for “people who are claiming to be really good at understanding others’ intuitions”; this might lead you to running into someone as described above, and then seeing if they are indeed good at the thing.
But I don’t at all see what the case for the usefulness of ‘circling’ and similar woo might be.
Let’s say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:
1. A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
2. A newbie designer who hasn’t done that.
To me, that’s the obvious use case for circling. I think there’s also a bunch of obvious benefits on a group level to being able to relate to people better as well.
One way to frame this would be getting really good at learning tacit knowledge.
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”? In fact, is there any reason to believe that being “really good at learning tacit knowledge” is a thing?
One way would be to interact one with them, notice “hey, this person is really good at this” and then inquire as to how they got so good. This is my experience with seasoned authentic relaters.
Hmm, so in your experience, “seasoned authentic relaters” are really good at “simulating the felt senses of their conversational partners in their own minds”—is that right? If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?
Another way would be to realize there’s a hole in understanding related to intuitions
Can you say more about how you came to realize this?
Let’s say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:
A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
A newbie designer who hasn’t done that.
Well, my first step would be to stop wanting that, because it is not a sensible (or, perhaps, even coherent) thing to want.
However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”?
This requires some model of how intuitions work. One model I like is to think of “intuition” as a felt sense or aesthetic that relates to hundreds of little associations you’re picking up from a particular situation.
If I’m quickly able to, in my mind, get a sense of what it feels like for you (i.e. get that same felt sense or aesthetic feel when looking at what you’re looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.
If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?
Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.
Can you say more about how you came to realize this?
I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I’m not sure how CFAR recognized it.
However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
I think this is a coherent stance if you think the general “learning intuitions” skill is impossible. But imagine if it weren’t, would you agree that training it would be useful?
This requires some model of how intuitions work. One model I like to use is […]
Hmm. It’s possible that I don’t understand what you mean by “felt sense”. Do you have a link to any discussion of this term / concept?
That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?
In other words, “my model of intuitions predicts X” is not a sufficient reason to believe X, unless those predictions have been borne out somehow, or the model validated empirically, or both. As always, some examples would be useful.
Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.
It is not clear to me whether this would be evidence (in the strict Bayesian sense); is it more likely that the people from whom I have heard such things would make these claims if they were true than otherwise? I am genuinely unsure, but even if the answer is yes, the odds ratio is low; if evidence, it’s a very weak form thereof.
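In odds form, Bayes’ rule says posterior odds = prior odds × likelihood ratio, which makes the “low odds ratio” point concrete. A toy calculation (all numbers here are illustrative assumptions, not estimates of the actual situation):

```python
# Toy illustration of why near-universal positive testimony is weak
# evidence. All numbers are made-up assumptions for this example.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior = prior * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Suppose people report "seasoned relaters have this skill" 80% of the
# time if the claim is true, but still 60% of the time if it is false
# (selection effects, social desirability, and so on).
likelihood_ratio = 0.8 / 0.6  # ~1.33: barely above 1

prior = 0.25  # prior odds of 1:4 that the claim is true
print(round(posterior_odds(prior, likelihood_ratio), 3))  # 0.333
```

With a likelihood ratio that close to 1, the testimony barely moves the prior, which is the “very weak form of evidence” point above.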
Conversely, if this sort of thing is the only form of evidence put forth, then that itself is evidence against, as it were!
I mean, to me this was just obvious, seeing for instance how little the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I’m not sure how CFAR recognized it.
Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized” would be “let’s try to understand intuition, via circling etc.” rather than “let’s develop intuitions, via deliberate practice, whereupon the results will speak for themselves, and this will also lead to improved understanding”. (Corollary question: have the efforts made toward understanding intuitions yielded an improved emphasis on deliberate practice, and have the results thereof been positive and obvious?)
I think this is a coherent stance if you think the general “learning intuitions” skill is impossible. But imagine if it weren’t, would you agree that training it would be useful?
Indeed, I would, but notice that what you’re asking is different than what you asked before.
In your earlier comment, you asked whether I would find it useful (in the hypothetical “newbie designer” situation) to be dealing with someone who had undertaken a lot of “deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind”.
Now, you are asking whether I judge “training … the general ‘learning intuitions’ skill” to be useful.
Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!
The Wikipedia article on Gendlin’s Focusing has a section trying to describe the felt sense. Taking out the specific part about “the body”, the first part says:
“Gendlin gave the name ‘felt sense’ to the unclear, pre-verbal sense of ‘something’—the inner knowledge or awareness that has not been consciously thought or verbalized,”
which is fairly close to my use of it here.
That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?
One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.
Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized”
I do believe CFAR at one point was teaching deliberate practice and calling it “turbocharged training”. However, if one is really interested in intiution and thinks its’ useful, the next obvious step is to ask “ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?”
Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!
Good catch, this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn’t relate to intuition, or if its’ impossible for one human being to get better at feeling what another is feeling, than these are not equivalent. I happen to think both are true.
One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching and learning side of things, and that by teaching them people get better at picking up intuitions.
Well, my question stands. That is a prediction, sure (if a vague one), but now how do we test it? What concrete observations would we expect, and which are excluded, etc.? What has actually been observed? I’m talking specifics, now; data or case studies—but in any case very concrete evidence, not generalities!
I do believe CFAR at one point was teaching deliberate practice and calling it “turbocharged training”. However, if one is really interested in intiution and thinks its’ useful, the next obvious step is to ask “ok, I have this blunt instrument for teaching intuition called deliberate practice, can we use an understanding of how intuitions work to improve upon it?”
Yes… perhaps this is true. Yet in this case, we would expect to continue to use the available instruments (however blunt they may be) until such time as sharper tools are (a) available, and (b) have been firmly established as being more effective than the blunt ones. But it seems to me like neither (a) (if I’m reading your “at one point” comment correctly), nor (b), is the case here?
Really, what I don’t think I’ve seen, in this discussion, is any of what I, in a previous comment, referred to as “the cake”. This continues to trouble me!
I suspect the CFARians have more delicious cake for you; I haven’t put that much time into circling, and the related connection skills I worked on more than a decade ago have atrophied since.
Things I remember:
- much quicker connection with people
- there were a few things, like exercise, that I wasn’t passionate about but wanted to be. After talking with people who were passionate, I was able to become passionate about those things myself
- I was able to more quickly learn social cognitive strategies by interacting with others who had them
To suggest something more concrete… would you predict that if an X-ist wanted to pass a Y-ist’s ITT, they would have more success if the two of them sat down to circle beforehand? Relative to doing nothing, and/or relative to other possible interventions like discussing X vs Y? For values of X and Y like Democrat/Republican, yay-SJ/boo-SJ, cat person/dog person, MIRI’s approach to AI/Paul Christiano’s approach?
It seems to me that (roughly speaking) if circling were more successful than other interventions, or successful on a wider range of topics, that would validate its utility. Said, do you agree?
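To make the proposed test concrete: one simple analysis would compare ITT pass rates between pairs who circled beforehand and pairs who did something else (e.g. discussed X vs. Y). Below is a minimal sketch; the counts are entirely hypothetical, and the z-test is hand-rolled from the standard library purely for self-containedness (scipy or statsmodels provide equivalents).

```python
from math import erf, sqrt

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Two-sided two-proportion z-test: did group A pass the ITT
    at a different rate than group B?"""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 40 pairs circled first, 40 pairs discussed X vs. Y first.
z, p = two_proportion_z(pass_a=28, n_a=40, pass_b=19, n_b=40)
```

The point is only that the prediction is testable with a small, cheap experiment; nothing here claims such data actually exists.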
I always think of ‘felt sense’ as, not just pre-verbal intuitions, but intuitions associated with physical sensations, be they in my head, shoulders, stomach, etc.
Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.
Yes, I think there’s a distinction between the semantic content of an intuition like “Design A is better than Design B” (that is, how the intuition “cashes out” in terms of decisions), and the felt sense, which always seems to refer to what the intuition is like “from the inside”: for example, a sense of unease when looking at Design A, and of rightness when looking at Design B.
I feel like the word “intuition” can refer to both of these, whereas when I say “felt sense” it always refers to the latter.
I’m speaking for myself here, not any institutional view at CFAR.
When I’m looking at maybe-experts, woo-y or otherwise, one of the main things that I’m looking at is the nature and quality of their feedback loops.
When I think about how, in principle, one would train good intuitions about what other people are feeling at any given moment, I reason “well, I would need to be able to make predictions about that, and get immediate, reliable feedback about if my predictions are correct.” This doesn’t seem that far off from what Circling is. (For instance, “I have a story that you’re feeling defensive” → “I don’t feel defensive, so much as righteous. And...There’s a flowering of heat in my belly.”)
Circling does not seem like a perfect training regime, to my naive sensors, but if I imagine a person engaging in Circling for 5000 hours, or more, it seems pretty plausible that they would get increasingly skilled along a particular axis.
This makes it seem worthwhile training with masters in that domain, to see what skills they bring to bear. And I might find out that some parts of the practice which seemed off the mark from my naive projection of how I would design a training environment, are actually features, not bugs.
This is in contrast to say, “energy healing”. Most forms of energy healing do not have the kind of feedback loop that would lead to a person acquiring skill along a particular axis, and so I would expect them to be “pure woo.”
For that matter, I think a lot of “Authentic Relating” seems like a much worse training regime than Circling, for a number of reasons, including that AR (ironically) seems to more often incentivize people to share warm and nice-sounding, but less-than-true, sentiments than Circling does.
Why would you expect this feedback to be reliable…? It seems to me that the opposite would be the case.
(This is aside from the fact that even if the feedback were reliable, the most you could expect to be training is your ability to determine what someone is feeling in the specific context of a Circling, or Circling-esque, exercise. I would not expect that this ability—even were it trainable in such a manner—would transfer to other situations.)
Finally, and speaking of feedback loops, note that my question had two parts—and the second part (asking for relevant examples of these purported experts’ output) is one which you did not address. Relatedly, you said:
But are they masters?
Note the structure of your argument (which structure I have seen repeated quite a few times, in discussions of this and related topics, including in other sub-threads on this post). It goes like this:
1. There is a process P which purports to output X.
2. On the basis of various considerations, I expect that process P does indeed output X, and indeed that process P is very good at outputting X.
3. …
4. I now conclude that process P does output X, and does so quite well.
5. Having thus concluded, I will now adopt process P (since I want X).
But there’s a step missing, you see. Step 3 should be:
3. Let me actually check to see whether process P in fact does output X (and how well it does so).
So, in this case, you have marshaled certain considerations—
—and on the basis of this thinking, reasoning, imagining, and seeming, have concluded, apparently, that people who’ve done a lot of Circling are “masters” in the domain of having “good intuitions about what other people are feeling at any given moment”.
But… are they? Have you checked?
Where is the evidence?
I’m going to make a general point first, and then respond to some of your specific objections.
General point:
One of the things that I do, and that CFAR does, is trawl through the existing bodies of knowledge (or purported existing bodies of knowledge), that are relevant to problems that we care about.
But there’s a lot of that in the world, and most of it is not very reliable. My response only points at a heuristic that I use in assessing those bodies of knowledge, and weighing which ones to prioritize and engage with further. I agree that this heuristic on its own is insufficient for certifying a tradition or a body of knowledge as correct, or reliable, or anything.
And yes, you need to do further evaluation work before adopting a procedure. In general, I would recommend against adopting a new procedure as a habit, unless it is concretely and obviously providing value. (There are obviously some exceptions to this general rule.)
Specific points:
On the face of it, I wouldn’t assume that it is reliable, but a priori I don’t have that strong a reason to assume that it isn’t.
A posteriori, my experience of being in Circles is that there is sometimes an incentive to obscure what’s happening for you in a circle, but that, at least with skilled facilitation, there is usually enough trust in the process that this doesn’t happen. This is helped by the fact that there are many degrees of freedom in one’s response: I might say, “I don’t want to share what’s happening for me” or “I notice that I don’t want to engage with that.”
I could be typical minding, but I don’t expect most people to lie outright in this context.
That seems like a reasonable hypothesis.
Not sure if it’s a crux, insofar as, if something works well in circling, you can intentionally import the circling context. That is, if you find that you can in fact transfer intuitions, process fears, track what’s motivating a person, etc., effectively in the circling context, an obvious next step might be to try to do this on topics that you care about, in that same context, e.g. Circles on X-risk.
In practice it seems to be a little bit of both: I’ve observed people build skills in circling, that they apply in other contexts, and also their other contexts do become more circling-y.
Sorry, I wasn’t really trying to give a full response to your question, just dropping in with a little “here’s how I do things.”
You’re referring to this question?
I expect there’s some talking past each other going on, because this question seems surprising to me.
Um. I don’t think there are examples of their output with regard to research or research intuitions. The Circlers aren’t trying to do that, even a little. They’re a funny subculture that engages a lot with an interpersonal practice, with the goals of fuller understanding of self and deeper connections with others (roughly; I’m not sure that they would agree that those are the goals).
But they do pass some of my heuristic checks for “something interesting might be happening here.” So I might go investigate and see what skill there is over in there, and how I might be able to re-purpose that skill for other goals that I care about.
Sort of like (I don’t know) if I were a biologist in an alternate world, and I had an inkling that I could do population simulations on a computer, but I didn’t know anything about computers. So I go look around and I see who does seem to know about computers. And I find a bunch of hobbyists who are playing with circuits and making very simple video games, and have never had a thought about biology in their lives. I might hang out with these hobbyists and learn about circuits and making simple computer games, so that I can learn skills for making population simulations.
This analogy doesn’t quite hold up, because it’s easier to verify that the hobbyists are actually successfully making computer games, and to verify that their understanding of circuits reflects standard physics. The case of the Circlers is less clear-cut, because it is less obvious that they are doing anything real, and because their own models of what they are doing and how are a lot less grounded.
But I think the basic relationship holds up, noting that figuring out which groups of hobbyists are doing real things is much trickier.
Maybe to say it clearly: I don’t think it is obvious, or a slam dunk, or definitely the case (and if you don’t think so then you must be stupid or misinformed) that “Circling is doing something real.” But also, I have heuristics that suggest that Circling is more interesting than a lot of woo.
In terms of evidence that make me think Circling is interesting (which again, I don’t expect to be compelling to everyone):
Having decent feedback loops.
Social evidence: A lot of people around me, including Anna, think it is really good.
Something like “universality”. (This is hand-wavy) Circling is about “what’s true”, and has enough reach to express or to absorb any way of being or any way the world might be. This is in contrast to many forms of woo, which have an ideology baked into them that reject ways the world could be a priori, for instance that “everything happens for a reason”. (This is not to say that Circling doesn’t have an ideology, or a metaphysics, but it is capable of holding more than just that ideology.)
Circling is concerned with truth, and getting to the truth. It doesn’t reject what’s actually happening in favor of a nicer story.
I can point to places where some people seem much more socially skilled, in ways that relate to circling skill.
Pete is supposedly good at detecting lying.
The thing I said about picking out people who “seemed to be doing something”, and turned out to be circlers.
Somehow people do seem to cut past their own bullshit in circles, in a way that seems relevant to human rationality.
I’ve personally had some (few) meaningful realizations in Circles
I think all of the above are much weaker evidence than...
“I did x procedure, and got y, large, externally verifiable result”,
or even,
“I did v procedure, and got u, specific, good (but hard to verify externally) result.”
These days, I generally tend to stick to doing things that are concretely and fairly obviously (if only to me) having good immediate effects. If there aren’t pretty immediate, obvious effects, then I won’t bother much with it. And I don’t think circling passes that bar (for me, at least). But I do think there are plenty of reasons to be interested in circling, for someone who isn’t following that heuristic strongly.
I also want to say, while I’m giving a sort-of-defense of being interested in circling, that I’m, personally, only a little interested.
I’ve done some ~1000 hours of Circling retreats, for personal reasons rather than research reasons (though admittedly the two are often entangled). I think I learned a few skills, which I could have learned faster, if I knew what I was aiming for. My ability to connect / be present with (some) others, improved a lot. I think I also damaged something psychologically, which took 6 months to repair.
Overall, I concluded it was fine, but I would have done better to train more specific and goal-directed skills like NVC. Personally, I’m more interested in other topics, and other sources of knowledge.
Some sampling of things that I’m currently investigating / interested in (mostly not for CFAR), and sources that I’m using:
Power and propaganda:
- reading The Dictator’s Handbook and some of the authors’ other work
- reading Kissinger’s books
- rereading Samo’s draft
- some “evil literature” (an example of which is “things Brent wrote”)
- thinking and writing
Disagreement resolution and conversational mediation:
- I’m currently looking into some NVC materials
- lots and lots of experimentation and iteration
Focusing, articulation, and aversion processing:
- mostly iteration with lots of notes
- things like PJ Eby’s excellent ebook
- reading other materials from the Focusing Institute, etc.
Ego and what to do about it:
- Byron Katie’s The Work (I’m familiar with this from years ago; it has an epistemic core (one key question is “Is this true?”), and PJ Eby mentioned using this process with clients)
- I might check out Eckhart Tolle’s work again (which I read as a teenager)
Learning:
- mostly iteration as I learn things on the object level, right now, but I’ve read a lot on deliberate practice and study methodology, as well as learned general learning methods from mentors, in the past
- talking with Brienne
- part of this project will probably include a lit review on spacing effects and consolidation
General rationality and stuff:
- reading Artificial Intelligence: A Modern Approach
- reading David Deutsch’s The Beginning of Infinity
- rereading IQ and Human Intelligence
- The Act of Creation
- old Michael Vassar talks on YouTube
- thinking about the different kinds of knowledge creation, and how rigorous arguments (mathematical proofs, engineering schematics) work
I mostly read a lot of stuff, without a strong expectation that it will be right.
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.
FYI—this link goes to an empty shopping cart. Which of his books did you mean to refer to?
The best links I could find quickly were:
You, Version 2.0.
A Minute to Unlimit You
I’ve been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I’d be interested.
I expect, though, that this is too sensitive/personal so please feel free to ignore.
It’s not sensitive so much as context-heavy, and I don’t think I can easily go into it in brief. I do think it would be good if we had a way to propagate different people’s experiences of things like Circling better.
Oh, and as a side note: I have twice in my life had a short introductory conversation with a person, noticed that something unusual or interesting was happening (without having any idea what), and then found out subsequently that the person I was talking with had done a lot of circling.
The first person was Pete, who I had a conversation with shortly after EAG 2015, before he came to work for CFAR. The other was an HR person at a tech company that I was cajoled into interviewing at, despite not really having any relevant skills.
I would be hard-pressed to say exactly what was interesting about those conversations: something like “the way they were asking questions was...something. Probing? Intentional? Alive?” Those words really don’t capture it, but whatever was happening, I had a detector that pinged “something about this situation is unusual.”
Coming back to this, I think I would describe it as “they seemed like they were actually paying attention”, which was so unusual as to be noteworthy.
Said, I appreciate your point that I used the term “extrospection” in a non-standard way—I think you’re right. The way I’ve heard it used, which is probably idiosyncratic local jargon, is to reference the theory of mind analog of introspection: “feeling, yourself, something of what the person you’re talking with is feeling.” You obviously can’t do this perfectly, but I think many people find that e.g. it’s easier to gain information about why someone is sad, and about how it feels for them to be currently experiencing this sadness, if you use empathy/theory of mind/the thing I think people are often gesturing at when they talk about “mirror neurons,” to try to emulate their sadness in your own brain. To feel a bit of it, albeit an imperfect approximation of it, yourself.
Similarly, I think it’s often easier for one to gain information about why e.g. someone feels excited about pursuing a particular line of inquiry, if one tries to emulate their excitement in one’s own brain. Personally, I’ve found this empathy/emulation skill quite helpful for research collaboration, because it makes it easier to trade information about people’s vague, sub-verbal curiosities and intuitions about e.g. “which questions are most worth asking.”
Circlers don’t generally use this skill for research. But it is the primary skill, I think, that circling is designed to train, and my impression is that many circlers have become relatively excellent at it as a result.
Hmm. I see, thanks.
Now, you say “You obviously can’t do this perfectly”, but it seems to me a dubious proposition even to suggest that anyone (to a first approximation) can do this at all. Even introspection is famously unreliable; the impression I have is that many people think that they can do the thing that you call ‘extrospection’[1], but in fact they can do no such thing, and are deluding themselves. Perhaps there are exceptions—but however uncommon you might intuitively think such exceptions are, they are (it seems to me) probably a couple of orders of magnitude less common than that.
Do you have any data (other than personal impressions, etc.) that would show or even suggest that this has any practical effect? (Perhaps, examples / case studies?)
By the way, it seems to me like coming up with a new term for this would be useful, on account of the aforementioned namespace collision.
Thanks for spelling this out. My guess is that there are some semi-deep cruxes here, and that they would take more time to resolve than I have available to allocate at the moment. If Eli someday writes that post about the Nisbett and Wilson paper, that might be a good time to dive in further.
To do good UX you need to understand the mental models that your users have of your software. You can do that by doing a bunch of explicit A/B tests or you can do that by doing skilled user interviews.
A person who doesn’t do skilled user interviews will project a lot of their own mental models of how the software is supposed to work on the users that might have other mental models.
There are a lot of things about how humans relate to the world around them, that they normally don’t share with other people. People with a decent amount of self-awareness know how they reason, but they don’t know how other people reason at the same level.
Circling is about creating an environment where things can be shared that normally aren’t. While it would be theoretically possible that people lie, it feels good to share about one’s intimate experience in a safe environment and be understood.
At one LWCW where I lead two circles there was a person who was in both and who afterwards said “I thought I was the only person who does X in two cases where I now know that other people also do X”.
Do you claim that people who have experience with Circling, are better at UX design? I would like some evidence for this claim, if so.
My main claim is that the activity of doing user interviews is very similar to the experience of doing Circling.
As far as the claim of getting better at UX design goes: it applies to the UX of things where mental habits matter a lot. It’s not as relevant to where you place your buttons, but it’s very relevant to designing mental interventions in the style that CFAR does.
Evidence is great, but we have few controlled studies of Circling.
This is not an interesting claim. Ok, it’s ‘very similar’. And what of it? What follows from this similarity? What can we expect to be the case, given this? Does skill at Circling transfer to skill at conducting user interviews? How, precisely? What specific things do you expect we will observe?
So… we don’t have any evidence for any of these claims, in other words?
I don’t think I quite understand what you’re saying, here (perhaps due to a typo or two). What does the term ‘UX’ even mean, as you are using it? What does “designing mental intervention” have to do with UX?
Not a CFAR staff member, but particularly interested in this comment.
One way to frame this would be getting really good at learning tacit knowledge.
One way would be to interact with them, notice “hey, this person is really good at this”, and then inquire as to how they got so good. This is my experience with seasoned authentic relaters.
Another way would be to realize there’s a hole in understanding related to intuitions, and then start searching around for people who claim to be really good at understanding others’ intuitions; this might lead you to running into someone as described above, and then seeing if they are indeed good at the thing.
Let’s say that as a designer, you wanted to impart your intuition of what makes good design. Would you rather have:
1. A newbie designer who has spent hundreds of hours of deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind.
2. A newbie designer who hasn’t done that.
To me, that’s the obvious use case for circling. I think there’s also a bunch of obvious benefits on a group level to being able to relate to people better as well.
Is there some reason to believe that being good at “simulating the felt senses of their conversational partners in their own minds” (whatever this means—still unclear to me) leads to being “really good at learning tacit knowledge”? In fact, is there any reason to believe that being “really good at learning tacit knowledge” is a thing?
Hmm, so in your experience, “seasoned authentic relaters” are really good at “simulating the felt senses of their conversational partners in their own minds”—is that right? If so, then the followup question is: is there some way for me to come into possession of evidence of this claim’s truth, without personally interacting with many (or any) “seasoned authentic relaters”?
Can you say more about how you came to realize this?
Well, my first step would be to stop wanting that, because it is not a sensible (or, perhaps, even coherent) thing to want.
However, supposing that I nevertheless persisted in wanting to “impart my intuition”, I would definitely rather have #2 than #1. I would expect that having done what you describe in #1 would hinder, rather than help, the accomplishment of this sort of goal.
This requires some model of how intuitions work. One model I like is to think of “intuition” as a felt sense or aesthetic that relates to hundreds of little associations you’re picking up from a particular situation.
If I’m quickly able to get a sense, in my own mind, of what it feels like for you (i.e., get that same felt sense or aesthetic feel when looking at what you’re looking at), and use circling-like tools to tease out which parts of the environment most contribute to that aesthetic feel, I can quickly create similar associations in my own mind and thus develop similar intuitions.
Possibly you could update by hearing many other people who have interacted with seasoned authentic relaters stating they believe this to be the case.
I mean, to me this was just obvious seeing for instance how little emphasis the rationalists I interact with emphasize things like deliberate practice relative to things like conversation and explicit thinking. I’m not sure how CFAR recognized it.
I think this is a coherent stance if you think the general “learning intuitions” skill is impossible. But imagine if it weren’t, would you agree that training it would be useful?
Hmm. It’s possible that I don’t understand what you mean by “felt sense”. Do you have a link to any discussion of this term / concept?
That aside, the model you have sketched seems implausible to me; but, more to the point, I wonder what rent it pays? Perhaps it might predict, for example, that certain people might be really good at learning tacit knowledge, etc.; but then the obvious question becomes: fair enough, and how do we test these predictions?
In other words, “my model of intuitions predicts X” is not a sufficient reason to believe X, unless those predictions have been borne out somehow, or the model validated empirically, or both. As always, some examples would be useful.
It is not clear to me whether this would be evidence (in the strict Bayesian sense); is it more likely that the people from whom I have heard such things would make these claims if they were true than otherwise? I am genuinely unsure, but even if the answer is yes, the odds ratio is low; if evidence, it’s a very weak form thereof.
Conversely, if this sort of thing is the only form of evidence put forth, then that itself is evidence against, as it were!
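Spelling out the odds-ratio point: let H be the hypothesis in question (“seasoned relaters really are good at this”) and E the testimony (“people who interacted with them say so”). A sketch of the standard Bayesian update on the odds:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
    \times \frac{P(H)}{P(\neg H)}
```

If enthusiasts would offer such testimony nearly as often when H is false as when it is true, the likelihood ratio is close to 1, and the posterior odds barely move; this is exactly the “very weak form of evidence” worry.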
Hmm, I am inclined to agree with your observation re: deliberate practice. It does seem puzzling to me that the solution to the (reasonable) view “intuition is undervalued, and as a consequence deliberate practice is under-emphasized” would be “let’s try to understand intuition, via circling etc.” rather than “let’s develop intuitions, via deliberate practice, whereupon the results will speak for themselves, and this will also lead to improved understanding”. (Corollary question: have the efforts made toward understanding intuitions yielded an improved emphasis on deliberate practice, and have the results thereof been positive and obvious?)
Indeed, I would, but notice that what you’re asking is different than what you asked before.
In your earlier comment, you asked whether I would find it useful (in the hypothetical “newbie designer” situation) to be dealing with someone who had undertaken a lot of “deliberate practice understanding and being able to transfer models of how someone is feeling/relating to different concepts, and being able to model them in their own mind”.
Now, you are asking whether I judge “training … the general ‘learning intuitions’ skill” to be useful.
Your questions imply that these are the same thing. But (even in the hypothetical case where there is such a thing as the latter) they are not!
The Wikipedia article on Gendlin's Focusing has a section that tries to describe the felt sense. Setting aside the specific part about "the body", the first part says:
"Gendlin gave the name 'felt sense' to the unclear, pre-verbal sense of 'something'—the inner knowledge or awareness that has not been consciously thought or verbalized",
which is fairly close to my use of the term here.
One thing it might predict is that there are ways to train the transfer of intuition, from both the teaching side and the learning side, and that by teaching these methods, people get better at picking up intuitions.
I do believe CFAR at one point was teaching deliberate practice and calling it "turbocharged training". However, if one is really interested in intuition and thinks it's useful, the next obvious step is to ask: "OK, I have this blunt instrument for teaching intuition called deliberate practice; can we use an understanding of how intuitions work to improve upon it?"
Good catch; this assumes that my simplified model of how intuitions work is at least partly correct. If the felt sense you get from a particular situation doesn't relate to intuition, or if it's impossible for one human being to get better at feeling what another is feeling, then these are not equivalent. I happen to think both are true.
I see, thanks.
Well, my question stands. That is a prediction, sure (if a vague one), but now how do we test it? What concrete observations would we expect, and which are excluded, etc.? What has actually been observed? I’m talking specifics, now; data or case studies—but in any case very concrete evidence, not generalities!
Yes… perhaps this is true. Yet in this case, we would expect to continue to use the available instruments (however blunt they may be) until such time as sharper tools are (a) available, and (b) have been firmly established as being more effective than the blunt ones. But it seems to me like neither (a) (if I’m reading your “at one point” comment correctly), nor (b), is the case here?
Really, what I don’t think I’ve seen, in this discussion, is any of what I, in a previous comment, referred to as “the cake”. This continues to trouble me!
I suspect the CFARians have more delicious cake for you than I do: I haven't put that much time into circling, and the related connection skills are ones I worked on more than a decade ago, which have atrophied since.
Things I remember:
- much quicker connection with people
- there were a few things, like exercise, that I wasn't passionate about but wanted to be; after talking with people who were passionate, I was able to become passionate about those things myself
- I was able to more quickly learn social cognitive strategies by interacting with others who had them
To suggest something more concrete… would you predict that if an X-ist wanted to pass a Y-ist's ITT (Ideological Turing Test), they would have more success if the two of them sat down to circle beforehand? Relative to doing nothing, and/or relative to other possible interventions, like discussing X vs. Y? For values of X and Y like Democrat/Republican, yay-SJ/boo-SJ, cat person/dog person, MIRI's approach to AI/Paul Christiano's approach?
It seems to me that (roughly speaking) if circling were more successful than other interventions, or successful on a wider range of topics, that would validate its utility. Said, do you agree?
Yes, although I expect the utility of circling over other methods to be dependent on the degree to which the ITT is based on intuitions.
I always think of ‘felt sense’ as, not just pre-verbal intuitions, but intuitions associated with physical sensations, be they in my head, shoulders, stomach, etc.
I think that Gendlin thinks all pre-verbal intuitions are represented with physical sensations.
I don’t agree with him but still use the felt-sense language in these parts because rationalists seem to know what I’m talking about.
Yeah, same; I think this term has experienced some semantic drift, which is confusing. I meant to refer to pre-verbal intuitions in general, not just ones accompanied by physical sensation.
Also in particular—felt sense refers to the qualia related to intuitions, rather than the intuitions themselves.
(Unsure, but I’m suspicious that the distinction between these two things might not be clear).
Yes, I think there's a distinction. "Intuition" can refer to the semantic content of "My intuition is that Design A is better than Design B", or to how the intuition "cashes out" in terms of decisions. This contrasts with the felt sense, which always seems to refer to what the intuition is like "from the inside": for example, a sense of unease when looking at Design A, and a sense of rightness when looking at Design B.
I feel like the word "intuition" can refer to both of these, whereas when I say "felt sense" it always refers to the latter.