However, the essence of the rationalist project is that our intuitions are biased
Why do you trust your explicit intellectual reasoning any more than your intuitions?
Also, intuitions are good for the individual, but, since intuitions are (almost by definition) very hard to communicate, they are not very useful for social coordination.
I don’t really understand what point you’re trying to argue for with this. Is the conclusion “...and therefore we shouldn’t talk about them?” or “...and therefore we shouldn’t use them?” or what?
I agree that if I go around making a lot of decisions based on my intuitions it will be harder to explain those decisions to other people. There are situations in which I want to optimize very hard for making decisions that are explicable in this way (e.g. if I’m a business manager), but there are situations where I don’t, and if I behave as if I’m always in my-decisions-need-to-be-explicable mode then I am missing opportunities to grasp a lot of power.
Why do you trust your explicit intellectual reasoning any more than your intuitions?
Firstly, there are cases where you can definitely trust your explicit reasoning more than your intuitions. For example, if I prove a mathematical theorem then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.
For most questions you can’t really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision. Moreover, if you manage to analyze your intuition and understand its source, you know much better to what extent you should trust it for the question at hand. Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).
I don’t really understand what point you’re trying to argue for with this.
The point I’m trying to argue is, if someone wants to promote Looking to the community as a useful concept and practice, then they should prepare to argue in favor of it using explicit intellectual reasoning.
For example, if I prove a mathematical theorem then I trust it more than just having an intuition that the theorem is true. Similarly, if I use physics to compute something about a physical phenomenon, I trust it more than just having an intuition about the physical phenomenon.
I think the situation is much more complicated than this, at least for experts. Cf. Terence Tao’s description of the pre-rigorous, rigorous, and post-rigorous stages of mathematical development. Mathematical papers often have incorrect proofs of correct statements (and the proofs are often fixable), because mathematicians’ intuitions about mathematics are so well-developed that they lead them to correct conjectures even when attempts to write down proofs go awry, as they easily can when a long proof offers so many opportunities to make mistakes. My experience has definitely been that the longer a proof / computation gets, the more I trust my intuitions if they happen to disagree. (But of course I trained my intuitions on many previous proofs / computations.)
For most questions you can’t really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision
Why do you believe this? Have you actually tried? As query says, in many situations (say, social skills), adding explicit reasoning to your intuitions can make you worse off, at least at first.
This is also not the position you started off with (not that I’m asking you for consistency, just noting that we started somewhere different than this and that’s how we got here). You asked:
If you don’t understand it on an intellectual level then how can you know whether it’s worth doing?
This seems to imply a fairly different cognitive algorithm than “combine your intuition and your explicit reasoning” (which, to be clear, is a thing I actually do, but probably a different way than you), namely “let your explicit reasoning veto anything it doesn’t think is worth doing.” Why do you think this is a good idea? In my experience this is an opportunity for various parts of you to find clever explicit arguments for not doing things that they’re trying to avoid for unrelated reasons (e.g. fear of social judgment).
Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).
Again, why do you believe that this works?
The point I’m trying to argue is, if someone wants to promote Looking to the community as a useful concept and practice, then they should prepare to argue in favor of it using explicit intellectual reasoning.
Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself. For now I am basically content to have people either decide that Looking is worth trying to understand and trying to understand it, or decide that it isn’t. But I get the sense that this would be unsatisfying to you in some way.
With explicit intellectual reasoning, there’s a chance for error correction. If someone’s initial reasoning is wrong, others can point it out or they can eventually realize it on their own with further reasoning, and it seems possible to make progress towards the truth over time this way. (See science, math, and philosophy.) I’m worried that if Looking is wrong on some question and makes me unjustifiably certain about it as well as discount explicit reasoning about that question, I won’t be able to back out of that epistemic state.
I’m also worried that LW as a whole will get into such a state and not be able to back out of it, which makes me want to also discourage other people from trying Looking without first having an explicit understanding of its epistemic nature. I want to have answers to questions like:
How does Looking work (especially on questions that are not confined to the internals of one’s own mind)?
How confident should we be about the answers that Looking gives? (Do people tend to be overconfident about Looking and if so how should we correct for that both as individuals and as a community?)
If Looking gives systematically wrong answers to certain questions (i.e., most people get the same wrong answer via Looking), how will that eventually get corrected?
Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself.
Here’s my prior:
P(hearing explicit reasoning about X makes it harder to learn to do X | X is a useful epistemic tool) is low
P(X claims that hearing explicit reasoning about X makes it harder to learn to do X | X is a memetic virus trying to evade my epistemic immune system) is high
So such a claim makes me update towards thinking that X is a memetic virus. This kind of reasoning unfortunately makes me less likely to be able to learn X in the (hopefully rare) situation where X was actually a useful epistemic tool, but I think that’s just a price I have to pay to maintain my epistemic hygiene?
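(To make the direction and size of that update concrete, here is a minimal sketch in the odds form of Bayes’ rule, with made-up numbers that are purely illustrative and not anyone’s actual credences:)

```python
# Minimal sketch, made-up numbers only: odds-form Bayes on the two hypotheses above.
# H_tool  = "X is a useful epistemic tool"
# H_virus = "X is a memetic virus trying to evade my epistemic immune system"
# E       = "X claims that hearing explicit reasoning about X makes it harder to learn to do X"

p_e_given_tool = 0.05    # assumed low, as in the first line of the prior
p_e_given_virus = 0.60   # assumed high, as in the second line

prior_odds_virus_to_tool = 0.2                       # illustrative prior odds only
likelihood_ratio = p_e_given_virus / p_e_given_tool  # = 12: the evidence favors "virus" 12:1
posterior_odds = prior_odds_virus_to_tool * likelihood_ratio

print(likelihood_ratio, posterior_odds)  # 12.0 2.4 -- observing the claim shifts the odds toward "virus"
```

The particular numbers are invented; the point is only that the likelihood ratio, not the raw probabilities, drives the size and direction of the update.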
I’m worried that if Looking is wrong on some question and makes me unjustifiably certain about it as well as discount explicit reasoning about that question, I won’t be able to back out of that epistemic state.
There are lots of ways to find out that you’re wrong about something. Instead of doing explicit reasoning you can make predictions and run experiments. Looking doesn’t mean not being on board with beliefs having to pay rent.
Example: when I Look at people, I get a sense of what’s going on with them to cause them to behave in certain ways, and I can test that sense by using it to make predictions and running experiments to check them (e.g. asking them a certain kind of question to see their response), in addition to doing explicit reasoning and seeing if the explicit reasoning comes to similar conclusions. Looking is not meant to displace explicit reasoning, but it is a different tool than explicit reasoning, and sometimes I want to use one or the other or both.
Subexample: I met a guy recently at a circling workshop, and after he had said about 10 words I was highly confident, based on how I was reading his tone of voice and body language (which manifested as a feeling of distrust in my guts), and also based partly on his actual words, that he was doing a thing I would describe as “fake circling” (which I also used to do). My explicit reasoning agreed, especially once I learned more about his life circumstances (loosely, he was lonely in a way I expected to cause him to want to do “fake circling” as a way to feel connected to people).
I circled with him and told him I distrusted him twice, the second time when he was doing the fake circling thing, and the circle digested that for a bit. I didn’t tell him my hypothesis. Then he was in another circle that I wasn’t in where the circle independently revealed, and he agreed, that he was doing the thing I strongly suspected he was doing (but in a bit less detail than I had, I think). And he partially learned to stop, and my guts felt less distrust when he did. Then I told him my hypothesis in more detail and he agreed.
(If I were making predictions they would have been things like “I predict he’s not going to make any progress on the thing he came here to fix until he changes such that my guts stop distrusting him.” It’s tricky to score this prediction though.)
How does Looking work (especially on questions that are not confined to the internals of one’s own mind)?
What’s unsatisfying about Kaj’s original post above as an answer to this question?
How confident should we be about the answers that Looking gives? (Do people tend to be overconfident about Looking and if so how should we correct for that both as individuals and as a community?)
The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.
If Looking gives systematically wrong answers to certain questions (i.e., most people get the same wrong answer via Looking), how will that eventually get corrected?
We make predictions and run experiments.
Something also feels off to me about the framing of this question. Looking is not a monolithic thing. People’s minds are different, and some people will be able to use it well and some people won’t. There are supplementary skills it’s useful to have in addition to just being able to Look (for example, precisely the sort of epistemic skills that LWers already have). The question feels a bit like asking about whether reading books gives systematically wrong answers to certain questions. Well, it depends on what books you’re reading and how good you are at interpreting the contents of what you read!
So such a claim makes me update towards thinking that X is a memetic virus. This kind of reasoning unfortunately makes me less likely to be able to learn X in the (hopefully rare) situation where X was actually a useful epistemic tool, but I think that’s just a price I have to pay to maintain my epistemic hygiene?
This seems fine and sensible to me.
What’s unsatisfying about Kaj’s original post above as an answer to this question?
I think it’s a step in the right direction, but I’m not sure if his explanation is correct, or that different people are even talking about the same thing when they say “Looking”.
The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.
Take this example of Looking:
The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.
I had interpreted this to mean that you were getting the answer of “playing it well is Goodharting on truth-seeking” directly out of Looking. If that’s not the case, can you explain what the data was, and how that led you to the conclusion of “playing it well is Goodharting on truth-seeking”? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn’t be too hard to find, via our normal observations, intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)
But I’m not seeing people say “Here’s some data I gathered via Looking, which leads to hypothesis X and predictions Y; let’s test it by doing these experiments.” Instead they just say “I think X because of Looking,” like in the sentence I quoted above, or Val’s “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta”.
I had interpreted this to mean that you were getting the answer of “playing it well is Goodharting on truth-seeking” directly out of Looking. If that’s not the case, can you explain what the data was, and how that led you to the conclusion of “playing it well is Goodharting on truth-seeking”? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn’t be too hard to find, via our normal intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)
I don’t have a cached answer to this; Looking is preverbal, so I have to do a separate cognitive task of introspection to give a verbal answer to this question. (I’m also somewhat more confident than I was that I’m doing the thing that Kaj and Val call Looking but certainly not 100% confident. Maybe 90%.)
Okay, so here’s an analogy: when I was in 8th grade I read Atlas Shrugged, and it successfully invaded my mind and turned me into an objectivist for several months. I went around saying things like “gift-giving is immoral” (I also gave people gifts, and refused to notice this discrepancy) and feeling very smug. At some point it… wore off? And then I looked back on my behavior, and now that there wasn’t this “I said an objectivist thing which meant it was the best thing” thing getting in the way, I thought to myself, what the fuck have I been doing? Then I decided I was too incompetent to do philosophy and resolved to not try doing it again until I got more life experience or something.
The moment of objectivism wearing off is a bit like what it feels like to Look at the LW epistemic game. I’m seeing the same things I always saw, in some sense, but there’s a distorting thing that was getting in the way that’s gone now (according to me), which changes the frame I’m using to process and verbally label what I’m seeing. Those verbal labels, which I assign in a separate cognitive step that takes place after the Looking, are something like “oh, look, we’re a bunch of monkeys slinging words around while being terrified that some of the words will cause us to have false beliefs or something, whatever that even means, and meanwhile the set of monkeys most worried about this is essentially disjoint from the set of monkeys posting updates about what they’re actually doing in the world with their beliefs.”
Getting slightly closer to the data itself, I’ve been seeing examples of people making arguments that feel to me like motivated reasoning (this is not the Looking step, the Looking facilitates feeling this way but it is not the same thing as feeling this way) in a way that feels similar to when people give fake justifications for their behavior in circles, and when I introspect on the flavor of the motivated reasoning I get “optimizing for accepting arguments that are outside-view defensible instead of optimizing for truth-seeking.” This is again not Looking, but it’s a thought I’ve been having since reading Inadequate Equilibria and the Hero Licensing dialogue in particular.
Then I check all this against my explicit reasoning, which agrees that Goodharting is easy and the default outcome in situations like this. The obvious problem, according to my explicit reasoning, is that there’s no easy way to gain status on LW by being really right about things (as there would be if, for example, a prediction market were explicitly a big and important part of LW culture), and instead the way you gain status is by getting other LWers to agree with you, or maybe writing impressively in a certain way, which is very different.
The point is less that I couldn’t have arrived at this conclusion without Looking, and more that without Looking, it may never have occurred to me to even try (because maybe some part of me is worried that if I Look at the LW epistemic game it will become harder for me to play it, so I might lose status on LW, or something like that).
Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.
There’s a thing that can happen after you Look that you might call a “flash of insight”; you suddenly realize something in a way that feels similar to the way proofs-by-picture can cause you to suddenly see the truth of a mathematical fact. Of course this is an opaque process and you’d be justified in not trusting it in yourself and others, but in the same way that you’d be justified in not trusting your intuitions or the intuitions of others generally. That’s not specific to Looking.
“Everyone has bodhicitta,” to the extent that I understand what that means, does seem to me to be a hypothesis with testable predictions, although those predictions are somewhat subtle. Val does describe a few things after your quote that can be interpreted as such predictions. It’s also something else that’s less of a belief and more of a particular way of orienting towards people, again as far as I understand it.
I’m still not sure what exactly was the data that you got from Looking. You said previously “What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.” In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right? If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?
Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.
What’s the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn’t want to use that frame, what should I do instead?
Val does describe a few things after your quote that can be interpreted as such predictions.
I’m not seeing anything that looks like testable predictions. Can you spell them out?
In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right?
You can do less direct things, like having other nonverbal parts of your mind process the data, introspecting / Focusing to get some words out of those parts, and then doing explicit reasoning on the words.
If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?
I already tried to do that; the data gets processed into felt senses and I tried to give my Focusing labels for the felt senses. I probably didn’t do the best job but I don’t feel up to putting in the level of effort that feels like it would be necessary to do substantially better.
Here’s another analogy: if you’re face-blind, you’re getting the same raw sensory input from your eyes that everyone else is (up to variations between your eyes, whatever), but the part of most people’s minds explicitly dedicated to processing and recognizing faces is not active or at least is weak, so you can see a face and process it as “this face with this kind of eyes and this nose and this hair” where someone else would see the same face and process it as “Bob’s face.”
Looking is sort of like becoming less face-blind. (Only sort of, this is really not a great analogy.) And it’s unclear how one would go about communicating what’s different about your mind when this happens, other than “now it’s immediately clear to me that that’s Bob’s face, whereas before I would have had to use explicit reasoning to figure that out.”
What’s the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn’t want to use that frame, what should I do instead?
Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)
Edit: I misunderstood Wei Dai’s question; see below.
I don’t have a good verbal description of the alternative frame (nor do I have only one alternative frame), but the way you correct epistemic errors in it is to smash into the territory repeatedly.
(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)
I’m not seeing anything that looks like testable predictions. Can you spell them out?
What about this does not look like a testable prediction to you:
Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)
In practice, doesn’t that just translate to “shut up and don’t question it”?
(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)
I guess it depends on what field you’re working in, so perhaps part of the disagreement here is caused by us coming from different backgrounds. I think in fields with short, strong feedback cycles, like tennis and math, where epistemic errors aren’t very costly, you can afford not to worry about epistemic errors much and just depend on smashing into the territory for error correction. In other fields, like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.
In principle we could have different sets of norms for different subject areas on LW, and “shut up and don’t question it” (or perhaps more charitably, “shut up and just try it”) could be acceptable for certain areas but not others. If that ends up happening I definitely want social epistemology itself to be an area where we worry a lot about epistemic errors.
What about this does not look like a testable prediction to you:
I was asking about how epistemic errors caused by Looking can be corrected. I think in that context “prediction” has to literally mean prediction, of a future observation, and not something that’s already known like people building monuments to honor lost loved ones.
In practice, doesn’t that just translate to “shut up and don’t question it”?
This seems really uncharitable, by far the least charitable you’ve been in this conversation so far (where I’ve generally been 100% happy with your behavior on the meta level). I have not asked you to shut up and I have not asked you not to question anything. You asked a question about what things look like in an alternative frame and I gave an honest answer from that frame; I don’t like being punished for answering the question you asked in the way you requested I answer it.
Edit: The above was based on a misunderstanding of Wei Dai’s question about what he should do instead; see below.
Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.
In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.
Feedback cycles in circling are very short, although pretty noisy unless the facilitator is quite skilled. Feedback cycles in ordinary social interaction can also be very short, although even noisier.
I have not asked you to shut up and I have not asked you not to question anything.
To clarify, I wasn’t saying that you were doing either of those things. My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say “shut up and don’t question it”, namely that it would make it very hard to question certain conclusions and correct potential errors. (Again, I don’t think you’re doing this now, just proposing it as something that should be acceptable.)
Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.
Some examples please? I honestly can’t think of anything I know that can only be transmitted in person.
My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say “shut up and don’t question it”, namely that it would make it very hard to question certain conclusions and correct potential errors.
I don’t know that I was proposing an epistemic norm. What I did was tell you what interaction with the territory you would need to have in order to be able to understand a thing, in the same way that if we lived in a village where nothing was colored red and you asked me “what would I have to do to understand the ineffable nature of redness?” I might say “go over to the next village and ask to see their red thing.”
Some examples please? I honestly can’t think of anything I know that can only be transmitted in person.
Playing basketball? Carpentry? Singing? Martial arts? There are plenty of physical skills you could try teaching online: you probably wouldn’t get very far trying to teach them via text, and probably somewhat farther via video, but in-person instruction, especially because it allows for substantial interaction and short feedback cycles, is really hard to replace.
I am consistently surprised at how different my intuitions on this topic are from those of the people I’ve been disagreeing with here. My prior is pretty strong that most interesting skills can only be taught to a high level of competence in person, and that appearances to the contrary have been skewed by the availability heuristic because of school, etc. This seems to me like a totally unobjectionable point and yet it keeps coming up, possibly as a crux even.
There seems to be a related thing about people consistently expecting inferential / experiential distances to be short, when again my prior is that there’s no reason to expect either of these things to be true most of the time. And a third related thing where people keep expecting skill at X to translate into skill at explaining X.
To be very, very clear about this: I am in fact not asking you to update strongly in favor of any of the claims I or others have made about Looking or related topics, because I in fact think not enough evidence has been produced for such strong updates, and that the strongest such evidence can really only be transmitted in person (or rather, that I currently lack the skill to produce satisfying evidence in any way other than in person). I view what I’ve been doing as proposing hypotheses that people can consider, experiment with, or reject in whatever way they want, and also defending the ability of other people to consider, experiment with, etc. these hypotheses without being labeled epistemically suspect.
I don’t know that I was proposing an epistemic norm.
In that case there was a misunderstanding somewhere. Here’s my understanding/summary of the course of our conversation: I said that explicit reasoning is useful for error correction. You said we can apply explicit reasoning to the data generated by Looking, and also check predictions for error correction. I said people who talk about Looking don’t tend to talk in terms of data, hypothesis and prediction. You said they may not want to use that frame. I asked what I should ask about instead (meaning how else can I try to encourage error correction, since that was the reason for wanting to ask about data and prediction in the first place). You said “Meet him in person and ask him to show you the way in which everyone has bodhicitta.” I interpreted that as a proposed alternative (or addition) to the norm of asking for data and predictions when someone proposes a new idea.
I guess the misunderstanding happened when I asked you “what should I do instead?” and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn’t want to use the frame of data, hypothesis and prediction. I think “Meet him in person and ask him to show you the way in which everyone has bodhicitta” would not serve my purpose because 1) in most cases nobody would be willing to do that, so most new ideas would go unchallenged, and 2) it wouldn’t accomplish the goal of error correction if Looking causes most people to make the same errors.
Hopefully that clears up the misunderstanding, in which case do you want to try answering my question again?
I guess the misunderstanding happened when I asked you “what should I do instead?” and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn’t want to use the frame of data, hypothesis and prediction.
Oh. Yes, that’s exactly what happened. Thanks for writing down that summary.
I don’t really have a good answer to this question (if I did, it would be “try to encourage Val to use the frame of data, hypothesis and prediction, just don’t expect him to do it all the time”) so I’ll just say some thoughts. In my version of the frame Val is using there’s something a bit screwy about thinking of “everyone has bodhicitta” as a belief / hypothesis that makes testable predictions. That’s not quite the data type of that assertion; it’s a data type imported over from the LW epistemic frame and it’s not entirely natural here.
Here’s a related example that might be easier to think about: consider the assertion “everyone wants to be loved.” Interpreted too literally, it’s easy to find counterexamples: some people will claim to be terrified of the idea of being loved (for example, because in their lives the people who love them, like their parents, have consistently hurt them), and other people will claim to not care one way or the other, and on some level they may even be right. But there’s a sense in which these are defensive adaptations built on top of an underlying desire to be loved, which is plausibly a human universal for sensible evo-psych reasons (if your tribe loves you they won’t kick you out, they’ll take care of you even if you stop contributing temporarily because of sickness or injury, etc). And there’s an additional sense in which thinking in terms of this evo-psych model, while helpful as a sanity check, misses the point, because it doesn’t really capture the internal experience of being a human who wants to be loved, and seeing that internal experience from the outside as another human.
So one way to orient is that “everyone wants to be loved” is partially a hypothesis that makes testable predictions, suitably interpreted, but it’s also a particular choice of orienting towards other humans: choosing to pay attention to the level at which people want to be loved, as opposed to the level at which people will make all sorts of claims about their desire to be loved.
A related way of orienting towards it is that it’s a Focusing label for a felt sense, which is much closer to the data type of “everyone has bodhicitta” as I understand it. Said another way, it’s poetry. That doesn’t mean it doesn’t have epistemic content—a Val who realizes that everyone has bodhicitta anticipates somewhat different behavior from his fellow humans than a Val who doesn’t—but it does mean the epistemic content may be difficult to verbally summarize.
I think the situation is much more complicated than this, at least for experts.
I agree that the situation is more complicated, but I disagree that it is “much more complicated”. Yes, mathematicians rely on intuition to fill in the gaps in proofs and to seek out the errors in proofs. And yet, it is uncontroversial that having a proof should make you much more confident in a mathematical statement than just having an intuition. In reality, there is a spectrum that goes roughly “intuition that T is correct” → “informal argument for T” → “idea for how to prove T” → “sketch of a proof of T” → “unvetted proof of T” → “vetted, peer-reviewed proof of T” → “machine verifiable formal proof of T”.
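(For concreteness, the far end of that spectrum looks roughly like the following: a trivial, purely illustrative Lean 4 snippet, where `Nat.add_comm` is a standard library lemma. Trusting these statements requires trusting the proof checker rather than the reader’s intuition.)

```lean
-- The proof assistant, not the reader, verifies these.
example : 2 + 2 = 4 := rfl
theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```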
Why do you believe this? Have you actually tried?
Have I actually tried what? As to why I believe this, I think I already gave an “explicit reasoning” argument; and, yes, my intuition and life experience confirm it, although this is not something that I can transmit to you directly.
This seems to imply a fairly different cognitive algorithm than “combine your intuition and your explicit reasoning” (which, to be clear, is a thing I actually do, but probably a different way than you), namely “let your explicit reasoning veto anything it doesn’t think is worth doing.”
This is a wrong way to look at this. Intuition and explicit reasoning are not two separate judges that give two separate verdicts. Combining intuition and explicit reasoning doesn’t mean averaging the results. The way it works is, when your intuition and reasoning disagree, you should try to understand why. You should pit them against each other and let them fight it out, and in the end you have something that resembles a system of formal arguments with intuition answering some sub-queries, and your reasoning and intuition both endorse the result. This is what I mean by “understanding on an intellectual level”.
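(A toy rendering of that picture, offered as an illustrative sketch rather than anyone’s actual algorithm, with hypothetical sub-queries: explicit reasoning supplies the argument structure, and intuition is consulted as an oracle on the sub-queries it is better placed to answer.)

```python
# Illustrative sketch only: an explicit argument whose sub-queries are answered either by
# explicit checks or by an intuition "oracle", and whose verdict stands only if both agree.

def intuition(query: str) -> bool:
    """Stand-in for a gut judgment on a sub-query (hypothetical answers, for illustration)."""
    gut_answers = {
        "the informal argument matches my experience": True,
        "nothing about this situation feels off": True,
    }
    return gut_answers.get(query, False)

def explicit_argument() -> bool:
    explicit_checks = [
        2 + 2 == 4,  # sub-queries settled by explicit reasoning / computation
    ]
    intuited_checks = [
        intuition("the informal argument matches my experience"),
        intuition("nothing about this situation feels off"),
    ]
    # Endorse the conclusion only when reasoning and intuition both sign off.
    return all(explicit_checks) and all(intuited_checks)

print(explicit_argument())  # True
```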
Okay, but Kaj and Val have both been saying (and I agree) that doing this runs the risk of making it harder to actually communicate Looking itself.
I don’t insist that you only use explicit reasoning. Feel free to use metaphors, koans, poetry and whatnot. But you should also explain it with explicit reasoning.
For now I am basically content to have people either decide that Looking is worth trying to understand and trying to understand it, or decide that it isn’t. But I get the sense that this would be unsatisfying to you in some way.
Well, if you are saying “I don’t want to convince everyone or even most people”, that’s your prerogative of course. I just feel that the point of this forum is trying to have discussions whose insights will percolate across the entire community. Also, I am personally interested in understanding what Looking is about, and I feel that the explanations given so far leave me somewhat confused (although this last attempt by Kaj was significant progress).
For most questions you can’t really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.
I don’t think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trustable), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.
Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you trained your intuition to offset these biases automatically (assuming this is possible at all).
I also don’t think this is as clear cut as you’re making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition → biased, explicit reasoning → unbiased.
Explicit reflection is indeed a powerful tool, but I think there’s a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) in a way that is illegible to others or even to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.
I don’t think this is true; intuition + explicit reasoning may have more of a certain kind of inside view trust (if you model intuition as not having gears that can be trustable), but intuition alone can definitely develop more outside-view/reputational trust.
I don’t see it this way. I think that both intuition and explicit reasoning are relevant to both the inside view and the outside view. It’s just that the input of the inside view is the inner structure of the question, and the input of the outside view is the reference category inside which the question resides. People definitely use the outside view in debates by communicating it verbally, which is hard to do with pure intuition. I think that ideally you should combine intuition with explicit reasoning and also combine the inside view with the outside view.
I also don’t think this is as clear cut as you’re making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition → biased, explicit reasoning → unbiased.
You can certainly have biases about these things, but those biases can be regarded as coming from your intuition. You can think of it as P vs. NP: solving problems is hard, but verifying solutions is easy. To solve a problem you have to use intuition, but to verify the solution you rely more on explicit reasoning. And since verifying is so much easier, there is much less room for bias.
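(As a small illustration of the “verifying is much easier than solving” point, loosely in the spirit of P vs. NP and not tied to anything specific in the discussion: finding a factor of a number takes a search, while checking a proposed factorization is a single multiplication.)

```python
# Illustrative only: finding a solution takes search; verifying a proposed solution is cheap.

def find_factor(n: int) -> int:
    """Search for a nontrivial factor by trial division (the slow, generative step)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Check a proposed factorization (the fast, verification step)."""
    return p > 1 and q > 1 and p * q == n

n = 10007 * 10009          # both factors happen to be prime
p = find_factor(n)         # search: thousands of trial divisions
print(verify_factorization(n, p, n // p))  # check: one multiplication -> True
```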
I don’t have a cached answer to this; Looking is preverbal, so I have to do a separate cognitive task of introspection to give a verbal answer to this question. (I’m also somewhat more confident than I was that I’m doing the thing that Kaj and Val call Looking but certainly not 100% confident. Maybe 90%.)
Okay, so here’s an analogy: when I was in 8th grade I read Atlas Shrugged, and it successfully invaded my mind and turned me into an objectivist for several months. I went around saying things like “gift-giving is immoral” (I also gave people gifts, and refused to notice this discrepancy) and feeling very smug. At some point it… wore off? And then I looked back on my behavior, and now that there wasn’t this “I said an objectivist thing which meant it was the best thing” thing getting in the way, I thought to myself, what the fuck have I been doing? Then I decided I was too incompetent to do philosophy and resolved to not try doing it again until I got more life experience or something.
The moment of objectivism wearing off is a bit like what it feels like to Look at the LW epistemic game. I’m seeing the same things I always saw, in some sense, but there’s a distorting thing that was getting in the way that’s gone now (according to me), which changes the frame I’m using to process and verbally label what I’m seeing. Those verbal labels, which I assign in a separate cognitive step that takes place after the Looking, are something like “oh, look, we’re a bunch of monkeys slinging words around while being terrified that some of the words will cause us to have false beliefs or something, whatever that even means, and meanwhile the set of monkeys most worried about this is essentially disjoint from the set of monkeys posting updates about what they’re actually doing in the world with their beliefs.”
Getting slightly closer to the data itself, I’ve been seeing examples of people making arguments that feel to me like motivated reasoning (this is not the Looking step, the Looking facilitates feeling this way but it is not the same thing as feeling this way) in a way that feels similar to when people give fake justifications for their behavior in circles, and when I introspect on the flavor of the motivated reasoning I get “optimizing for accepting arguments that are outside-view defensible instead of optimizing for truth-seeking.” This is again not Looking, but it’s a thought I’ve been having since reading Inadequate Equilibria and the Hero Licensing dialogue in particular.
Then I check all this against my explicit reasoning, which agrees that Goodharting is easy and the default outcome in situations like this. The obvious problem, according to my explicit reasoning, is that there’s no easy way to gain status on LW by being really right about things—for example, if a prediction market was explicitly a big and important part of LW culture—and instead the way you gain status is by getting other LWers to agree with you, or maybe writing impressively in a certain way, which is very different.
The point is less that I couldn’t have arrived at this conclusion without Looking, and more that without Looking, it may never have occurred to me to even try (because maybe some part of me is worried that if I Look at the LW epistemic game it will become harder for me to play it, so I might lose status on LW, or something like that).
Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.
There’s a thing that can happen after you Look that you might call a “flash of insight”; you suddenly realize something in a way that feels similar to the way proofs-by-picture can cause you to suddenly see the truth of a mathematical fact. Of course this is an opaque process and you’d be justified in not trusting it in yourself and others, but in the same way that you’d be justified in not trusting your intuitions or the intuitions of others generally. That’s not specific to Looking.
“Everyone has bodhicitta,” to the extent that I understand what that means, does seem to me to be a hypothesis with testable predictions, although those predictions are somewhat subtle. Val does describe a few things after your quote that can be interpreted as such predictions. It’s also something else that’s less of a belief and more of a particular way of orienting towards people, again as far as I understand it.
I’m still not sure what exactly was the data that you got from Looking. You said previously “What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.” In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right? If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?
What’s the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn’t want to use that frame, what should I do instead?
I’m not seeing anything that look like testable predictions. Can you spell them out?
You can do less direct things, like having other nonverbal parts of your mind process the data, introspecting / Focusing to get some words out of those parts, and then doing explicit reasoning on the words.
I already tried to do that; the data gets processed into felt senses and I tried to give my Focusing labels for the felt senses. I probably didn’t do the best job but I don’t feel up to putting in the level of effort that feels like it would be necessary to do substantially better.
Here’s another analogy: if you’re face-blind, you’re getting the same raw sensory input from your eyes that everyone else is (up to variations between your eyes, whatever), but the part of most people’s minds explicitly dedicated to processing and recognizing faces is not active or at least is weak, so you can see a face and process it as “this face with this kind of eyes and this nose and this hair” where someone else would see the same face and process it as “Bob’s face.”
Looking is sort of like becoming less face-blind. (Only sort of, this is really not a great analogy.) And it’s unclear how one would go about communicating what’s different about your mind when this happens, other than “now it’s immediately clear to me that that’s Bob’s face, whereas before I would have had to use explicit reasoning to figure that out.”
Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)
Edit: I misunderstood Wei Dai’s question; see below.
I don’t have a good verbal description of the alternative frame (nor do I have only one alternative frame), but the way you correct epistemic errors in it is to smash into the territory repeatedly.
(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)
What about this does not look like a testable prediction to you:
In practice, doesn’t that just translate to “shut up and don’t question it”?
I guess it depends on what field you’re working in, so perhaps part of the disagreement here is caused by us coming from different backgrounds. I think in fields with short, strong feedback cycles like tennis and math, where epistemic errors aren’t very costly, you can afford to not worry about epistemic errors much and just depend on smashing into the territory for error correction. In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.
In principle we could have different sets of norms for different subject areas on LW, and “shut up and don’t question it” (or perhaps more charitably, “shut up and just try it”) could be acceptable for certain areas but not others. If that ends up happening I definitely want social epistemology itself to be an area where we worry a lot about epistemic errors.
I was asking about how epistemic errors caused by Looking can be corrected. I think in that context “prediction” has to literally mean prediction, of a future observation, and not something that’s already known like people building monuments to honor lost loved ones.
This seems really uncharitable, by far the least charitable you’ve been in this conversation so far (where I’ve generally been 100% happy with your behavior on the meta level). I have not asked you to shut up and I have not asked you not to question anything. You asked a question about what things look like in an alternative frame and I gave an honest answer from that frame; I don’t like being punished for answering the question you asked in the way you requested I answer it.
Edit: The above was based on a misunderstanding of Wei Dai’s question about what he should do instead; see below.
Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.
Feedback cycles in circling are very short, although pretty noisy unless the facilitator is quite skilled. Feedback cycles in ordinary social interaction can also be very short, although even noisier.
To clarify, I wasn’t saying that you were doing either of those things. My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say “shut up and don’t question it”, namely that it would make it very hard to question certain conclusions and correct potential errors. (Again, I don’t think you’re doing this now, just proposing it as something that should be acceptable.)
Some examples please? I honestly can’t think of anything I know that can only be transmitted in person.
I don’t know that I was proposing an epistemic norm. What I did was tell you what interaction with the territory you would need to have in order to be able to understand a thing, in the same way that if we lived in a village where nothing was colored red and you asked me “what would I have to do to understand the ineffable nature of redness?” I might say “go over to the next village and ask to see their red thing.”
Playing basketball? Carpentry? Singing? Martial arts? There are plenty of physical skills you could try teaching online: you probably wouldn’t get very far via text, somewhat farther via video, but in-person instruction, especially because it allows for substantial interaction and short feedback cycles, is really hard to replace.
I am consistently surprised at how different my intuitions on this topic are from those of the people I’ve been disagreeing with here. My prior is pretty strong that most interesting skills can only be taught to a high level of competence in person, and that appearances to the contrary have been skewed by the availability heuristic because of school, etc. This seems to me like a totally unobjectionable point and yet it keeps coming up, possibly as a crux even.
There seems to be a related thing about people consistently expecting inferential / experiential distances to be short, when again my prior is that there’s no reason to expect either of these things to be true most of the time. And a third related thing where people keep expecting skill at X to translate into skill at explaining X.
To be very, very clear about this: I am in fact not asking you to update strongly in favor of any of the claims I or others have made about Looking or related topics, because I in fact think not enough evidence has been produced for such strong updates, and that the strongest such evidence can really only be transmitted in person (or rather, that I currently lack the skill to produce satisfying evidence in any way other than in person). I view what I’ve been doing as proposing hypotheses that people can consider, experiment with, or reject in whatever way they want, and also defending the ability of other people to consider, experiment with, etc. these hypotheses without being labeled epistemically suspect.
In that case there was a misunderstanding somewhere. Here’s my understanding/summary of the course of our conversation: I said that explicit reasoning is useful for error correction. You said we can apply explicit reasoning to the data generated by Looking, and also check predictions for error correction. I said people who talk about Looking don’t tend to talk in terms of data, hypothesis and prediction. You said they may not want to use that frame. I asked what I should ask about instead (meaning how else can I try to encourage error correction, since that was the reason for wanting to ask about data and prediction in the first place). You said “Meet him in person and ask him to show you the way in which everyone has bodhicitta.” I interpreted that as a proposed alternative (or addition) to the norm of asking for data and predictions when someone proposes a new idea.
I guess the misunderstanding happened when I asked you “what should I do instead?” and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn’t want to use the frame of data, hypothesis and prediction. I think “Meet him in person and ask him to show you the way in which everyone has bodhicitta.” would not serve my purpose because 1) in most cases nobody would be willing to do that so most new ideas would go unchallenged and 2) it wouldn’t accomplish the goal of error correction if Looking causes most people to make the same errors.
Hopefully that clears up the misunderstanding, in which case do you want to try answering my question again?
Oh. Yes, that’s exactly what happened. Thanks for writing down that summary.
I don’t really have a good answer to this question (if I did, it would be “try to encourage Val to use the frame of data, hypothesis and prediction, just don’t expect him to do it all the time”) so I’ll just say some thoughts. In my version of the frame Val is using there’s something a bit screwy about thinking of “everyone has bodhicitta” as a belief / hypothesis that makes testable predictions. That’s not quite the data type of that assertion; it’s a data type imported over from the LW epistemic frame and it’s not entirely natural here.
Here’s a related example that might be easier to think about: consider the assertion “everyone wants to be loved.” Interpreted too literally, it’s easy to find counterexamples: some people will claim to be terrified of the idea of being loved (for example, because in their lives the people who love them, like their parents, have consistently hurt them), and other people will claim to not care one way or the other, and on some level they may even be right. But there’s a sense in which these are defensive adaptations built on top of an underlying desire to be loved, which is plausibly a human universal for sensible evo-psych reasons (if your tribe loves you they won’t kick you out, they’ll take care of you even if you stop contributing temporarily because of sickness or injury, etc). And there’s an additional sense in which thinking in terms of this evo-psych model, while helpful as a sanity check, misses the point, because it doesn’t really capture the internal experience of being a human who wants to be loved, and seeing that internal experience from the outside as another human.
So one way to orient is that “everyone wants to be loved” is partially a hypothesis that makes testable predictions, suitably interpreted, but it’s also a particular choice of orienting towards other humans: choosing to pay attention to the level at which people want to be loved, as opposed to the level at which people will make all sorts of claims about their desire to be loved.
A related way of orienting towards it is that it’s a Focusing label for a felt sense, which is much closer to the data type of “everyone has bodhicitta” as I understand it. Said another way, it’s poetry. That doesn’t mean it doesn’t have epistemic content—a Val who realizes that everyone has bodhicitta anticipates somewhat different behavior from his fellow humans than a Val who doesn’t—but it does mean the epistemic content may be difficult to verbally summarize.
I agree that the situation is more complicated; I disagree that it is “much more complicated”. Yes, mathematicians rely on intuition to fill in the gaps in proofs and to seek out the errors in proofs. And yet, it is uncontroversial that having a proof should make you much more confident in a mathematical statement than just having an intuition. In reality, there is a spectrum that goes roughly “intuition that T is correct” → “informal argument for T” → “idea for how to prove T” → “sketch of a proof of T” → “unvetted proof of T” → “vetted, peer-reviewed proof of T” → “machine verifiable formal proof of T”.
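To make the far end of that spectrum concrete, here is a minimal sketch (assuming Lean 4; the particular theorem and its name are mine, chosen purely for illustration) of what a machine-verifiable formal proof looks like: if the file compiles, the kernel has checked the statement, and the verification step itself no longer leans on anyone’s intuition.

```lean
-- Commutativity of addition on the natural numbers.
-- If this compiles, Lean's kernel has verified the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```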
Have I actually tried what? As to why I believe this, I think I already gave an “explicit reasoning” argument; and, yes, my intuition and life experience confirm it, although this is not something that I can transmit to you directly.
This is the wrong way to look at it. Intuition and explicit reasoning are not two separate judges that give two separate verdicts, and combining them doesn’t mean averaging the results. The way it works is: when your intuition and reasoning disagree, you should try to understand why. You should pit them against each other and let them fight it out, until in the end you have something that resembles a system of formal arguments with intuition answering some of the sub-queries, and your reasoning and intuition both endorse the result. This is what I mean by “understanding on an intellectual level”.
I don’t insist that you only use explicit reasoning. Feel free to use metaphors, koans, poetry and whatnot. But you should also explain it with explicit reasoning.
Well, if you are saying “I don’t want to convince everyone or even the most” that’s your prerogative of course. I just feel that the point of this forum is trying to have discussions whose insights will percolate across the entire community. Also I am personally interested in understanding what Looking is about and I feel that the explanations given so far leave me somewhat confused (although this last attempt by Kaj was significant progress).
I don’t think this is true; intuition + explicit reasoning may have more of a certain kind of inside-view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.
I also don’t think this is as clear cut as you’re making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition → biased, explicit reasoning → unbiased.
Explicit reflection is indeed a powerful tool, but I think there’s a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) in a way that is illegible to others or even to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.
I don’t see it this way. I think that both intuition and explicit reasoning are relevant to both the inside view and the outside view. It’s just that the input of the inside view is the inner structure of the question and the input of the outside view is the reference category inside which the question resides. People definitely use the outside view in debates by communicating it verbally, which is hard to do with pure intuition. I think that ideally you should combine intuition with explicit reasoning and also combine the inside view with the outside view.
You can certainly have biases about these things, but these things can be regarded as coming from your intuition. You can think of it as analogous to P vs. NP: solving problems is hard, but verifying solutions is easy. To solve a problem you have to use intuition, but to verify the solution you rely more on explicit reasoning. And since verifying is so much easier, there is much less room for bias.
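To make that asymmetry vivid, here is a minimal sketch in Python (the toy CNF formula and the function names are mine, purely for illustration, not anything from the discussion above): checking a candidate assignment takes one linear pass over the clauses, while finding an assignment in general seems to require searching an exponential space.

```python
# Illustrative only: a toy CNF formula as a list of clauses, where each literal
# is (variable_index, wanted_truth_value).
from itertools import product

formula = [[(0, True), (1, False)],   # x0 or not x1
           [(0, False), (2, True)],   # not x0 or x2
           [(1, True), (2, True)]]    # x1 or x2

def verify(assignment, formula):
    # Linear pass: every clause must contain at least one satisfied literal.
    return all(any(assignment[var] == wanted for var, wanted in clause)
               for clause in formula)

def solve(formula, num_vars):
    # Brute-force search over all 2**num_vars assignments.
    for bits in product([False, True], repeat=num_vars):
        if verify(list(bits), formula):
            return list(bits)
    return None

print(solve(formula, 3))  # prints [False, False, True]
```

The point of the sketch is only the shape of the asymmetry: the verification step is short and mechanical, with little room for bias to creep in, while the solving step is where all the search (and, in the analogy, all the intuition) lives.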