What’s unsatisfying about Kaj’s original post above as an answer to this question?
I think it’s a step in the right direction, but I’m not sure that his explanation is correct, or that different people are even talking about the same thing when they say “Looking”.
The framing of this question feels off to me. Looking is a source of data, not answers. What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.
Take this example of Looking:
The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.
I had interpreted this to mean that you were getting the answer of “playing it well is Goodharting on truth-seeking” directly out of Looking. If that’s not the case, can you explain what the data was, and how that led you to the conclusion of “playing it well is Goodharting on truth-seeking”? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn’t be too hard to find, via our normal observations, intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)
But I’m not seeing people say “Here’s some data I gathered via Looking, which leads to hypothesis X and predictions Y; let’s test it by doing these experiments.” Instead they just say “I think X because of Looking,” like in the sentence I quoted above, or Val’s “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta”.
I had interpreted this to mean that you were getting the answer of “playing it well is Goodharting on truth-seeking” directly out of Looking. If that’s not the case, can you explain what the data was, and how that led you to the conclusion of “playing it well is Goodharting on truth-seeking”? (I think Goodharting is almost certainly true and unavoidable to some extent in any social situation, and it wouldn’t be too hard to find, via our normal observations, intuitions and explicit reasoning, specific forms of Goodharting on LW. What additional data does Looking provide?)
I don’t have a cached answer to this; Looking is preverbal, so I have to do a separate cognitive task of introspection to give a verbal answer to this question. (I’m also somewhat more confident than I was that I’m doing the thing that Kaj and Val call Looking but certainly not 100% confident. Maybe 90%.)
Okay, so here’s an analogy: when I was in 8th grade I read Atlas Shrugged, and it successfully invaded my mind and turned me into an objectivist for several months. I went around saying things like “gift-giving is immoral” (I also gave people gifts, and refused to notice this discrepancy) and feeling very smug. At some point it… wore off? And then I looked back on my behavior, and now that there wasn’t this “I said an objectivist thing which meant it was the best thing” thing getting in the way, I thought to myself, what the fuck have I been doing? Then I decided I was too incompetent to do philosophy and resolved to not try doing it again until I got more life experience or something.
The moment of objectivism wearing off is a bit like what it feels like to Look at the LW epistemic game. I’m seeing the same things I always saw, in some sense, but there’s a distorting thing that was getting in the way that’s gone now (according to me), which changes the frame I’m using to process and verbally label what I’m seeing. Those verbal labels, which I assign in a separate cognitive step that takes place after the Looking, are something like “oh, look, we’re a bunch of monkeys slinging words around while being terrified that some of the words will cause us to have false beliefs or something, whatever that even means, and meanwhile the set of monkeys most worried about this is essentially disjoint from the set of monkeys posting updates about what they’re actually doing in the world with their beliefs.”
Getting slightly closer to the data itself: I’ve been seeing examples of people making arguments that feel to me like motivated reasoning (this is not the Looking step; the Looking facilitates feeling this way, but it is not the same thing as feeling this way), in a way that feels similar to when people give fake justifications for their behavior in circles. When I introspect on the flavor of the motivated reasoning, I get “optimizing for accepting arguments that are outside-view defensible instead of optimizing for truth-seeking.” This is again not Looking, but it’s a thought I’ve been having since reading Inadequate Equilibria, and the Hero Licensing dialogue in particular.
Then I check all this against my explicit reasoning, which agrees that Goodharting is easy and the default outcome in situations like this. The obvious problem, according to my explicit reasoning, is that there’s no easy way to gain status on LW by being really right about things (as there would be if, say, a prediction market were explicitly a big and important part of LW culture); instead, the way you gain status is by getting other LWers to agree with you, or maybe by writing impressively in a certain way, which is very different.
The point is less that I couldn’t have arrived at this conclusion without Looking, and more that without Looking, it may never have occurred to me to even try (because maybe some part of me is worried that if I Look at the LW epistemic game it will become harder for me to play it, so I might lose status on LW, or something like that).
Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.
There’s a thing that can happen after you Look that you might call a “flash of insight”; you suddenly realize something in a way that feels similar to the way proofs-by-picture can cause you to suddenly see the truth of a mathematical fact. Of course this is an opaque process and you’d be justified in not trusting it in yourself and others, but in the same way that you’d be justified in not trusting your intuitions or the intuitions of others generally. That’s not specific to Looking.
“Everyone has bodhicitta,” to the extent that I understand what that means, does seem to me to be a hypothesis with testable predictions, although those predictions are somewhat subtle. Val does describe a few things after your quote that can be interpreted as such predictions. It’s also something else that’s less of a belief and more of a particular way of orienting towards people, again as far as I understand it.
I’m still not sure what exactly was the data that you got from Looking. You said previously “What you do with that data is up to you, and you can apply as much explicit reasoning as you want once you even have access to the additional data at all.” In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right? If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?
Framing things in terms of data, hypotheses, and predictions is a strong concession to the LW epistemic game that I am explicitly choosing to make right now for the sake of ease of communication, and not everyone’s going to make that choice all the time.
What’s the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn’t want to use that frame, what should I do instead?
Val does describe a few things after your quote that can be interpreted as such predictions.
I’m not seeing anything that looks like testable predictions. Can you spell them out?
In order to apply explicit reasoning to some data we have to verbally describe it or give it some sort of external encoding, right?
You can do less direct things, like having other nonverbal parts of your mind process the data, introspecting / Focusing to get some words out of those parts, and then doing explicit reasoning on the words.
If so, can you give a description or encoding of just the raw data (or the least processed data that you have access to) that you got from Looking?
I already tried to do that; the data gets processed into felt senses and I tried to give my Focusing labels for the felt senses. I probably didn’t do the best job but I don’t feel up to putting in the level of effort that feels like it would be necessary to do substantially better.
Here’s another analogy: if you’re face-blind, you’re getting the same raw sensory input from your eyes that everyone else is (up to variations between your eyes, whatever), but the part of most people’s minds explicitly dedicated to processing and recognizing faces is not active or at least is weak, so you can see a face and process it as “this face with this kind of eyes and this nose and this hair” where someone else would see the same face and process it as “Bob’s face.”
Looking is sort of like becoming less face-blind. (Only sort of, this is really not a great analogy.) And it’s unclear how one would go about communicating what’s different about your mind when this happens, other than “now it’s immediately clear to me that that’s Bob’s face, whereas before I would have had to use explicit reasoning to figure that out.”
What’s the proposed alternative to framing things this way, and how does one correct epistemic errors in that frame? For example if Val says “one clear thing I noticed when I first intentionally Looked is that everyone has bodhicitta” and I want to ask him about data and predictions, but he doesn’t want to use that frame, what should I do instead?
Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)
Edit: I misunderstood Wei Dai’s question; see below.
I don’t have a good verbal description of the alternative frame (nor do I have only one alternative frame), but the way you correct epistemic errors in it is to smash into the territory repeatedly.
(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)
I’m not seeing anything that looks like testable predictions. Can you spell them out?
What about this does not look like a testable prediction to you:
Meet him in person and ask him to show you the way in which everyone has bodhicitta. (Of course you are fully justified in finding this too expensive / risky to try.)
In practice, doesn’t that just translate to “shut up and don’t question it”?
(There’s an additional thing of just not worrying about epistemic errors as such very much. Tennis players don’t spend a lot of time asking themselves “but what if all of my beliefs about tennis are wrong tho?” because they just play a bunch of tennis and notice what works and what doesn’t instead, without ever explicitly thinking about their epistemics at all. This isn’t to say it might not benefit them to think about epistemics every once in a while, but it’s not the mode they primarily operate in.)
I guess it depends on what field you’re working in, so perhaps part of the disagreement here is caused by us coming from different backgrounds. I think in fields with short, strong feedback cycles like tennis and math, where epistemic errors aren’t very costly, you can afford to not worry about epistemic errors much and just depend on smashing into the territory for error correction. In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.
In principle we could have different sets of norms for different subject areas on LW, and “shut up and don’t question it” (or perhaps more charitably, “shut up and just try it”) could be acceptable for certain areas but not others. If that ends up happening I definitely want social epistemology itself to be an area where we worry a lot about epistemic errors.
What about this does not look like a testable prediction to you:
I was asking about how epistemic errors caused by Looking can be corrected. I think in that context “prediction” has to literally mean prediction, of a future observation, and not something that’s already known like people building monuments to honor lost loved ones.
In practice, doesn’t that just translate to “shut up and don’t question it”?
This seems really uncharitable, by far the least charitable you’ve been in this conversation so far (where I’ve generally been 100% happy with your behavior on the meta level). I have not asked you to shut up and I have not asked you not to question anything. You asked a question about what things look like in an alternative frame and I gave an honest answer from that frame; I don’t like being punished for answering the question you asked in the way you requested I answer it.
Edit: The above was based on a misunderstanding of Wei Dai’s question about what he should do instead; see below.
Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.
In other fields like computer security and philosophy, where feedback cycles are weak or long, worrying about epistemic errors is one of the only things keeping you sane.
Feedback cycles in circling are very short, although pretty noisy unless the facilitator is quite skilled. Feedback cycles in ordinary social interaction can also be very short, although even noisier.
I have not asked you to shut up and I have not asked you not to question anything.
To clarify, I wasn’t saying that you were doing either of those things. My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say “shut up and don’t question it”, namely that it would make it very hard to question certain conclusions and correct potential errors. (Again, I don’t think you’re doing this now, just proposing it as something that should be acceptable.)
Some things are just hard to transmit except in person, and there are plenty of totally unobjectionable examples of this phenomenon.
Some examples please? I honestly can’t think of anything I know that can only be transmitted in person.
My point was that you seemed to be proposing an epistemic norm whose practical effect would be similar to people being allowed to say “shut up and don’t question it”, namely that it would make it very hard to question certain conclusions and correct potential errors.
I don’t know that I was proposing an epistemic norm. What I did was tell you what interaction with the territory you would need to have in order to be able to understand a thing, in the same way that if we lived in a village where nothing was colored red and you asked me “what would I have to do to understand the ineffable nature of redness?” I might say “go over to the next village and ask to see their red thing.”
Some examples please? I honestly can’t think of anything I know that can only be transmitted in person.
Playing basketball? Carpentry? Singing? Martial arts? There are plenty of physical skills you could try teaching online; you probably wouldn’t get very far trying to teach them via text, and probably somewhat farther via video, but in-person instruction, especially because it allows for substantial interaction and short feedback cycles, is really hard to replace.
I am consistently surprised at how different my intuitions on this topic are from the people I’ve been disagreeing with here. I have a pretty strong prior that most interesting skills can only be taught to a high level of competence in person, and that appearances to the contrary have been skewed by the availability heuristic because of school, etc. This seems to me like a totally unobjectionable point and yet it keeps coming up, possibly as a crux even.
There seems to be a related thing about people consistently expecting inferential / experiential distances to be short, when again my prior is that there’s no reason to expect either of these things to be true most of the time. And a third related thing where people keep expecting skill at X to translate into skill at explaining X.
To be very, very clear about this: I am in fact not asking you to update strongly in favor of any of the claims I or others have made about Looking or related topics, because I in fact think not enough evidence has been produced for such strong updates, and that the strongest such evidence can really only be transmitted in person (or rather, that I currently lack the skill to produce satisfying evidence in any way other than in person). I view what I’ve been doing as proposing hypotheses that people can consider, experiment with, or reject in whatever way they want, and also defending the ability of other people to consider, experiment with, etc. these hypotheses without being labeled epistemically suspect.
I don’t know that I was proposing an epistemic norm.
In that case there was a misunderstanding somewhere. Here’s my understanding/summary of the course of our conversation: I said that explicit reasoning is useful for error correction. You said we can apply explicit reasoning to the data generated by Looking, and also check predictions for error correction. I said people who talk about Looking don’t tend to talk in terms of data, hypothesis and prediction. You said they may not want to use that frame. I asked what I should ask about instead (meaning: how else can I try to encourage error correction, since that was the reason for wanting to ask about data and prediction in the first place). You said “Meet him in person and ask him to show you the way in which everyone has bodhicitta.” I interpreted that as a proposed alternative (or addition) to the norm of asking for data and predictions when someone proposes a new idea.
I guess the misunderstanding happened when I asked you “what should I do instead?” and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn’t want to use the frame of data, hypothesis and prediction. I think “Meet him in person and ask him to show you the way in which everyone has bodhicitta” would not serve my purpose because 1) in most cases nobody would be willing to do that, so most new ideas would go unchallenged, and 2) it wouldn’t accomplish the goal of error correction if Looking causes most people to make the same errors.
Hopefully that clears up the misunderstanding, in which case do you want to try answering my question again?
I guess the misunderstanding happened when I asked you “what should I do instead?” and you interpreted that as asking how can I understand Looking and bodhicitta, but what I actually meant was how can I encourage error correction in case Val was wrong about everyone having bodhicitta, and he doesn’t want to use the frame of data, hypothesis and prediction.
Oh. Yes, that’s exactly what happened. Thanks for writing down that summary.
I don’t really have a good answer to this question (if I did, it would be “try to encourage Val to use the frame of data, hypothesis and prediction, just don’t expect him to do it all the time”) so I’ll just say some thoughts. In my version of the frame Val is using, there’s something a bit screwy about thinking of “everyone has bodhicitta” as a belief / hypothesis that makes testable predictions. That’s not quite the data type of that assertion; “belief / hypothesis” is a data type imported over from the LW epistemic frame, and it’s not entirely natural here.
Here’s a related example that might be easier to think about: consider the assertion “everyone wants to be loved.” If you interpret it too literally, it’s easy to find counterexamples: some people will claim to be terrified of the idea of being loved (for example, because in their lives the people who love them, like their parents, have consistently hurt them), and other people will claim to not care one way or the other, and on some level they may even be right. But there’s a sense in which these are defensive adaptations built on top of an underlying desire to be loved, which is plausibly a human universal for sensible evo-psych reasons (if your tribe loves you they won’t kick you out, and they’ll take care of you even if you stop contributing temporarily because of sickness or injury, etc.). And there’s an additional sense in which thinking in terms of this evo-psych model, while helpful as a sanity check, misses the point, because it doesn’t really capture the internal experience of being a human who wants to be loved, and of seeing that internal experience from the outside as another human.
So one way to orient is that “everyone wants to be loved” is partially a hypothesis that makes testable predictions, suitably interpreted, but it’s also a particular choice of orienting towards other humans: choosing to pay attention to the level at which people want to be loved, as opposed to the level at which people will make all sorts of claims about their desire to be loved.
A related way of orienting towards it is that it’s a Focusing label for a felt sense, which is much closer to the data type of “everyone has bodhicitta” as I understand it. Said another way, it’s poetry. That doesn’t mean it doesn’t have epistemic content—a Val who realizes that everyone has bodhicitta anticipates somewhat different behavior from his fellow humans than a Val who doesn’t—but it does mean the epistemic content may be difficult to verbally summarize.