I think we’ve got some significant disagreements about “methodology of science” here. Much of this approach feels off to me, or at least concerning. I’ll try to write a bit more about this later (edit: I find myself tempted to write several things now...).
And I think trying to define “frame” more rigorously would be more likely to give people a new false sense that they understand what’s going on (a new “frame” to be locked into without noticing that you’re in a frame) than to do anything helpful.
E.g. defining “frame” more rigorously would let people know more specifically what you’re trying to talk about, help both you and them make more specific/testable predictions, and more quickly notice where the model fails in practice, etc. More precise definitions should lead to sticking your neck out more properly. Not that I’m an expert at doing this, but this feels like a better way to proceed, even at the risk that people get anchored on bad models. (I imagine a way to avoid this is to create, abandon, and iterate on lots of models, many of which are mistaken and get abandoned, so people learn the habit.)
This is actually a good example of the sort of failure I’m worried about if I defined “frame” too cleanly. If you focus on “within-gears mismatch”, I predict you’ll do worse at resolving disagreements that aren’t about that.
So it’s only my sense of what that word refers to when people say it that has been shifted, not my models, expectations, or predictions of the world. That seems fine in this context (I dislike eroding meanings of words with precise meanings via imprecise usage, but “frame” isn’t in that class). The second sentence here doesn’t seem related to the first. To your concern though, I say “maybe?”, but I have a greater concern that if you don’t focus on how to address a single specific type of frame clash well enough, you won’t learn to address any of them very well.
I think we’ve got some significant disagreements about “methodology of science” here.
...
E.g. defining “frame” more rigorously would let people know more specifically what you’re trying to talk about, help both you and them make more specific/testable predictions, and more quickly notice where the model fails in practice, etc.
I think I’m mostly just not trying to do science at this point.
Or rather: this is still at the “roughly point in the direction of the thing I’m even trying to understand, and get help gathering more data” point, rather than the “propose a hypothesis and make concrete predictions” point.
My models here are coming from maybe a total of… 10 datapoints, each of which had a lot of variables, and I don’t know which ones were the important ones. *I* don’t necessarily know what I mean by “frame” at this point, so being more specific just gives people an inaccurate representation of my own understanding.
Like, if you’re early humans trying to first figure out zoology, and you’re at the point where you say “I’m studying animals” and they ask “what are animals?”… sure, you can give them a best guess that will most likely be wrong (a la defining humans as featherless bipeds), but it’s more useful to just point at a few animals and say “things that are kinda like this.”
It might be useful to give “featherless biped”-type definitions to begin the process of refining them into something real. (And I’ve done that, in the form of “ways of seeing/thinking/communicating.”) But it seems quite important to notice that featherless-biped-type definitions are almost certainly wrong and should not be taken seriously except as an intellectual exercise.
I have a greater concern that if you don’t focus on how to address a single specific type of frame clash well enough, you won’t learn to address any of them very well.
I don’t think we have the luxury of doing this – each disagreement I’ve found was fairly unique, required learning a new set of things to notice and pay attention to, and stretched my conception of what it meant to disagree productively. I’ve partially resolved maybe 5 deep disagreements (though it’s still possible that many of my actions there were a waste of time or made things worse), and then not obviously made any progress on another 5.
FWIW, here’s a rough sense of where the sequence is going:
There are very different ways of seeing the world.
It’s a generally important life skill to understand when other people are seeing the world differently.
It is (probably) also an important life skill to be able to see the world in a few different ways. (Not necessarily any particular “few different” ways – there’s an experiential shift that comes from having had to learn to see a couple different ways, which I think is really necessary for resolving disagreement and conflict and coexisting.) Put another way: being able to “hold your frame as object.”
This seems really important for humanity generally (for cosmopolitan coexistence).
This seems differently important for the rationality community.
I think of LessWrong as “about the gears frame”, and that’s fine (and quite good). But the natural ways of seeing the gears frame tend to leave people sort of stuck or blind. LessWrong is about building a single, non-compartmentalized probabilistic model of the universe, but this needs to include the parts of the universe that gears-oriented people tend not to notice as readily.
It’s important to resolve the question “how do we have high epistemic standards about things that aren’t in the gears frame, or that can’t be made explicit enough to share between multiple people with different gears-frames?”
These seem pretty good, and I think your current approach might suffice for this.
I don’t understand the question in the last point. I am being intentionally stupid and simple: what reason do you have to guess/believe that epistemic standards would be harder to apply to non-gears frames?
The motivating example there is “how to have high epistemic standards around introspection, since introspection isn’t publicly verifiable, but is also fairly important to practical rationality.” I think this is at least somewhat hard, and, separate from being hard, it’s a domain that doesn’t have agreed-upon rules.
(I realize it might not be clear how that sentence followed from the previous point, since there’s nothing intrinsically non-gearsy about introspection. The issue is something like “at least some of the people exploring introspection techniques are also coming at it from fairly different frames, which can’t easily all communicate with each other. From the outside, it’s hard to tell the difference between ‘a person is wrong about facts’ and ‘a person’s frame is foreign.’”)
So the connection is: “The straightforward way to increase epistemological competence is to talk about beliefs in detail. In introspection it is hard to apply this method, because details can’t be effectively shared to get an understanding.” It seems to me it is not about gears-frames being special, but that frames have preconditions to get them to work, and an area that allows a lot of frames makes it hard to hit any one frame’s prerequisites.
I think I’m mostly just not trying to do science at this point.
“Methodology of science” was in quotes because I meant more generally “process for figuring things out.” I think we do have different leanings there, but they won’t be quick to sort out, mostly because (at least in my case) they’re more leanings and untested instincts. So mostly just flagging for now.
*I* don’t necessarily know what I mean by “frame” at this point, so being more specific just gives people an inaccurate representation of my own understanding.
I think there are ways to be very “precise about your uncertainty” though.
I don’t think we have the luxury of doing this – each disagreement I’ve found was fairly unique, required learning a new set of things to notice and pay attention to, and stretched my conception of what it meant to disagree productively.
Hmm, it does seem that naturally occurring debates will be all over the place, in a way that will make you want to study them all simultaneously.