I had a different conversation with Robin in the draft documents that I think was very relevant, but I can’t find that one either. Blerg.
Anyway, “the thing that resists expectation” is my current best way of identifying “the territory”, at least inside my own head. This has been true since gyroscopes and my study of fabricated options. I think the takeaway from that conversation I can’t find was something like, “The ease with which your incorrect expectations encounter resistance-that-you-perceive-as-resistance corresponds to the directness of your observation.” Which sounds to me a lot like what you’re saying here.
[I’m just gonna explicitly flag here that my confidence throughout this comment goes down, and my feelings of confusion and other signs that I’m swimming in pre-theoretic soup go up. I’d be surprised to find in a year that nothing I say here is outright false, even the things that are descriptions of my immediate experiences.]
I currently expect that ultimately, my concept of “the territory” needs to be one where math absolutely is a study of the territory, and if I’m using a version of the “territory” concept that lacks that property, I don’t have it right yet.
I’m not sure about math in general, but according to my current understanding, logic is a study of maps. Uh, hm, I can’t belief report the previous sentence. Let’s try again: According to my current understanding, formal semantics, as a field, is a study of maps. No, that’s not quite right either. (My understanding of) formal semantics is like, “We have these rules. What systems obey these rules? If some part of the world obeyed these rules, what might that part of the world look like?” So formal semantics makes maps of a special kind, a kind that follows sets of logical rules. It’s sort of backwards cartography. Instead of trying to draw pictures of the world, formal semantics is trying to draw pictures of how the world would be if it had certain properties. Like illustrating a novel, but for math kids instead of art kids.
Logic itself… Is chess a map? Is it even a drawing? I… do not think so, no! A logic is a set of formation and derivation rules, and what a logician does is study the behaviors of that collection of entities. A logician is like, “How do the thingies behave if you swap this derivation rule for that one? What are the relationships between the old set and the new set? What happens if this particular bananas connective gets to be part of a well-formed formula? Let’s find out!” (Maybe. I’m obviously talking about myself. I don’t know what real logicians do.) Maybe if you’re some kind of applied logician or something, you spend a lot of time trying to find logics that govern whatever bit of the world you’re interested in, like databases or something, and then you spend a bunch of time making maps so you can study the relationship of the logic and the map. (Is that what programmers are? Applied logicians in this sense?) But my point (it turns out) is that logicians are not mainly working with maps, any more than basket weavers are.
I’m even less clear on what math is than on what logic is, but my only attempts to understand what math is that I’ve ever been at all satisfied with have led me to think of it as a special case of logic. Math is what happens when you pick a logic, then pick a set of assumptions, then just leave those foundational assumptions in place forever until you realize there’s something you don’t like about them, at which point maybe you lead a revolt and start a new religion where you’re not just trivially allowed to pick out the smallest number from each of a set of sets of natural numbers, or whatever. And if math is some kind of special case of logic, then, seems like it doesn’t have any more to do with maps than basket weaving does, either?
Anyway, perhaps that was all nonsense, I really have no idea. Hope some of it bumps into you in useful ways.
I found the conversation with Robin. (Or, rather, Robin found the conversation. Thanks, Robin.)
Robin: [shared with permission]
[talking about a bit I was trying to re-write in the next essay, Direct Observation]
First paragraph feels great.
The last line, though, “The processing distance is finite,” immediately raises the question of “WOAH what would infinite / extremely large processing distance be...?” and I’m off thinking about that.
Second paragraph: makes sense, and I agree with you, but I have some expectation of… explanation? You’ve stated that the processing distance is shorter, but without further words on how you know / what that’s made out of.
(And how does a person know? By poking something and examining the sensation, and comparing it to the sensation of planning something? It might be a good moment to prompt the reader to try it and examine the difference?)
Logan:
I tried to write about this, but it was long and clumsy. I’m not sure if I’ll take a second pass at it or if I’ll just leave it out (especially since it’s a pretty new and not-very-investigated line of thought for me). But I thought you’d like to see what I have:
Here’s a test for estimating processing distance: If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?
I have some expectations around what kinds of foods a cat will enjoy. If you’d asked me two weeks ago, “What treats should I offer my cat while helping him become more comfortable around vacuum cleaners?” I’d have recommended little chunks of fresh chicken, beef, or seafood. That was my implicit belief for many years, while I did not actually know any cats.
Recently, while preparing to adopt a kitten, I looked into this question, and multiple websites confirmed that super high-value treats for cats include fresh poultry, meat, and seafood.
But when I actually got the kitten, reality resisted my expectations by literally spitting out chunks of fresh chicken, grilled steak, and shrimp, right in front of me. It would have been hard for me not to notice this happening. It may be true that cats in general enjoy fresh chicken; but it was immediately apparent to me, as I watched Adagio literally turn up his nose at the tasty morsel, that this one does not.
If I’d never offered Adagio chicken, I could have gone on believing indefinitely that he likes chicken. There would have been no opportunity to encounter resistance through my imagination, through books by cat experts, even through personal interactions with other cats. It was only by offering chicken directly to my real-life chicken-hating cat that I gained the opportunity to notice I was wrong.
When I imagined Adagio eating chicken, my imagination filled in a picture of him munching eagerly, even trying to swipe additional bits from my hand. The processing distance between cat palates and human imaginings of cats is quite large, so there’s little opportunity for expectation to encounter resistance. When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect that Adagio would like oranges, googling probably could have set me right. Cats supposedly hate citrus. (I offered him an orange just now, to be sure. He got up and backed away from it.)
But even if my priors are strong, and even if I’m very stubborn or invested in my expectations—if I’ve just spent a lot of money on fancy cat food featuring fresh chicken, for instance—I’d have to work pretty hard not to notice my surprise when I offer an actual chicken-hating cat an actual piece of chicken from my own hand.
[end attempted essay section, begin talking directly to Robin again]
so to answer your question, according to my working model, infinite processing distances are those that offer no opportunity whatsoever to encounter resistance from reality when your expectations are incorrect. the distance between snakes and cello sonatas is extremely large, because snakes don’t have ears. (not a perfect example, since they do have some other methods of sensing vibrations.) if a snake is somehow mistaken about the properties of a cello sonata, it will have to work extremely hard to even find an opportunity to discover this, nevermind making use of that opportunity. it might have to, i don’t know, learn to read musical notation, and even then a lot of potentially crucial information (like the overtones created by the shape of the cello body, or a particular artist’s interpretation) will not be available.
(i don’t know why i’m talking about snakes. deaf humans also exist.)
so it’s like, how tightly entangled is any given experience with the bit of reality you’re hoping to learn about from that experience? the totally sound-free experience of smelling a tasty cricket in front of your tongue is not very entangled with a musical performance, even if the performance is happening just meters away from you. a scratchy recording of that performance on an ancient beaten-up record heard decades later on the other side of the planet, much more so.
Robin:
(I did want to see this, yes!)
>”Here’s a test for estimating processing distance: ‘If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?’”
Oh, I like this. (Though, “the resistance of reality” throws a little ‘?’ for me when I look at it too closely; I can tell my felt-sense for “noticing surprise/confusion” is bubbling up to take the space of the phrase when I don’t look too closely, though, so I must think you mean something like that.)
>”When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect Adagio will like oranges, googling probably could have set me right.”
I feel torn on casting reading/hearing another’s words as being any closer to the territory than imagining, since it doesn’t actually provide an opportunity to encounter the ‘resistance of reality’. (Instead, it seems like straightforwardly an updating-my-map-with-your-map activity?)
That said… you can update your map using someone else’s map so that it’s easier to actually encounter something in the territory. (For example, you could’ve gone your whole life without offering an orange to a cat; but you did try, you moved to encounter that territory, ~because someone shared a map that said you would see a particular thing (cat-upset) at a particular place (the interaction between cat and orange)).
There is a sort of… temporal closeness there?
I feel like something important is solidifying for me here; something about how maps are actually useful as maps, as in they might point you toward the appropriate sections of territory for whatever it is you want to look at. Before I was just thinking of ‘maps’ as a useful metaphor for abstraction. Hm.
also… something something the ability to manipulate the territory as a sign that you are maximally close to it?
I typed that, then switched to reading A Process Model by Gendlin, and the following sentences popped out at me:
“As ongoing interaction with the world, our bodies are also a bodily knowing of the world. It is what Merleau-Ponty calls ‘the knowing body’ (le corps connaissant). Such knowing consists in much more than external observation; it is the body always already interacting directly with the situation it finds itself in.”
The (seeming) relevance made me laugh.
Where does ‘interaction’ fit in all of this anyway?
Logan:
it somehow fits into the heart of deep mastery [from “Knowing”]
[end of my conversation with Robin]
Ooh huh hmmm!
I had missed this before, but… I think achieving deep mastery is actually not the goal of {the part of my work I consider most important}. Or, to be more precise, it’s not the job of this part of my work to produce deep mastery. I think.
(The Knowing article describes deep mastery as “extensive familiarity, lots of factual knowledge, rich predictive and explanatory models, and also practical mastery in a wide variety of situations”.)
The job of this part of my work is to make contact at all, and to nurture this contact just enough that it becomes possible to deepen that contact with more ordinary methods, like actual mathematical models. (Which are also an important part of my work, but do not as much seem like the bottleneck.) This part of my work isn’t supposed to produce extensive familiarity, lots of factual knowledge, etc.
Metaphorically, it’s like an expedition that travels deep into a jungle trying to find a viable route for a road, or something. They never see the actual road–once they’ve just marked off where the road may one day go, they move on to the next project. Their work is actually quite different from that of the people coming in after, who cut the trees and build the bridges and pave the road. Those latter people always have the road behind them which connects them to civilization, so they can truck in supplies and it’s basically a normal construction job, if one at the frontier. The expedition people are on their own, and can’t carry enough food to last the whole expedition, so they need to live off the jungle.
i think this sequence is probably meant as a letter to aspiring rationalists in particular. to some extent, it’s like, “look if you’re trying to learn rationality and you’re not using methods that are aimed at deep mastery then you are doing it wrong”.
There’s a piece I think you’re missing with respect to maps/territory and math, which is what I’ll call the correspondence between the map and the territory. I’m surprised I haven’t seen this discussed on LR.
When you hold a literal map, there’s almost always only one correct way to hold it: North is North, you are here. But there are often multiple ways to hold a metaphorical map, at least if the map is math. To describe how to hold a map, you would say which features on the map correspond to which features in the territory. For example:
For a literal map, a correspondence would be fully described (I think) by (i) where you currently are on the map, (ii) which way is up, and (iii) what the scale of the map is. And also, if it’s not clear, what the marks on the map are trying to represent (e.g. “those are contour lines” or “that’s a badly drawn tree, sorry” or “no that sea serpent on that old map of the sea is just decoration”). This correspondence is almost always unique.
For the Addition map, the features on the map are (i) numbers and (ii) plus, so a correspondence has to say (i) what a number such as 2 means and (ii) what addition means. For example, you could measure fuel efficiency either in miles per gallon or gallons per mile. This gives two different correspondences between “addition on the positive reals” and “fuel efficiencies”, but “+” in the two correspondences means very different things. And this is just for fuel efficiency; there are a lot of correspondences of the Addition map.
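The mpg/gpm point can be made concrete with a small sketch (the function names and numbers here are mine, purely illustrative): in one correspondence, “+” totals the fuel two cars burn while driving a mile each; in the other, it totals the distance they cover while burning a gallon each.

```python
# Illustrative only; names and numbers are mine, not from the comment.
# Two correspondences of "addition on the positive reals" onto fuel
# efficiencies. The formal "+" is the same; the physical operation is not.

def gallons_for_one_mile_each(gpm_a: float, gpm_b: float) -> float:
    # Gallons-per-mile reading: total fuel burned when each car drives one mile.
    return gpm_a + gpm_b

def miles_on_one_gallon_each(mpg_a: float, mpg_b: float) -> float:
    # Miles-per-gallon reading: total distance covered when each car burns one gallon.
    return mpg_a + mpg_b

# A 25 mpg car is a 0.04 gpm car; a 50 mpg car is a 0.02 gpm car.
assert abs(gallons_for_one_mile_each(0.04, 0.02) - 0.06) < 1e-9  # gallons
assert miles_on_one_gallon_each(25, 50) == 75                    # miles
```

Same formal operation, but the two correspondences pick out very different things in the territory.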
The Sleeping Beauty paradox is paradoxical because it describes an unusual situation in which there are two different but perfectly accurate correspondences between probability theory and the (same) situation.
Even Logic has multiple correspondences. “∀x.ϕ” and “∃x.ϕ” mean, in various correspondences: (i) “ϕ holds for every x in this model” and “ϕ holds for some x in this model”; or (ii) “I win the two-player game in which I want to make ϕ be true and you get to pick the value of x right now” and “I win the two-player game in which I want to make ϕ be true and I get to pick the value of x right now”; or (iii) something about senders and receivers in the pi-calculus.
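Here is a toy sketch of correspondences (i) and (ii), evaluated on a tiny finite model of my own invention; the point is just that the model-theoretic reading and the game reading assign the same truth values.

```python
# Toy finite model (my invention) for two readings of the quantifiers.
domain = {0, 1, 2}
phi = lambda x: x > 0  # a sample predicate; false at x = 0

# (i) Model-theoretic reading: does phi hold for every / some x in the model?
forall_model = all(phi(x) for x in domain)
exists_model = any(phi(x) for x in domain)

# (ii) Game reading: for "forall", the opponent picks the worst x for me;
# for "exists", I pick the best x for myself.
forall_game = phi(min(domain, key=phi))  # opponent minimizes phi
exists_game = phi(max(domain, key=phi))  # I maximize phi

assert forall_model == forall_game == False  # the opponent can pick x = 0
assert exists_model == exists_game == True
```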
Maybe “correspondence” should be “interpretation”? Surely someone has talked about this, formally even, but I haven’t seen it.
On the map/territory distinction for math, I feel like a formal system instantiates a territory, operating on the system maps that territory, and correspondences between the system and things outside it are map-like.
the conversation with robin you quoted did feel relevant, but the parent comment felt like it was too focused in on math and thereby somewhat orthogonal to or missing the point of what i was trying to figure out. (the real thing i’m interested in isn’t even about math but about philosophical intuitions.)
this made me want to try to say the thing differently, this time using the concept of gears-level models:
(maybe everything i’m saying below is obvious already, but then again, maybe it’ll help.)
suppose that you are looking at a drawing of a simple mechanism, like the second image in the article above, which i’ll try to reproduce here:
if the drawing is detailed enough and you are able to understand the mechanism well enough, then you can reason out how the mechanism will behave in different circumstances; in the example, you can figure out that if the gear on the top-left turns counter-clockwise, the gear on the bottom-right will turn counter-clockwise as well.
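that kind of reasoning can be caricatured in a few lines of code. this is a toy model under an assumption of mine (that the gears form a simple chain in which each mesh point reverses the direction of rotation), not a claim about the actual mechanism in the drawing:

```python
# Toy model of reasoning about a chain of meshed gears.
# Assumption (mine, not from the article): the mechanism is a simple
# chain in which each gear meshes only with the next one, so each mesh
# point reverses the direction of rotation.

def final_direction(first_direction: str, num_gears: int) -> str:
    """Direction of the last gear in a meshed chain of num_gears gears."""
    flips = num_gears - 1  # one reversal per mesh point
    if flips % 2 == 0:
        return first_direction
    return "clockwise" if first_direction == "counter-clockwise" else "counter-clockwise"

# With an even number of mesh points, the last gear turns the same way
# as the first -- the model "resists" any other answer.
assert final_direction("counter-clockwise", 3) == "counter-clockwise"
assert final_direction("counter-clockwise", 2) == "clockwise"
```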
but you haven’t interacted with the mechanism at all! everything you’ve done has happened inside your map!
nevertheless, if you understand how gears work and you look at the drawing and think in detail about how each gear will turn, your model resists the idea that the bottom-right gear can turn clockwise while the top-left one turns counter-clockwise. it might be that your model of how gears work is wrong, or it might be that the drawing doesn’t accurately represent how the mechanism works, or you might be misunderstanding it, or you might be making a mistake while thinking about it. but if none of these is true, and the top-left gear turns counter-clockwise, then the bottom-right gear has to turn counter-clockwise as well.
when you work out from the drawing how the bottom-right gear has to turn, are you in contact, not just with any part of the territory, but with the part of the territory that is the actual physical gears? even though you are not physically interacting with the actual gears at all, just thinking about a map of the gears?
the way i’m thinking about it in the top-level comment is that since this process is able to resist misconceptions you may have, and is thereby able to bring your {anticipations / implicit models} about the physical gears more in line with reality, therefore yes it is “contact with that part of the territory” in the sense that is relevant to “knowledge of the territory takes patient and direct contact with the territory.”
i should note that the thing i’m ultimately interested in, namely the way i use philosophical intuitions in my work on agi alignment, isn’t even anywhere as detailed as a gears-level model. nevertheless, i still think that these intuitions cling sufficiently tightly to the territory that this work is well worth doing. in the ontology of my top-level comment, my work is betting on these intuitions being good enough to be able to resist and correct my implicit models of agi alignment, and to therefore constitute significant contact with this region of the territory.
something i don’t know how to reflect well in a comment like this, and think i should say explicitly, is that the game i’m playing here is not just to find a version of logan’s sentence that covers the kind of work i do. it is to find a version that does that and additionally does not lose what i thought i understood when i was taking “contact with the territory” to be the opposite of “it all happens in your map”, and therefore would have taken {thinking about a drawing of the mechanism} as not being in contact with the territory, since it consists entirely of thinking about a map.
for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
(i tried to say some more things about this here, but apparently it needs to gestate more first. it wouldn’t be very surprising if i ended up later disagreeing with my current model, since it’s not even particularly clear to me yet.)
>for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
yes i absolutely agree, and i think this intuition that we share (...or, the apparent similarity between our two intuitions?) is a lot of what’s behind the Knowing essay. something something deep mastery.
i’m only saying one thing in this whole essay series, just from a bunch of different angles, and this bit of your comment for sure picks out one of the angles.
or maybe more importantly, if you’re trying to develop rationality, as an art, to be practiced by a community whose actions matter, and you aren’t somehow aimed at deep mastery, then you’re doing it wrong.
[edit: oh i think i somehow accidentally put this in the wrong sub thread]
>but the parent comment felt like it was too focused in on math
er, sorry, too focused in on math for it to help me with the thing i’m trying to figure out, in a way i was quickly able to recognize, i meant. i didn’t mean to assert that it was just too focused in on math for a comment, in some generic purpose-independent way! 😛
I had a different conversation with Robin in the draft documents that I think was very relevant, but I can’t find that one either. Blerg.
Anyway, “the thing that resists expectation” is my current best way of identifying “the territory”, at least inside my own head. This has been true since gyroscopes and my study of fabricated options. I think the takeaway from that conversation I can’t find was something like, “The ease with which your incorrect expectations encounter resistance-that-you-perceive-as-resistance corresponds to the directness of your observation.” Which sounds to me a lot like what you’re saying here.
[I’m just gonna explicitly flag here that my confidence throughout this comment goes down, and my feelings of confusion and other signs that I’m swimming in pre-theoretic soup go up. I’d be surprised to find in a year that nothing I say here is outright false, even the things that are descriptions of my immediate experiences.]
I currently expect that ultimately, my concept of “the territory” needs to be one where math absolutely is a study of the territory, and if I’m using a version of the “territory” concept that lacks that property, I don’t have it right yet.
I’m not sure about math in general, but according to my current understanding, logic is is a study of maps. Uh, hm, I can’t belief report the previous sentence. Let’s try again: According to my current understanding, formal semantics, as a field, is a study of maps. No that’s not quite right either. (My understanding of) formal semantics is like, “We have these rules. What systems obey these rules? If some part of the world obeyed these rules, what might that part of the world look like?” So formal semantics makes maps of a special kind, a kind that follow sets of logical rules. It’s sort of backwards cartography. Instead of trying to draw pictures of the world, formal semantics is trying to draw pictures of how the world would be if it had certain properties. Like illustrating a novel, but for math kids instead of art kids.
Logic itself… Is chess a map? Is it even a drawing? I… do not think so, no! A logic is a set of formulation and derivation rules, and what a logician does is study the behaviors of that collection of entities. A logician is like, “How do the thingies behave if you swap this derivation rule for that one? What are the relationships between the old set and the new set? What happens if this particular bananas connective gets to be part of a well formed formula? Let’s find out!” (Maybe. I’m obviously talking about myself. I don’t know what real logicians do.) Maybe if you’re some kind of applied logician or something, you spend a lot of time trying to find logics that govern whatever bit of the world you’re interested in, like databases or something, and then you spend a bunch of time making maps so you can study the relationship of the logic and the map. (Is that what programmers are? Applied logicians in this sense?) But my point (it turns out) is that logicians are not mainly working with maps, any more than basket weavers are.
I’m even less clear on what math is than on what logic is, but my only attempts to understand what math is that I’ve ever been at all satisfied with have lead me to think of it as a special case of logic. Math is what happens when you pick a logic, then pick a set of assumptions, then just leave those foundational assumptions in place forever until you realize there’s something you don’t like about them, at which point maybe you lead a revolt and start a new religion where you’re not just trivially allowed to pick out the smallest number from each of a set of sets of natural numbers, or whatever. And if math is some kind of special case of logic, then, seems like it doesn’t have any more to do with maps than basket weaving does, either?
Anyway, perhaps that was all nonsense, I really have no idea. Hope some of it bumps into you in useful ways.
I found the conversation with Robin. (Or, rather, Robin found the conversation. Thanks, Robin.)
Robin: [shared with permission]
[talking about a bit I was trying to re-write in the next essay, Direct Observation]
First paragraph feels great.
the the last line, though, “The processing distance is finite” immediately raises the question of “WOAH what would infinite / extremely large processing distance be...?” and I’m off thinking about that.
Second paragraph: makes sense, and I agree with you, but I have some expectation of… explanation? You’ve stated that the processing distance is shorter, but without further words on how you know / what that’s made out of.
(And how does a person know? By poking something and examining the sensation, and comparing it to the sensation of planning something? It might be a good moment to prompt the reader to try it and examine the difference?)
Logan:
I tried to write about this, but it was long and clumsy. I’m not sure if I’ll take a second pass at it, if I’ll just leave it out (especially since it’s a pretty new and not-very-investigated line of thought for me). But I thought you’d like to see what I have:
Here’s a test for estimating processing distance: If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?
I have some expectations around what kinds of foods a cat will enjoy. If you’d asked me two weeks ago, “What treats should I offer my cat while helping him become more comfortable around vacuum cleaners?” I’d recommend little chunks of fresh chicken, beef, or seafood. That was my implicit belief for many years, while I did not actually know any cats.
Recently, while preparing to adopt a kitten, I looked into this question, and multiple websites confirmed that super high-value treats for cats include fresh poultry, meat, and seafood.
But when I actually got the kitten, reality resisted my expectations by literally spitting out chunks of fresh chicken, grilled steak, and shrimp, right in front of me. It would have been hard for me not to notice this happening. It may be true that cats in general enjoy fresh chicken; but it was immediately apparent to me, as I watched Adagio literally turn up his nose at the tasty morsel, that this one does not.
If I’d never offered Adagio chicken, I could have gone on believing indefinitely that he likes chicken. There would have been no opportunity to encounter resistance through my imagination, through books by cat experts, even through personal interactions with other cats. It was only by offering chicken directly to my real-life chicken-hating cat that I gained the opportunity to notice I was wrong.
When I imagined Adagio eating chicken, my imagination filled in a picture of him munching eagerly, even trying to swipe additional bits from my hand. The processing distances between cat palates and human imaginings of cats is quite large, so there’s little opportunity for expectation to encounter resistance. When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect Adagio will like oranges, googling probably could have set me right. Cats supposedly hate citrus. (I offered him an orange just now, to be sure. He got up and backed away from it.)
But even if my priors are strong, and even if I’m very stubborn or invested in my expectations—if I’ve just spent a lot of money on fancy cat food featuring fresh chicken, for instance—I’d have to work pretty hard not to notice my surprise when I offer an actual chicken-hating cat an actual piece of chicken from my own hand.
[end attempted essay section, begin talking directly to Robin again]
so to answer your question, according to my working model, infinite processing distances are those that offer no opportunity whatsoever to encounter resistance from reality when your expectations are incorrect. the distance between snakes and cello sonatas is extremely large, because snakes don’t have ears. (not a perfect example, since they do have some other methods of sensing vibrations.) if a snake is somehow mistaken about the properties of a cello sonata, it will have to work extremely hard to even find an opportunity to discover this, nevermind making use of that opportunity. it might have to, i don’t know, learn to read musical notation, and even then a lot of potentially crucial information (like the overtones created by the shape of the cello body, or a particular artist’s interpretation) will not be available.
(i don’t know why i’m talking about snakes. deaf humans also exist.)
so it’s like, how tightly entangled is any given experience with the bit of reality you’re hoping to learn about from that experience? the totally sound-free experience of smelling a tasty cricket in front of your tongue is not very entangled with a musical performance, even if the performance is happening just meters away from you. a scratchy recording of that performance on an ancient beaten-up record heard decades later on the other side of the planet, much more so.
Robin:
(I did want to see this, yes!)
>”Here’s a test for estimating processing distance: ‘If your expectations happened to be incorrect, how easy would it be to feel the resistance of reality?’”
Oh, I like this. (Though, “the resistance of reality” throws a little ‘?’ For me when I look at it too closely; I can tell my felt-sense for “noticing surprise/confusion” is bubbling up to take the space of the phrase when I don’t look too closely, though, so I must think you mean something like that.)
>”When I read accounts of cats, the distance is a bit less, because those words are downstream of observation of actual cats; if I’d been so wrong as to expect Adagio will like oranges, googling probably could have set me right.”
I feel torn on casting reading/hearing another’s words as being any closer to the territory than imagining, since it doesn’t actually provide an opportunity to encounter the ‘resistance of reality’. (Instead, it seems like straightforwardly an updating-my-map-with-your-map activity?)
That said… you can update your map using someone else’s map so that it’s easier to actually encounter something in the territory. (For example, you could’ve gone your whole life without offering an orange to a cat; but you did try, you moved to encounter that territory, ~because someone shared a map that said you would see a particular thing (cat-upset) at a particular place (the interaction between cat and orange)).
There is a sort of… temporal closeness there?
I feel like something important is solidifying for me here; something about how maps are actually useful as maps, as in they might point you toward the appropriate sections of territory for whatever it is you want to look at. Before I was just thinking of ‘maps’ as a useful metaphor for abstraction. Hm.
also… something something the ability to manipulate the territory as a sign that you are maximally close to it?
I typed that, then switched to reading A Process Model by Gendlin, and the following sentences popped out at me:
“As ongoing interaction with the world, our bodies are also a bodily knowing of the world. It is what Merleau-Ponty calls ‘the knowing body’ (le corps connaissant). Such knowing consists in much more than external observation; it is the body always already interacting directly with the situation it finds itself in.”
The (seeming) relevance made me laugh.
Where does ‘interaction’ fit in all of this anyway?
Logan:
it somehow fits into the heart of deep mastery [from “Knowing”]
[end of my conversation with Robin]
Ooh huh hmmm!
I had missed this before, but… I think achieving deep mastery is actually not the goal of {the part of my work I consider most important}. Or, to be more precise, it’s not the job of this part of my work to produce deep mastery. I think.
(The Knowing article describes deep mastery as “extensive familiarity, lots of factual knowledge, rich predictive and explanatory models, and also practical mastery in a wide variety of situations”.)
The job of this part of my work is to make contact at all, and to nurture this contact just enough that it becomes possible to deepen that contact with more ordinary methods, like actual mathematical models. (Which are also an important part of my work, but do not as much seem like the bottleneck.) This part of my work isn’t supposed to produce extensive familiarity, lots of factual knowledge, etc.
Metaphorically, it’s like an expedition that travels deep into a jungle trying to find a viable route for a road, or something. They never see the actual road–once they’ve just marked off where the road may one day go, they move on to the next project. Their work is actually quite different from that of the people coming in after, who cut the trees and build the bridges and pave the road. Those latter people always have the road behind them which connects them to civilization, so they can truck in supplies and it’s basically a normal construction job, if one at the frontier. The expedition people are on their own, and can’t carry enough food to last the whole expedition, so they need to live off the jungle.
That… probably explains some of my confusion.
yeah that makes sense.
i think this sequence is probably meant as a letter to aspiring rationalists in particular. to some extent, it’s like, “look if you’re trying to learn rationality and you’re not using methods that are aimed at deep mastery then you are doing it wrong”.
There’s a piece I think you’re missing with respect to maps/territory and math, which is what I’ll call the correspondence between the map and the territory. I’m surprised I haven’t seen this discussed on LR.
When you hold a literal map, there’s almost always only one correct way to hold it: North is North, you are here. But there are often multiple ways to hold a metaphorical map, at least if the map is math. To describe how to hold a map, you would say which features on the map correspond to which features in the territory. For example:
For a literal map, a correspondence would be fully described (I think) by (i) where you currently are on the map, (ii) which way is up, and (iii) what the scale of the map is. And also, if it’s not clear, what the marks on the map are trying to represent (e.g. “those are contour lines” or “that’s a badly drawn tree, sorry” or “no that sea serpent on that old map of the sea is just decoration”). This correspondence is almost always unique.
For the Addition map, the features on the map are (i) numbers and (ii) plus, so a correspondence has to say (i) what a number such as 2 means and (ii) what addition means. For example, you could measure fuel efficiency either in miles per gallon or gallons per mile. This gives two different correspondences between “addition on the positive reals” and “fuel efficiencies”, but “+” in the two correspondences means very different things. And this is just for fuel efficiency; there are a lot of correspondences of the Addition map.
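To make the two correspondences concrete, here is a sketch in Python. (The two cars and their numbers are invented for illustration; nothing here is from the original discussion.)

```python
# Two hypothetical cars driving the same route together.
mpg = [25.0, 50.0]                # the same facts, stated in miles per gallon
gpm = [1.0 / x for x in mpg]      # ...and stated in gallons per mile

# Under the gallons-per-mile correspondence, "+" means "total fuel the
# fleet burns per mile of route" -- a real physical quantity:
fleet_gpm = sum(gpm)              # 0.04 + 0.02 = 0.06 gallons per mile

# Under the miles-per-gallon correspondence, "+" does NOT mean that:
naive_mpg = sum(mpg)              # 75 -- not the efficiency of anything here

# The fleet's actual combined efficiency comes out as a harmonic-style
# combination instead:
fleet_mpg = 1.0 / fleet_gpm       # route-miles per fleet-gallon, about 16.67
```

Same abstract Addition map in both cases; what differs is which features of the world “2” and “+” are taken to correspond to.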
The Sleeping Beauty paradox is paradoxical because it describes an unusual situation in which there are two different but perfectly accurate correspondences between probability theory and the (same) situation.
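A quick way to see the two correspondences is to simulate the setup (a sketch, assuming the standard protocol: Beauty is woken once if the coin lands heads, twice if tails):

```python
import random

random.seed(0)
N = 100_000
experiments_heads = 0
awakenings = 0
awakenings_heads = 0

for _ in range(N):
    heads = random.random() < 0.5
    experiments_heads += heads
    # Beauty is woken once on heads, twice on tails:
    n_wakes = 1 if heads else 2
    awakenings += n_wakes
    awakenings_heads += n_wakes if heads else 0

# Correspondence 1: sample points are experiments -> P(heads) comes out ~1/2.
p_per_experiment = experiments_heads / N
# Correspondence 2: sample points are awakenings -> P(heads) comes out ~1/3.
p_per_awakening = awakenings_heads / awakenings
```

Both correspondences apply probability theory correctly; they just map its sample space onto different features of the situation.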
Even Logic has multiple correspondences. “∀x.ϕ” and “∃x.ϕ” mean, in various correspondences: (i) “ϕ holds for every x in this model” and “ϕ holds for some x in this model”; or (ii) “I win the two-player game in which I want to make ϕ be true and you get to pick the value of x right now” and “I win the two-player game in which I want to make ϕ be true and I get to pick the value of x right now”; or (iii) something about senders and receivers in the pi-calculus.
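Here is a sketch of correspondences (i) and (ii) over a small finite model (the domain and the predicates are invented for illustration):

```python
# A finite model: the domain is a small set of integers.
domain = range(1, 6)
phi = lambda x: x * x >= x    # true for every x in this domain
psi = lambda x: x > 4         # true only for x = 5

# Correspondence (i): quantifiers as "holds for every / some x in the model".
forall_i = all(phi(x) for x in domain)
exists_i = any(psi(x) for x in domain)

# Correspondence (ii): quantifiers as two-player games. For "forall", the
# opponent picks x trying to falsify the predicate; I win iff no pick
# falsifies it. For "exists", I pick x trying to verify it; I win iff some
# pick verifies it.
def verifier_wins_forall(pred):
    return not any(not pred(x) for x in domain)  # opponent has no winning pick

def verifier_wins_exists(pred):
    return any(pred(x) for x in domain)          # I have a winning pick
```

On finite models the two correspondences assign every formula the same truth value; they just read the symbols onto different parts of the world (satisfaction in a structure vs. winning strategies in a game).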
Maybe “correspondence” should be “interpretation”? Surely someone has talked about this, formally even, but I haven’t seen it.
On the map/territory distinction for math, I feel like a formal system instantiates a territory, operating on the system maps that territory, and correspondences between the system and things outside it are map-like.
the conversation with robin you quoted did feel relevant, but the parent comment felt like it was too focused in on math and thereby somewhat orthogonal to or missing the point of what i was trying to figure out. (the real thing i’m interested in isn’t even about math but about philosophical intuitions.)
this made me want to try to say the thing differently, this time using the concept of gears-level models:
https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding
(maybe everything i’m saying below is obvious already, but then again, maybe it’ll help.)
suppose that you are looking at a drawing of a simple mechanism, like the second image in the article above, which i’ll try to reproduce here:
if the drawing is detailed enough and you are able to understand the mechanism well enough, then you can reason out how the mechanism will behave in different circumstances; in the example, you can figure out that if the gear on the top-left turns counter-clockwise, the gear on the bottom-right will turn counter-clockwise as well.
but you haven’t interacted with the mechanism at all! everything you’ve done has happened inside your map!
nevertheless, if you understand how gears work and you look at the drawing and think in detail about how each gear will turn, your model resists the idea that the bottom-right gear can turn clockwise while the top-left one turns counter-clockwise. it might be that your model of how gears work is wrong, or it might be that the drawing doesn’t accurately represent how the mechanism works, or you might be misunderstanding it, or you might be making a mistake while thinking about it. but if none of these is true, and the top-left gear turns counter-clockwise, then the bottom-right gear has to turn counter-clockwise as well.
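(a toy sketch of that “has to”: in a plain chain of meshed gears, each mesh flips the direction of rotation, so the last gear’s direction is pinned down by the first gear’s direction and the parity of the chain. the drawing in the article may be a more complicated mechanism than a plain chain, so treat this as an illustration of the kind of resistance, not as a model of that particular drawing.)

```python
# In a simple chain of meshed gears, each mesh reverses the direction of
# rotation, so the last gear's direction is fully determined by the first
# gear's direction and the number of gears in the chain.
def last_gear_direction(first_direction, n_gears):
    """first_direction is 'cw' or 'ccw'; gears 1..n_gears mesh in a chain."""
    flips = n_gears - 1
    if flips % 2 == 0:
        return first_direction
    return 'cw' if first_direction == 'ccw' else 'ccw'
```

given the model, there is simply no input for which the chain can do anything else; that inability is the “resistance”.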
when you work out from the drawing how the bottom-right gear has to turn, are you in contact, not just with any part of the territory, but with the part of the territory that is the actual physical gears? even though you are not physically interacting with the actual gears at all, just thinking about a map of the gears?
the way i’m thinking about it in the top-level comment is that since this process is able to resist misconceptions you may have, and is thereby able to bring your {anticipations / implicit models} about the physical gears more in line with reality, therefore yes it is “contact with that part of the territory” in the sense that is relevant to “knowledge of the territory takes patient and direct contact with the territory.”
“a map that clings super tightly to the territory” is a phrase from duncan’s reply to my comment on “The Territory” which seems to me to describe gears-level models well.
– # –
i should note that the thing i’m ultimately interested in, namely the way i use philosophical intuitions in my work on agi alignment, isn’t anywhere near as detailed as a gears-level model. nevertheless, i still think that these intuitions cling sufficiently tightly to the territory that this work is well worth doing. in the ontology of my top-level comment, my work is betting on these intuitions being good enough to be able to resist and correct my implicit models of agi alignment, and to therefore constitute significant contact with this region of the territory.
something i don’t know how to reflect well in a comment like this, and think i should say explicitly, is that the game i’m playing here is not just to find a version of logan’s sentence that covers the kind of work i do. it is to find a version that does that and additionally does not lose what i thought i understood when i was taking “contact with the territory” to be the opposite of “it all happens in your map”, and therefore would have taken {thinking about a drawing of the mechanism} as not being in contact with the territory, since it consists entirely of thinking about a map.
for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
(i tried to say some more things about this here, but apparently it needs to gestate more first. it wouldn’t be very surprising if i ended up later disagreeing with my current model, since it’s not even particularly clear to me yet.)
>for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
yes i absolutely agree, and i think this intuition that we share (...or, the apparent similarity between our two intuitions?) is a lot of what’s behind the Knowing essay. something something deep mastery.
i’m only saying one thing in this whole essay series, just from a bunch of different angles, and this bit of your comment for sure picks out one of the angles.
or maybe more importantly, if you’re trying to develop rationality, as an art, to be practiced by a community whose actions matter, and you aren’t somehow aimed at deep mastery, then you’re doing it wrong.
[edit: oh i think i somehow accidentally put this in the wrong sub thread]
er, sorry, too focused in on math for it to help me with the thing i’m trying to figure out, in a way i was quickly able to recognize, i meant. i didn’t mean to assert that it was just too focused in on math for a comment, in some generic purpose-independent way! 😛