the conversation with robin you quoted did feel relevant, but the parent comment felt like it was too focused in on math and thereby somewhat orthogonal to or missing the point of what i was trying to figure out. (the real thing i’m interested in isn’t even about math but about philosophical intuitions.)
this made me want to try to say the thing differently, this time using the concept of gears-level models:
https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding
(maybe everything i’m saying below is obvious already, but then again, maybe it’ll help.)
suppose that you are looking at a drawing of a simple mechanism, like the second image in the article above, which i’ll try to reproduce here:

[image from “Gears in Understanding”: a drawing of a simple mechanism made of interlocking gears]
if the drawing is detailed enough and you are able to understand the mechanism well enough, then you can reason out how the mechanism will behave in different circumstances; in the example, you can figure out that if the gear on the top-left turns counter-clockwise, the gear on the bottom-right will turn counter-clockwise as well.
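(a toy way to see how mechanical this inference is: below is a minimal sketch in python, under the assumption that the mechanism is a single chain of meshed gears; the drawing in the linked post may be arranged differently, and the function here is mine, purely for illustration. each pair of meshed gears turns in opposite directions, so the last gear’s direction depends only on the parity of the chain length.)

```python
# a minimal sketch of the gear-direction inference, assuming the
# mechanism is a single chain of meshed gears. (the drawing in the
# linked post may be arranged differently; this helper is purely
# illustrative, not from the post.)

def final_direction(first_direction: str, num_gears: int) -> str:
    """direction of the last gear in a chain of meshed gears.

    each meshing flips the direction of rotation, so the last
    gear's direction depends only on the parity of the number
    of meshings (num_gears - 1).
    """
    flips = num_gears - 1
    if flips % 2 == 0:
        return first_direction
    return ("clockwise" if first_direction == "counter-clockwise"
            else "counter-clockwise")

# an odd-length chain preserves the first gear's direction:
assert final_direction("counter-clockwise", 3) == "counter-clockwise"
# an even-length chain reverses it:
assert final_direction("counter-clockwise", 4) == "clockwise"
```

(notice that the whole model here is just the parity rule; the gear sizes and positions in the drawing turn out to be irrelevant to this particular question.)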
but you haven’t interacted with the mechanism at all! everything you’ve done has happened inside your map!
nevertheless, if you understand how gears work and you look at the drawing and think in detail about how each gear will turn, your model resists the idea that the bottom-right gear can turn clockwise while the top-left one turns counter-clockwise. it might be that your model of how gears work is wrong, or it might be that the drawing doesn’t accurately represent how the mechanism works, or you might be misunderstanding it, or you might be making a mistake while thinking about it. but if none of these is true, and the top-left gear turns counter-clockwise, then the bottom-right gear has to turn counter-clockwise as well.
when you work out from the drawing how the bottom-right gear has to turn, are you in contact, not just with any part of the territory, but with the part of the territory that is the actual physical gears? even though you are not physically interacting with the actual gears at all, just thinking about a map of the gears?
the way i’m thinking about it in the top-level comment is that this process is able to resist misconceptions you may have, and is thereby able to bring your {anticipations / implicit models} about the physical gears more in line with reality; so yes, it is “contact with that part of the territory” in the sense that is relevant to “knowledge of the territory takes patient and direct contact with the territory.”
“a map that clings super tightly to the territory” is a phrase from duncan’s reply to my comment on “The Territory”, and it seems to me to describe gears-level models well.
– # –
i should note that the thing i’m ultimately interested in, namely the way i use philosophical intuitions in my work on agi alignment, isn’t anywhere near as detailed as a gears-level model. nevertheless, i still think that these intuitions cling sufficiently tightly to the territory that this work is well worth doing. in the ontology of my top-level comment, my work is betting on these intuitions being good enough to resist and correct my implicit models of agi alignment, and to therefore constitute significant contact with this region of the territory.
something i don’t know how to reflect well in a comment like this, and think i should say explicitly, is that the game i’m playing here is not just to find a version of logan’s sentence that covers the kind of work i do. it is to find a version that does that while also not losing what i thought i understood back when i took “contact with the territory” to be the opposite of “it all happens in your map”, and would therefore have counted {thinking about a drawing of the mechanism} as not being in contact with the territory, since it consists entirely of thinking about a map.
for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
(i tried to say some more things about this here, but apparently it needs to gestate more first. it wouldn’t be very surprising if i ended up later disagreeing with my current model, since it’s not even particularly clear to me yet.)
>for some reason i haven’t really figured out yet, it seemed really important for this to say that in order to be “contact with the territory”, an experience has to be able to resist and correct my {anticipations / implicit models}, not just my explicit verbal models.
yes i absolutely agree, and i think this intuition that we share (...or, the apparent similarity between our two intuitions?) is a lot of what’s behind the Knowing essay. something something deep mastery.
i’m only saying one thing in this whole essay series, just from a bunch of different angles, and this bit of your comment for sure picks out one of the angles.
or maybe more importantly, if you’re trying to develop rationality, as an art, to be practiced by a community whose actions matter, and you aren’t somehow aimed at deep mastery, then you’re doing it wrong.
[edit: oh i think i somehow accidentally put this in the wrong subthread]
> but the parent comment felt like it was too focused in on math

er, sorry, what i meant was that it was too focused in on math for it to help me with the thing i’m trying to figure out, in a way i was quickly able to recognize. i didn’t mean to assert that it was just too focused in on math for a comment, in some generic purpose-independent way! 😛