If I understand both your and shiminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”).
We wish to optimize these inputs according to whatever goal structure we have.
In order to do this, we need to construct models that predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call the accurate models “real”.
However, the term “real” holds no special ontological status, and models so labeled might later prove inaccurate or be replaced by better ones.
Thus, we have a perfectly functioning agent with no conception of (or need for) a territory: there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn't very useful for such an agent.
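For concreteness, here is one way such an agent might be sketched in code. This is only a toy illustration of the list above, not anyone's actual proposal; every name in it (PredictiveModel, InstrumentalistAgent, goal_value, and so on) is made up for the example.

```python
# A toy sketch of the agent described above: it keeps several candidate models
# of how actions affect inputs, scores them only by predictive accuracy, and
# acts to optimize predicted future inputs. Nothing in it refers to a
# "territory"; all names here are hypothetical, chosen just for illustration.

class PredictiveModel:
    """A candidate map: predicts the next input from the current input and an action."""

    def __init__(self, guess_fn):
        self.guess_fn = guess_fn
        self.hits = 0
        self.trials = 0

    def predict(self, current_input, action):
        return self.guess_fn(current_input, action)

    def update(self, current_input, action, observed_input):
        # Accuracy is the only sense in which a model counts as "real" on this view.
        self.trials += 1
        if self.predict(current_input, action) == observed_input:
            self.hits += 1

    @property
    def accuracy(self):
        return self.hits / self.trials if self.trials else 0.0


class InstrumentalistAgent:
    """Optimizes future inputs using whichever model is currently most accurate."""

    def __init__(self, models, goal_value, actions, initial_input):
        self.models = models          # competing maps; no separate "territory" anywhere
        self.goal_value = goal_value  # utility over inputs (the "goal structure")
        self.actions = actions
        self.current_input = initial_input

    def choose_action(self):
        best = max(self.models, key=lambda m: m.accuracy)
        return max(self.actions,
                   key=lambda a: self.goal_value(best.predict(self.current_input, a)))

    def observe(self, action, new_input):
        # Every model is rescored against the new input; "real" is just a label
        # for whichever one is winning so far.
        for model in self.models:
            model.update(self.current_input, action, new_input)
        self.current_input = new_input
```

Note that the sketch never needs a variable standing for what the inputs “really” come from; the most the agent could mean by “real” is “highest predictive accuracy so far”.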
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
You are right. I will give it a go. Just because it's obvious doesn't mean it shouldn't be made explicit.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does), it could end up producing insights much more difficult (if not actually impossible) to reach in normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the anti-pattern of demonizing the other side, and avoiding steelmanning arguments into forms that don't threaten your own arguments (since they would be threatening the other side's arguments, as it were).)
It seems to me to be one of the basic exercises in rationality, also known as playing Devil's advocate. However, Eliezer dislikes it for some reason, probably because he thinks it's too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one's own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, devil's advocates are usually pitted against people genuinely arguing their cause, not against other devil's advocates.
Yeah, I remember being surprised by that when reading the Sequences. He seemed to be describing acting as your own devil's advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still…
Wait, there are nonrealists other than shiminux here?
Beats me.
Actually, that’s just the model I was already using. I noticed it was shorter than Dave’s, so I figured it might be useful.