I think we should be at least mildly concerned about accepting this view of agents, in which the agent’s internal information processes are separated by a bright red line from the processes happening in the outside world. Yes, I know you accept that they are both grounded in the same physics and that they interact with one another via ordinary causation, but if you believe that bridging rules are truly inextricable from AI, then you really must completely delineate this set of internal information processing phenomena from the external world. Otherwise, if you do not delineate anything, what are you bridging?
So this delineation seems difficult to remove, and I don’t know how to collapse it, but it’s at least worth asking whether this is the point at which we should start saying “hmmmm...”
One way to start probing this question (though it does not come close to resolving the issue) is to think about an AI already in motion. Let’s imagine an AI built out of gears and pulleys, which is busy sensing, optimizing, and acting in the world, as all well-behaved AIs are known to do. In what sense can we delineate a set of “internal information processing phenomena” within this AI from the external world? Perhaps such a delineation would exist in our model of the AI, where it would be expedient indeed to postulate that the gears and pulleys are really just implementing some advanced optimization routine. But that delineation sounds much more like something that belongs in the map than in the territory.
What I’m suggesting is that starting with the assumption of an internal sensory world delineated by a bright red line from the external world should at least give us some pause.