It’s worth noting that no reference to preferences has yet been made. That’s interesting because it suggests that there are both 0P-preferences and 1P-preferences. That intuitively makes sense, since I do care about both the actual state of the world and what kind of experiences I’m having.
Believing in 0P-preferences seems to be a map-territory confusion, an instance of the Tyranny of the Intentional Object. The robot can’t observe the grid in a way that isn’t mediated by its sensors. There’s no way for 0P-statements to enter into the robot’s decision loop, and accordingly act as something the robot can have preferences over, except by routing through 1P-statements. Instead of directly having a 0P-preference for “a square of the grid is red,” the robot would have to have a 1P-preference for “I believe that a square of the grid is red.”
It would be more precise to say that the robot would prefer to receive evidence that raises its degree of belief that a square of the grid is red.
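To make the routing claim concrete, here is a minimal Python sketch (all names, numbers, and functions are hypothetical illustrations, not taken from the post). The robot’s utility function takes its degree of belief as input rather than the hidden grid color, so the 0P fact can only affect the decision loop by first updating that 1P belief through a noisy sensor.

```python
# Toy illustration: the decision loop never touches the 0P fact directly;
# it only sees sensor readings and its own updated degree of belief.

import random

SENSOR_ACCURACY = 0.9  # assumed probability the sensor reports the true color


def sensor_reading(true_color: str) -> str:
    """Noisy sensor: reports the true color with probability SENSOR_ACCURACY."""
    if random.random() < SENSOR_ACCURACY:
        return true_color
    return "blue" if true_color == "red" else "red"


def bayes_update(prior_red: float, reading: str) -> float:
    """Update the robot's degree of belief that the square is red."""
    likelihood_red = SENSOR_ACCURACY if reading == "red" else 1 - SENSOR_ACCURACY
    likelihood_blue = 1 - SENSOR_ACCURACY if reading == "red" else SENSOR_ACCURACY
    evidence = likelihood_red * prior_red + likelihood_blue * (1 - prior_red)
    return likelihood_red * prior_red / evidence


def utility(belief_red: float) -> float:
    """A 1P-preference: the robot values being confident the square is red.
    Note the argument is the belief, not the (inaccessible) true color."""
    return belief_red


def decision_loop(true_color: str, steps: int = 5) -> float:
    belief_red = 0.5  # start maximally uncertain
    for _ in range(steps):
        # The only channel from the 0P fact into the loop is the sensor.
        reading = sensor_reading(true_color)
        belief_red = bayes_update(belief_red, reading)
    return utility(belief_red)


if __name__ == "__main__":
    random.seed(0)
    print(f"utility after observing a red square:  {decision_loop('red'):.3f}")
    print(f"utility after observing a blue square: {decision_loop('blue'):.3f}")
```

In this sketch, “preferring evidence that raises its degree of belief” just means the robot ranks outcomes by the belief its utility function receives, which is all the decision loop ever has access to.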