Suppose we start our AI off with the intentional stance, where we have a high-level description of these human objects as agents with desires and plans, beliefs and biases and abilities and limitations.
What I mean when I say we need to “bridge the gap” is this: if we knew what we were doing, we could stipulate that some sets of human button-presses are more aligned with some complicated object “hDesires” than others, and that the robot should care about hDesires itself, where hDesires is the part of the intentional-stance description of the physical human that plays the functional role of desires.
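To make the shape of that stipulation a bit more concrete, here is a toy Python sketch. Everything in it (the `IntentionalStanceModel` class, the `h_desires` field, the `alignment_with_desires` function) is a hypothetical illustration of the structure, not a claim about how to actually build any of it; the deliberately unimplemented scoring function is exactly the gap in question.

```python
from dataclasses import dataclass

# Toy sketch only: all names here are hypothetical illustrations of the
# structure described above, not a real proposal or library.

@dataclass
class IntentionalStanceModel:
    """A high-level description of a physical human as an agent."""
    h_desires: dict    # the part playing the functional role of desires
    h_beliefs: dict    # beliefs, possibly mistaken
    h_biases: list     # systematic deviations from ideal reasoning
    h_abilities: list  # abilities and limitations

def alignment_with_desires(button_presses: list,
                           model: IntentionalStanceModel) -> float:
    """Stipulated, not learned: how well does this set of button-presses
    line up with the hDesires part of the model?  Writing this body is
    the part we don't know how to do."""
    raise NotImplementedError("this is the gap to be bridged")

def robot_objective(candidate_action, model: IntentionalStanceModel) -> float:
    """The robot is supposed to care about hDesires itself; button-presses
    are only evidence about it, not the thing being optimized."""
    raise NotImplementedError("depends on alignment_with_desires")
```

The point of the sketch is only the shape of the dependency: the button-presses feed an (unknown) stipulated relation to hDesires, and the robot's objective is defined in terms of hDesires rather than the presses themselves.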