It’s totally valid to take a perspective in which an AI trained to play Tetris “doesn’t want to play good Tetris, it just searches for plans that correspond to good Tetris.”
Or even that an AI trained to navigate and act in the real world “doesn’t want to navigate the real world, it just searches for plans that do useful real-world things.”
But it’s also a valid perspective to say “you know, the AI that’s trained to navigate the real world really does want the things it searches for plans to achieve.” It’s just semantics in the end.
But! Be careful about switching perspectives without realizing it. When you take one perspective on an AI, and you want to compare it to a human, you should keep applying that same perspective!
From the perspective where the real-world-navigating AI doesn’t really want things, humans don’t really want things either. They’re merely generating a series of outputs that they think will constitute a good plan for moving their bodies.
Everything is a matter of perspective.
Thanks, that’s a really helpful framing!