Those are motivations, but they mostly have the type signature of "drives" rather than "goals".
I pursue interesting stuff because I'm curious. That doesn't even require me to have a concept of curiosity; it could in principle be steering me without my awareness. My planning process might use curiosity, but it isn't aligned with curiosity, in the sense that we don't (usually) make plans to maximize our curiosity. We just do what's interesting.
In contrast, social status is a concept that humans learn, and the planning process does look aligned with the status concept, in that (some) humans habitually make plans that are relatively well described as status-maximizing.
To put it another way: our status motivations are not straightforward adaptation-execution. They recruit general intelligence in service of a learned concept, in much the way we would want an AGI to be aligned with a concept like the Good or corrigibility.
Romantic love, again, is something people act on (including with their general intelligence), but their planning process is not in general aligned with maximizing romantic love. (I'm editorializing about human nature here, but it looks to me like romantic love is mostly a strategy for achieving other goals.)
Altruism: it's debatable whether most instances of maximizing altruistic impact are better described as status maximization. Regardless, altruism is an overriding strategic goal, recruiting general intelligence, only for a very small fraction of humans.