“A sufficiently intelligent agent will try to prevent its goals[1] from changing, at least if it is consequentialist.”
It seems that in humans, smarter people are more able and likely to change their goals. A smart person may change his/her views about how the universe can best be arranged upon reading Nick Bostrom’s book Deep Utopia, for example.
“I think humans are stable, multi-objective systems, at least in the short term. Our goals and beliefs change, but we preserve our important values over most of those changes. Even when gaining or losing religion, most people seem to maintain their goal of helping other people (if they have such a goal); they just change their beliefs about how to best do that.”
A human may change from wanting to help people to not wanting to help people if he/she gets 5 hours of sleep instead of 8.
I think my terminology isn’t totally clear. By “goals” in that statement, I mean what we mean by “values” in humans. The two are used in overlapping and mostly interchangeable ways in my writing.
Humans aren’t sufficiently intelligent to be all that internally consistent.
In many cases of humans changing goals, I’d say they’re actually changing subgoals, while their central goal (be happy/satisfied/joyous) remains the same. This may be described as changing goals while keeping the same values.
Note ‘in the short term’ (I think you’re quoting Bostrom? The context isn’t quite clear). In the long term, with increasing intelligence and self-awareness, I’d expect some of people’s goals to change as they become more self-aware and work toward more internal coherence (e.g., many people change their goal of eating delicious food when they realize it conflicts with their more important goal of being happy and living a long life).
Yes, humans may change exactly that way. A friend said he’d gotten divorced after getting a CPAP to solve his sleep apnea: “When we got married, we were both sad and angry people. Now I’m not.” But that’s because we’re pretty random and biologically determined.
Both quotes are from your above post. Apologies for confusion.