There are other ways of interpreting value stability; a satisficer is one example. But those don’t tend to be stable.
That statement does not make sense. I hope if you read it with a fresh mind you can see why. “There are other ways of defining stable, but they are not stable.” Perhaps you need to taboo the word stable here?
And would those defaults and update procedures remain stable themselves?
No, and that’s the whole point! Stability is scary. Stability leads to Clippy. People wouldn’t want stable. They’d want sensible. Sensible updates its behavior based on new information.
“There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice”.
Stability leads to Clippy.
There are many bad stable outcomes. And an unstable update system will eventually fall into one of them, because they’re attractor states. To avoid this, you need to define “sensible” in such a way that the agent never enters such states. You’re effectively promoting a different kind of goal stability: a zone of stability, rather than a single point. It’s not intrinsically a bad idea, but it’s not clear that it’s easier than finding a single ideal goal system. And it’s very underdefined at this point.
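To make the attractor-state point concrete, here is a toy sketch (purely illustrative, not anything from the original exchange): model the agent’s values as a single number drifting under an unconstrained update rule, with a couple of “bad but stable” zones acting as absorbing states. The zones, the step size, and the helper drift_until_absorbed are all hypothetical choices made just for the illustration.

```python
import random

# Toy illustration (hypothetical model): the agent's "values" are one number
# drifting under an unconstrained update rule. A few regions are absorbing
# attractors: once the values land there, the process never leaves them.
ATTRACTORS = [(-10.0, -9.0), (9.0, 10.0)]  # hypothetical "bad but stable" zones

def in_attractor(v):
    return any(lo <= v <= hi for lo, hi in ATTRACTORS)

def drift_until_absorbed(v=0.0, step=0.5, max_steps=1_000_000):
    """Apply unconstrained random updates until the values hit an attractor."""
    for t in range(max_steps):
        if in_attractor(v):
            return t, v  # absorbed: the value system is now stable, for better or worse
        v += random.uniform(-step, step)  # "update on new information" with no stability constraint
    return max_steps, v

if __name__ == "__main__":
    random.seed(0)
    steps, final = drift_until_absorbed()
    print(f"absorbed after {steps} steps at value {final:.2f}")
```

However the run is seeded, the walk ends up in one of the attractor zones sooner or later; the only open questions are which one and how long it takes. A “zone of stability” proposal amounts to constraining the update rule so that the walk can never reach those zones in the first place.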
“There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice”.
Ok, we are now quite deep in a thread that started with me pointing out that a constant value system might be a bad thing! People want machines whose actions align with their own morality, and humans don’t have constant value systems (maybe this is where we disagree?).
There are many bad stable outcomes. And an unstable update system will eventually fall into one of them, because they’re attractor states.
Why don’t we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?
Why don’t we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?
That’s an interesting question, partially because we’d want to copy that and implement it in AI. A large part of it seems to be social pressure, and lack of power: people must respond to social pressure, because they don’t have the power to ignore it (a superintelligent AI would be very different, as would a superintelligent human). This is also connected with some evolutionary instincts, which cause us to behave in many ways as if we were in a tribal society with high costs to deviant behaviour—even if this is no longer the case.
The other main reason is evolution itself: very good at producing robustness, terrible at efficiency. If/when humans start self-modifying freely, I’d start being worried about that tendency for them too...