Thinking about value-drift:
It seems correct to me to term the earlier set of values the more “naive” set.* To keep a semi-consistent emotional tone across classes, I’ll call the values you get later the “jaded” set (although I think jadedness is only one of several possible later-life attractor states, which I’ll go over later).
I believe it’s unreasonable to assume that any person’s set of naive values are, or were, perfect values. But it’s worth noting that there are several useful properties that you can generally associate with them, which I’ll list below.
My internal model of how value drift has worked within myself looks a lot more like “updating payoff matrices” and “re-binning things.” Its direction feels determined not purely by drift, but by some weird mix of deliberate steering, getting dragged by currents, random walk, and more accurately mapping the surrounding environment.
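That “updating payoff matrices” framing can be sketched in code. This is a toy illustration only; the situations, payoff numbers, and learning rate are all invented for the example:

```python
# Toy sketch of value drift as "updating payoff matrices": each
# experience nudges the stored utility for a situation a little way
# toward the observed payoff. All names and numbers are illustrative.

def update_values(values, experiences, learning_rate=0.1):
    """Nudge each stored utility toward newly observed payoffs."""
    for situation, observed_payoff in experiences:
        old = values.get(situation, 0.0)  # sparse zones default to neutral
        values[situation] = old + learning_rate * (observed_payoff - old)
    return values

values = {"helping_strangers": 0.8, "office_politics": -0.5}

# ten repeated mildly-positive experiences slowly drag a strongly
# negative stored value toward neutral, without ever jumping there
update_values(values, [("office_politics", 0.2)] * 10)
```

The learning rate here is the “step size” discussed below: a large one makes values swing fast and erratically, a small one makes them sticky.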
That said, here are some generalizations...
Naive:
Often oversimplified
This is useful in generating ideals, but bad for murphyjitsu
Some zones are sparsely-populated
A single strong value (positive or negative) anywhere near those zones will tend to color the assumed value of a large nearby area. Sometimes it does this incorrectly.
Generally stronger emotional affects (utility assignments), and larger step sizes
Fast, large-step, more likely to see erratic changes
Closer to black-and-white thinking; some zones have intense positive or negative associations, and if you perform nearest neighbor on sparse data, the cutoff can be steep
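The nearest-neighbor point can be made concrete. In this toy sketch (values invented for illustration), two intensely-valued examples are the only data, so everything in between inherits the full value of whichever example is closer; the cutoff is a cliff, not a slope:

```python
# Sketch of "nearest neighbor on sparse data": with only two
# intensely-valued examples, every query point inherits the full
# value of whichever example is closer.

def nearest_neighbor_value(x, examples):
    """Return the stored value of the closest labeled example."""
    return min(examples, key=lambda e: abs(x - e[0]))[1]

# two sparse data points: one strongly positive, one strongly negative
examples = [(0.0, +1.0), (10.0, -1.0)]

# sample the space between them
assigned = [nearest_neighbor_value(x, examples) for x in range(11)]
# values jump from +1.0 straight to -1.0 near the midpoint; there is
# no intermediate "mildly good / mildly bad" region anywhere
```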
Jaded:
Usually, your direct experience of reality has been fed more data and is more useful.
Things might shift so that the experientially-determined system plays a larger role in determining what actions you take, relative to theory
Experiential ~ Sys1, Theoretical ~ Sys2, but the match isn’t perfect. The former handles the bulk of the data right as it comes in, the latter sees heavily-filtered useful stuff and makes cleaner models.**
Speaking generally… while a select set of things may be represented in a more complete and complex way (ex: aspects of jobs, some matters of practical and realistic politics, most of the things we call “wisdom”), other axes of value/development/learning may have gotten stuck.
Here are names for some of those sticky states: jaded, bitter, stuck, and conformist. See below for more detail.
Related: Freedomspotting
** What are some problems with each? Theoretical can be bad at speed and time-discounting, while Experiential is bad at noticing the possibility of dangerous unknown-unknowns?
Jaded modes in more detail
Here are several archetypal jaded/stuck error-modes. I tend to think about this in terms of neural nets and Markov chains, so I have also noted the difficult-learning-terrain structures that I tend to associate with each.
Jaded: Flat low-affect zones offer no discernible learning slope. If you have lost the ability to assign different values to different states, that spells Game Over for learning along that axis.
Bitter: Overlearned/overgeneralized negatives are assigned to what would otherwise have been inferred to be a positive action by naive theory. Thenceforth, you are unable to collect data from that zone due to the strong expectations of intense negative-utility. Keep in mind that this is not always wrong. But when it is wrong, this can be massively frustrating.
(Related: The problems with reasoning about counterfactual actions as an embedded agent with a self-model)
Stuck: Occupying an eccentric zone on the wrong side of an uncanny valley. One common example is people who specialized into a dead field and now refuse to switch.
Conformist: Situated in some strongly self-reinforcing zone, often at the peak of some locally-maximal hill. If the hill has steep moat-like drop-off, this is a common subtype of Stuck. In some sense, hill-climbing is just correct neural net behavior; this is only a problem because you might be missing a better global maximum, or might have different endorsed values than your felt ones.
An added element that can make this particularly problematic: if you stay on a steep, tiny hill for long enough, the gains from hill-climbing can set up a positive incentive for learning a small step size. If you believe that different hills have radically different maxima, and you aspire to global maximization, then this is something you very strongly want to avoid.
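A minimal hill-climbing sketch of the small-hill trap, with an invented landscape: a greedy climber taking small steps tops out on the nearby local peak, while larger steps can cross the flat “moat” and reach the higher hill (at the cost of overshooting its exact summit):

```python
# Sketch of the tiny-hill trap: greedy hill-climbing on a toy landscape
# with a small local peak near the start and a much higher peak further
# away. The landscape shape and step sizes are illustrative.

def landscape(x):
    small_hill = max(0.0, 2 - abs(x - 2))    # local peak: height 2 at x=2
    big_hill = max(0.0, 6 - 2 * abs(x - 8))  # global peak: height 6 at x=8
    return small_hill + big_hill

def hill_climb(x, step, iterations=100):
    """Greedily move to whichever neighbor is higher; stop at a peak."""
    for _ in range(iterations):
        best = max((x - step, x, x + step), key=landscape)
        if best == x:
            break
        x = best
    return x

small = hill_climb(0.0, step=0.5)  # creeps up the nearby hill and stops at x=2
large = hill_climb(0.0, step=3.0)  # jumps the moat, lands near the big peak
```

Note the trade-off the text describes: the small step size is locally rewarded (it climbs smoothly, every move is an improvement), which is exactly what makes it unable to leave the small hill.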
*From context, I’m inferring that we’re discussing not completely naive values, but something closer to the values people have in their late teens or early twenties.
Naive modes in more detail
Naive values are often simplified values; younger people have many zones of sparse or merely theorized data, and are more prone to black-and-white thinking over large areas. They are also sometimes notable for their lack of stability and large step size (hence the phrase “it’s just a phase”). Someone in their early 20s is less likely than someone in their 40s to be in a stable holding pattern. There’s been less time to get stuck in a self-perpetuating rut, and your environment and mind are often changing fast enough that, given enough time, any holding patterns will get shaken up when the outside environment alters.
The pattern often does seem to be towards jaded (flat low-affect zones that offer no discernible learning slope), bitter (overlearned/overgeneralized negatives assigned to what would otherwise have been inferred to be a positive action), stuck (occupying eccentric zones on the wrong side of an uncanny valley), or conformist (strongly self-reinforcing zones, often at the peak of some locally-maximal hill).
Many people I know have at least one internal-measurement instrument that is so far skewed that you may as well consider it broken. Take, for instance, people who never believe that other people like them, despite all evidence to the contrary. This is not a given, but some lucky jaded people will have identified some of the zones where they cannot trust their own internal measuring devices, and may even be capable of having a bit of perspective about that in ways that are far less common among the naive. They might also have the epistemic humility to have a designated friend filling in that particular gap in their perception, something that requires trust built over time.