> From the inside, this is an experience that in-the-moment is enjoyable/satisfying/juicy/fun/rewarding/attractive to you/thrilling/etc etc.
People’s preferences change across contexts because they are implicitly always trying to comply with what they think is permissible/safe before trying to get what they want, up to some level of stakes that outweighs this, along many different axes of things one can have a stake in (a toy sketch of this dynamic follows after these comments).
To see people’s intrinsic preferences, we have to account for the fact that people often aren’t getting what they want, and are tricked into wanting things that are suboptimal with respect to some of their long-suppressed wants, because of social itself.
This has to be really rigorous, because it’s competing against anti-inductive memes.
This is really important to model, because if we know anything about people’s terminal preferences modulo social, then we know we are confused about social anytime we can’t explain why people aren’t pursuing opportunities they should know about, or anytime they are internally conflicted even though they know all the consequences of their actions relative to their real, ideal-to-them terminal preferences.
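To make the "comply first, up to stakes" dynamic above concrete, here is a minimal sketch; everything in it (the `Option` fields, the `safety_floor`, the stakes-override rule) is my own illustrative invention, not something settled in this thread:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A candidate action, scored along a few illustrative axes."""
    name: str
    raw_want: float        # how much the agent intrinsically wants it
    permissibility: float  # perceived social safety, 0 (taboo) to 1 (fine)
    stakes: float          # how much is riding on this axis for the agent

def expressed_preference(options, safety_floor=0.5):
    """Comply-first selection: options that feel unsafe are filtered out
    unless the stakes are high enough to override the filter; the agent
    then takes the most-wanted survivor. Thresholds are placeholders."""
    survivors = [
        o for o in options
        if o.permissibility >= safety_floor or o.stakes > 1.0 - o.permissibility
    ]
    return max(survivors, key=lambda o: o.raw_want, default=None)

opts = [
    Option("pursue it openly", raw_want=0.9, permissibility=0.3, stakes=0.4),
    Option("safe default", raw_want=0.4, permissibility=0.9, stakes=0.1),
]
print(expressed_preference(opts).name)  # "safe default": the stronger want gets filtered
```

The point of the filter-then-maximize shape is that the revealed choice can look nothing like the underlying `raw_want` ordering, which is exactly why intrinsic preferences are hard to read off behavior.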
> Social sort of exists here, but only in the form that if an agent can give something you want, such as snuggles, then you want that interaction.
Is it social if a human wants another human to be smiling because the perception of smiles is good?
I wouldn’t say so, no.
Good point about lots of level 1 things being distorted or obscured by level 3. I think the model needs to be restructured so that level 1 has no privileged intrinsicness: instead, initialize moment-to-moment preferences with one thing, then update that based on pressures from the other things (rough sketch below).
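As a rough sketch of that restructuring, assuming an additive pressure model and invented level names (nothing here is settled in the thread):

```python
def moment_preference(seed, pressures, weights):
    """No level is privileged as intrinsic: a moment-to-moment preference
    is initialized from one source (seed) and then nudged by pressures
    from the other levels. The linear update rule is an assumption."""
    value = seed
    for level, pressure in pressures.items():
        value += weights.get(level, 0.0) * pressure
    return value

# A level-1 "juicy" signal seeded at 0.8, pushed down by a level-3 social
# pressure of -0.9 with weight 0.5, is expressed as 0.8 - 0.45 = 0.35.
print(moment_preference(0.8, {"level_3_social": -0.9}, {"level_3_social": 0.5}))
```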