Freeform answer:
My first instinct is to say this is a wrong question, in the sense that self-interest doesn't arise but rather pre-exists, and either survives or is suppressed. There's a small group that learns explicitly about utility functions and starts doing more maximization, but mostly self-interest starts out as something people already care about. Then they learn to stop, through some combination of implicit instruction and observation, gradual conditioning and so on, and/or those who don't stop get selected out.
In some places these suppression and replacement effects are very large. In other places, where people have to sit around doing real things, the effects are small or even non-existent, and people can act in their own interests, or in the interests of those around them, or towards whatever goal they care about.
There's still some of it there in almost all cases, even if it's suppressed, and when someone has sufficiently large self-interests at stake (or other things they value; it doesn't have to be selfish), that creates an opportunity to shock the person into reality and into caring about outcomes increasingly directly and explicitly. But it's not reliable. Some people (not many, but some) are so far gone that they really do give up everything that matters, or literally die, before that happens, even without an intentional boil-the-frog strategy designed to push them there. And if you do use such a strategy, you can do that to a lot more people.
So essentially, self-interest (in the sense of caring about any outcomes at all relative to any other outcomes at all) is the baseline scenario. It gets increasingly suppressed under some conditions, including mazes; in the extreme, there are severe selection effects against anyone not actively acting against such interests, as a way of passing the continuous no-utility-function tests that others are implicitly imposing. People then muddle along this way until sufficiently high, clear, and visible stakes shock some of them into utility-function mode, at least temporarily. If not enough of them do that enough of the time, reality causes the whole thing to come crashing down and get defeated by outsiders, and the cycle starts again.