How can we build a society/world where there are strong optimization forces to enable people to choose System 2 preferences?
I think the real world qualifies quite well. People who listen to their System 2 achieve much more than people who are slaves to their System 1.
If you want stronger “optimization forces”, take away the safety net. Hunger and pain are excellent incentives. Not many people would allow themselves to get addicted to WoW if it meant they’d soon become homeless.
If you want stronger “optimization forces”, take away the safety net. Hunger and pain are excellent incentives.
Actual experiments in doing this have proven it to be extremely counterproductive. The more human effort needs to be poured into avoiding hunger, homelessness, and base pain, the less ends up available for serving “self-actualizing” goals, conforming to socially-approved-of lifestyles, or even increasing economic productivity.
If you have an intuition which tells you that punishing people makes them act smarter, it is wrong. Punishing people makes them spend mental effort on avoiding getting caught transgressing your norms when they could have spent that effort on something that was actually important.
the less ends up available for serving “self-actualizing” goals, conforming to socially-approved-of lifestyles,
LOL. For a lot of people, “self-actualization” ends up meaning sitting on a couch in front of an idiot box, eating chips. Nowadays it might be in front of their FB feed, but that’s essentially the same. And I’m not sure what “socially-approved-of lifestyles” are; that seems to depend a lot on the society in question.
If you have an intuition which tells you that punishing people makes them act smarter, it is wrong.
No. My intuition is that the threat of pain/hunger/etc. makes people act. Incentives matter.
LOL. For a lot of people, “self-actualization” ends up meaning sitting on a couch in front of an idiot box, eating chips. Nowadays it might be in front of their FB feed, but that’s essentially the same. And I’m not sure what “socially-approved-of lifestyles” are; that seems to depend a lot on the society in question.
Look, the mere fact that you condescend to and disapprove of the actions of others doesn’t mean you’ve proposed any kind of alternative (no, survivalism does not count, that problem was already solved), let alone demonstrated a metric by which your non-proposed alternative is superior (not even the “I like it” metric).
No. My intuition is that the threat of pain/hunger/etc. makes people act. Incentives matter.
Now explain why those actions or incentives matter, that is, what makes them superior to alternatives. No, sneering does not count.
That provided me with some perspective. I’d only been thinking of cases where we impose limitations, such as those we use with alcohol and addictive drugs. But, as you point out, there are also regulations that push us toward immediate gratification rather than away from it. If, after much deliberation, we collectively decide that 99% of potential values are long-term, then perhaps we’d wind up abolishing most or all such regulations, assuming that most System 2 values would benefit.
However, at least some System 2 values are likely orthogonal to these sorts of motivators. For instance, NaNoWriMo participation might go down in a world with fewer social and economic safety nets, since many people would be struggling up Maslow’s Hierarchy of Needs instead of writing. I’m not sure how large a fraction of System 2 values would be aided by negative reinforcement. Many people would abandon their long-term goals in order to remove the negative stimuli ASAP. If the shortest path to removing the stimuli gets them 90% of the way toward a goal, then I’d expect most people to go on to achieve the remaining 10%. But for goals that are orthogonal to pain and hunger, we might actually expect a lower rate of achievement.
If descriptive-ethics research shows that System 2 preferences dominate, and the majority of that weighted value is held back by safety nets, then it’ll be time to start cutting through red tape. If System 2 preferences dominate but the majority of that moral weight is supported by safety nets, then perhaps we need more cushions, or even a Basic Income. And if our considered preference is actually to “live in the moment” (System 1 preferences dominate), then perhaps we should optimize for wireheading, or whatever that utopia would look like.
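To make that branching concrete, here is a toy sketch in Python (illustrative only: the function name, the 0.5 thresholds, and the example numbers are invented for this comment, not drawn from any actual research):

```python
def policy_suggestion(system2_weight: float, blocked_share: float) -> str:
    """Map rough survey-style estimates onto the three branches above.

    system2_weight: fraction of weighted considered preferences that are
        long-term (System 2) rather than in-the-moment (System 1).
    blocked_share: of that System 2 value, the fraction that safety nets
        hold back rather than support.
    """
    if system2_weight <= 0.5:
        # Considered preferences mostly favor living in the moment.
        return "optimize for System 1 satisfaction (the wireheading-ish branch)"
    if blocked_share > 0.5:
        # Long-term goals dominate and are mostly held back by safety nets.
        return "cut the red tape that blocks long-term goals"
    # Long-term goals dominate but are mostly supported by safety nets.
    return "add more cushions, e.g. a Basic Income"


# Example with made-up numbers: System 2 dominates and is mostly supported.
print(policy_suggestion(system2_weight=0.99, blocked_share=0.2))
```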
More likely, this is an overly simplified model, and there are other concerns that I’m not taking into account but which may dominate the calculation. I completely missed the libertarian perspective, after all.
I think the real world qualifies quite well. People who listen to their System 2 achieve much more than people who are slaves to their System 1.
I think people who mainly listen to System 2 frequently suffer from akrasia. Productive people usually feel motivated to do what they are doing, and that’s System 1.