The actual human motivation and decision system operates by something like “expected valence” where “valence” is determined by some complex and largely unconscious calculation. When you start asking questions about “meaning” it’s very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like “utility maximization”, where “utility” is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you’re lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.
Possible courses of action include:
1. Brute-forcing it: just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.
2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn’t enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.
3. Giving up, basically. Determining that you’d rather just do things that don’t make you miserable, even if you’re being a bad utilitarian. This will cause ongoing low-level dissonance as you’re aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.
There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.
The fact that utilitarianism is not only impossible for humans to execute, but can be a source of great internal suffering merely to know about, is probably not talked about enough.
The specific example I’d point to here is the couple of years I spent as a vegetarian before eventually giving up and going back to eating meat. I basically felt like I had given up on what was “right” and gone back to doing “evil”. But remaining vegetarian was increasingly miserable for me, so eventually I quit trying.
I think it’s actually more fair to call this a conflict between two different System-1 subagents. One of the defining aspects of System-1 is that it doesn’t tend to be coherent. Part of System-1 wanted to not feel bad about killing animals, and a different part of System-1 wanted to eat bacon and not feel low-energy all the time. So there was a very evident clash between two competing System-1 felt needs, and the one that System-2 disapproved of ended up winning, despite months and months of consistent badgering by System-2.
I think you see this a lot especially in younger people who think that they can derive their “values” logically and then become happier by pursuing their logically derived values. It takes a bit of age and experience to just empirically observe what your values appear to be based on what you actually end up doing and enjoying.
I’m reminded of the post Purchase Fuzzies and Utilons Separately.