Is the moral of this really that all decisions should be made so as to maximize the ultimate goal of happiness x longevity (of you or everyone), in utilitarian fashion; whereas maximizing for subgoals is sometimes/often a poor proxy?
Or is the point that it's impractical to do utilitarian calculus all the time, but that reasoning with the thin and thick lines can clarify the role of the subgoals so they can serve as adequate proxies?
(It’s partly unclear in my head as I didn’t grok the exact meaning of the lines & their thicknesses. And it’s too late at night for me to think about this!)
Ultimately you want to optimize for the supergoal. For some the supergoal is utilitarian happiness x longevity, but not for others. The post is agnostic on this question.
The best way to optimize for whatever the supergoal is might be to lean towards calculations, or it might be to lean towards heuristics. The post is agnostic on that question as well.
I think the big point in this post is that when you lose sight of what the supergoal actually is, you often fall into some bad failure modes and do a bad job of optimizing for it. (And also that simply being alive is, err, a pretty important thing.)
This summary of your post is exactly how I experienced it. From this reader’s perspective, you accomplished the goal of expressing this.
Also, I appreciate your agnosticism and acknowledgement that others may not have the same supergoal. I do know quite a few people who have had or are having your experience.
I wonder if part of your experience comes from the fact that you have a choice. Some do not, because, in their circumstances, complete attention must be focused on surviving. I wonder, also, if humans are a bit behind in their evolutionary adaptation to having this level of choice.
Your post also makes clear the incredible difficulty people face in AI alignment. It is difficult to align our own selves. We fall back on heuristics to save time and our own mental resources. There are multiple “right” answers. Rewards here have costs there. It’s difficult to assign weights. The weight values seem to fluctuate depending on where we focus our attention. If we spend too many resources trying to pick a direction, the paths meanwhile change and we have to reassess.
And there is the manipulation of incentives, particularly praise. Is the praise worth the cost? Did your start state set you up to put one foot in front of the next in response to praise? Do you always have to do a good job at being a CEO, a husband, a father? Is being in your wife’s company its own reward or are you doing the job of being a husband? Or do you feel both ways in fluctuating degrees? Also, it may be that the goal is not the only thing that directs your behavior. It may be that, sometimes, the push and pull of whatever small, repeated incentives are happening are guiding less planned behaviors. These less planned behaviors over time become your life.
Anyway, I appreciate what you have said and how you have said it.