While these sound good, the rationale for why they are good goals is usually pretty hand-wavy (or maybe I just don’t understand it).
At some point you just have to start with some values. You can’t “justify” all of your values; you have to start somewhere. And there is no “research” that could tell you which values to start with.
Luckily, you already have some core values.
The goals you should pursue are the ones that help you realize those values.
but there are a ton of important questions where I don’t even know what the goal is
You seem to think that finding the “right” goals is just like learning any mundane fact about the world. But people can’t tell you what to want in life the way they can explain math to you; it’s something you have to feel out for yourself. Let me know if I’m misreading you.
The next step is to distinguish between terminal and instrumental values, or perhaps we could call them “goals” and “strategies”. Which things you want because they feel intrinsically valuable, and which things you want because they seem like a good idea to help you achieve the former.
For example, a goal may be “to be respected by others”, and a possible strategy is “get formal education”. It may be a bit complicated to disentangle, but imagine something like this:
Someone that asks you “if you 100% knew that people will always respect you, would you still want to get formal education?” and you say “well, I am also curious about how things work, and I also want to get a good job with a good salary” and they say “ok, so imagine that you 100% knew that people will respect you, and you can always find everything clearly explained on Wikipedia or Khan Academy, and the jobs would accept you based on what you already accomplished, ignoring your diploma… would you still want to get formal education?”—and if you say “no”, then education was just your strategy, not your goal.
On the other hand, if someone asks “why do you want to be respected by others” and you say “I guess it makes people more likely to listen to me, and it makes me feel safe” and they say “so if you 100% knew that you are perfectly safe, and people would always listen to what you say, they just will really disrespect you, would that be okay for you?” and you say “no; I just want to feel respected even if it serves no specific purpose”, then it is your goal.
Sometimes people go a bit too far and say “well, what everyone actually wants is to feel good, isn’t it?”. And while it is true that getting what you wanted usually makes you feel good, a mere feel-good pill is not what we actually want. If you feel disrespected, you wouldn’t ask for a pill that makes you falsely believe that you are respected. Feelings are (imprecise) indicators of our values, not the values themselves.
Hi weathersystems and Viliam, these are great ways to think about it! I especially like the link to terminal/intrinsic values, that does clarify a lot!
Using some of the terminology from that link, I would say it feels like my brain’s built-in values are mostly a big subgoal stomp of mutually contradictory, inconsistent, and changeable values.
While those built-in values work pretty well for many situations, e.g. I definitely wouldn’t just randomly jump out a window, it seems like they provide very weak guidance for me when making a lot of important decisions, e.g. valuing work vs valuing family (both of which I love).
Instead of relying on this subgoal stomp, it feels like my brain has this longing to find a small, principled, consistent set of terminal values that I could use to make decisions instead.
By way of analogy, maybe think about it like choosing a phone plan. My brain definitely has built in heuristics that would allow me to choose a phone plan (badly), but my brain is so happy that I know math and can use that to make the decision instead.
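To make the analogy concrete, here is the kind of calculation the “math” route enables; all plans, prices, and usage figures below are made up for illustration:

```python
# Compare phone plans by expected monthly cost for a given usage profile.
# Every plan, price, and usage number here is hypothetical.

def monthly_cost(base, included_gb, price_per_extra_gb, expected_gb):
    """Base fee plus overage charges for data beyond the included amount."""
    overage = max(0.0, expected_gb - included_gb) * price_per_extra_gb
    return base + overage

plans = {
    "small":     dict(base=10.0, included_gb=2, price_per_extra_gb=5.0),
    "medium":    dict(base=20.0, included_gb=10, price_per_extra_gb=3.0),
    "unlimited": dict(base=35.0, included_gb=float("inf"), price_per_extra_gb=0.0),
}

expected_gb = 8  # a guess at monthly usage
costs = {name: monthly_cost(expected_gb=expected_gb, **p) for name, p in plans.items()}
best = min(costs, key=costs.get)
print(best, costs[best])
```

The built-in heuristic (“unlimited sounds safest”) and the calculation can disagree, which is exactly why having the explicit model feels so much better than the gut call.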
It sounds like both of you may have gone through the exercise of finding terminal goals that work for you. Would you mind sharing some of the findings you had? I think having some examples of what works for other people might help me a lot in my search.
The article that Viliam pointed out (thanks again!) talks about inclusive genetic fitness as the goal that humans seem to approximate with their actual goals. Inclusive genetic fitness seems like it may be a reasonable terminal goal to replace the subgoal stomp. Are there resources that provide more details on this topic? (e.g. having the goal of maximizing inclusive fitness in the short term would look very different from maximizing inclusive fitness in the long term).
If you liked the article, you might want to try the whole book. It will also answer your question about replacing your goals with inclusive genetic fitness. (Spoiler: bad idea, evolution is not your friend.)
With problems like “my work or my family”, the key is usually to get out of this frame and instead think about specific details. What parts of your work do you actually like? What activities with your family do you like? Is it possible to work part-time while your kids are small? Or to work full-time and hire a babysitter? Start a company for people with little kids, which would provide a kindergarten for the employees’ kids? Or find a group of people who will work remotely for different companies, but live together and share the childcare? If you love your job, find a partner who hates theirs, and agree that the partner will take 100% care of the kids? Etc.
I would say it feels like my brain’s built-in values are mostly a big subgoal stomp of mutually contradictory, inconsistent, and changeable values. [...]
it feels like my brain has this longing to find a small, principled, consistent set of terminal values that I could use to make decisions instead.
Here’s a Slate Star Codex piece on our best guess at how our motivational system works: https://slatestarcodex.com/2018/02/07/guyenet-on-motivation/. It’s essentially just a bunch of small, mostly independent modules all fighting for control of the body to act according to what they want.
I don’t think there’s any way out of having “mutually contradictory, inconsistent, and changeable values.” We just gotta negotiate between these as best we can.
There are at least a couple problems with trying to come up with a “small, principled, consistent set of terminal values” you could use to make decisions.
1. You’re never gonna be able to do it in a way that covers all edge cases.
2. Even if you were able to come up with the “right” system, you wouldn’t actually be able to follow it, because our actual motivational systems aren’t simple rule-following systems. You’re gonna want what you want, even if your predetermined system says to do otherwise.
3. You don’t really get to decide what your terminal values are. I mean you can fudge it a bit, but you certainly don’t have complete control over them (and thank god).
Negotiating between competing values isn’t something you can smooth over with a few rules; it requires some degree of finesse and moment-to-moment awareness.
Do you play any board games? In chess there are a lot of what we can call “values.” Better to keep your king safe, control the center, don’t double your pawns etc. But there’s no “small, principled, consistent set of” rules you can use to negotiate between these. It’s always gotta be felt out in each new situation.
And life is much messier and more complex than something like chess.
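To illustrate the point, here is roughly what a “small, consistent rule set” for chess would look like: a fixed-weight sum over heuristic features. The features, feature values, and weights below are all invented; the point is that any fixed weighting bakes in trade-offs that concrete positions routinely violate.

```python
# A toy fixed-weight evaluation over hand-picked chess heuristics.
# Features and weights are made up for illustration; a real engine
# relies on search, not a static weighting of "values".

WEIGHTS = {"material": 1.0, "king_safety": 0.5, "center_control": 0.3}

def evaluate(features):
    """Score a position as a fixed linear combination of heuristic features."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Two hypothetical positions summarized by the same three features:
quiet  = {"material": 0, "king_safety": 2, "center_control": 1}
gambit = {"material": -1, "king_safety": 1, "center_control": 3}

print(evaluate(quiet), evaluate(gambit))
```

The weighting always prefers the quiet position, but whether the gambit is actually better depends on concrete tactics that no fixed set of weights can see, which is the “felt out in each new situation” part.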
It sounds like both of you may have gone through the exercise of finding terminal goals that work for you.
I “found terminal goals” in the sense that I tried to figure out what were the main things I wanted in life. I came up with some sort of list (which will probably change in the future). It’s a short list, but definitely not principled or consistent :D. Occasionally it does help to keep me focused on what matters to me. If I find myself spending a lot of time doing stuff that doesn’t go in one of those directions, I try to put myself more on track.
If you want I can try to retrace how I got there. But it seems like you’re more concerned with the deciding-between-competing-values thing.
Inclusive genetic fitness seems like it may be a reasonable terminal goal to replace the subgoal stomp.
Ya definitely don’t do that. If you did that you’d just spend all your time donating sperm or something.
Thanks Viliam and weathersystems! Sorry it took me a little while to respond, I wanted to make sure I had read and understood the pointers to the related work you guys provided.
Ya definitely don’t do that. If you did that you’d just spend all your time donating sperm or something.
If you liked the article, you might want to try the whole book. It will also answer your question about replacing your goals with inclusive genetic fitness. (Spoiler: bad idea, evolution is not your friend.)
I spent some time digging deeper into inclusive fitness. Social Evolution and Inclusive Fitness Theory: An Introduction by James A. R. Marshall provides a good summary.
There are indeed proofs which show that evolution selects for those individuals that always maximize inclusive genetic fitness in the moment. That said, these proofs assume that maximizing offspring in the moment won’t hurt your chances of creating offspring in the future.
So these proofs don’t really apply to the situations that humans find themselves in. Raping the nearest person may give you the highest inclusive genetic fitness in the moment, but you will go to jail and won’t be able to reproduce any further, so this behavior won’t be favored by evolution in the long run (thank god).
So yeah, definitely don’t maximize short term inclusive genetic fitness.
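For what it’s worth, the standard quantitative condition in this literature (covered in Marshall’s book) is Hamilton’s rule: an altruistic act is favored by selection when r·b > c, where r is genetic relatedness, b the reproductive benefit to the recipient, and c the cost to the actor. A minimal sketch with made-up benefit/cost numbers:

```python
# Hamilton's rule: an altruistic act is selected for when r * b > c,
# where r = genetic relatedness, b = benefit to the recipient and
# c = cost to the actor (both in units of offspring).
# The b and c values below are illustrative.

def favored_by_selection(r, b, c):
    return r * b > c

# Helping a full sibling (r = 0.5) is favored if they gain more than
# twice what it costs you; the same trade for a cousin (r = 0.125) is not.
print(favored_by_selection(r=0.5, b=3, c=1))
print(favored_by_selection(r=0.125, b=3, c=1))
```

This is the sense in which the theory is about gene-level bookkeeping rather than anything an individual would recognize as a life goal.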
So what about maximizing inclusive genetic fitness in the long term, say 1 billion years? I couldn’t find any papers analyzing what evolution would do to a strategy like that, but it intuitively sounds a lot better. If that was your terminal goal, you would probably want to push science forward, advance technology, spread the human race as far out into the universe as you can, etc. Honestly, those sound like pretty reasonable things I’d be happy to support.
1. You’re never gonna be able to do it in a way that covers all edge cases.
I completely agree that this is a possible (even likely) scenario. I really don’t know, that’s why I’m asking :)
2. Even if you were able to come up with the “right” system, you wouldn’t actually be able to follow it. Because our actual motivational systems aren’t simple rule following systems. You’re gonna want what you want, even if your predetermined system says to do otherwise.
3. You don’t really get to decide what your terminal values are. I mean you can fudge it a bit, but you certainly don’t have complete control over them (and thank god).
I agree that it’s going to be impossible to completely change my behavior just because I change my value system. E.g. no matter what the terminal goal was, I’d still spend many hours a day sleeping, I’d still need anesthesia to get through surgery, and I still wouldn’t be able to eat food that tastes absolutely terrible.
That said, I’d say there’s something like 80% of my after-tax income, 100% of my wealth, and maybe 12 hours a day that I can allocate pretty freely. For example, I think there would be a huge difference between living a life where I donate all my income, where I travel the world, where I play video games all day, where I have 12 kids, where I work for a random company 12 hours a day, or where I live on welfare. All of those are actions that I could absolutely take, if I (and maybe my wife) were convinced that they are the right thing to do.
Do you play any board games? In chess there are a lot of what we can call “values.” Better to keep your king safe, control the center, don’t double your pawns etc. But there’s no “small, principled, consistent set of” rules you can use to negotiate between these. It’s always gotta be felt out in each new situation.
Completely agree, even if you know the terminal goal, that doesn’t mean you’d have the computational capacity to always act optimally. That said, I’d be very happy if I were in a state like chess, i.e. where my terminal goal is very clear, even if I’m not always 100% able to take the perfect action to achieve that goal.
With problems like “my work or my family”, the key is usually to get out of this frame and instead think about specific details.
I love this sentiment. I generally think people underestimate how often you can have the cake and eat it too if you just think a bit outside the box.