You don’t get to decide utilities so much as you have to figure out what they are. You already have a utility function, and you do your best to describe it. How do you weight the things you value relative to each other?
This takes observation, because what we think we value often turns out not to be a good description of our feelings and behavior.
From our genes? And the goal is just to figure out what it is, but not change it for the better?
Can you explain how you would change your fundamental moral values for the better?
By criticizing them, and by conjecturing improvements which meet the challenges of the criticism. It is the same method as for improving all other knowledge.
In outline it is pretty simple. You may wonder things like what would count as a good moral criticism. To that I would say: there are many books full of examples, so why dismiss all that? There is no one true way of arguing. Normal arguments are OK; I do not reject them all out of hand but try to meet their challenges. Even in the ones with some kind of mistake (most of them), you can often find a substantive point which can be rescued. It’s important to engage with the best versions of theories you can think of.
BTW, once upon a time I was vaguely socialist. Now I’m a (classical) liberal. People do change their fundamental moral values for the better in real life. I attended a speech by a former Muslim terrorist who is now a pro-Western Christian (Walid Shoebat).
I’ve changed my social values plenty of times, because I decided different policies better served my terminal values. If you wanted to convince me to support looser gun control, for instance, I would be amenable to that because my position on gun control is simply an avenue for satisfying my core values, which might better be satisfied in a different way.
If you tried to convince me to support increased human suffering as an end goal, I would not be amenable to that, unless it turns out I have some value I regard as even more important that would be served by it.
This is what Popper called the Myth of the Framework and refuted in his essay by that name. It’s just not true that everyone is totally set in their ways and extremely closed-minded, as you suggest. People with different frameworks learn from each other.
One example: children learn. They are not born sharing their parents’ framework.
You probably think that frameworks are genetic, or at least partly so. Dealing with that would take a lengthy discussion. Are you interested in this stuff? Would you read a book about it? Do you want to take it seriously?
I’m somewhat skeptical because, e.g., you gave no reply to some of what I said.
I think a lot of the reason people don’t learn other frameworks, in practice, is merely that they choose not to. They think it sounds stupid (before they understand what it’s actually saying) and decide not to try.
When did I suggest that everyone is set in their ways and extremely closed-minded? As I already pointed out, I’ve changed my own social values plenty of times. Our social frameworks are extremely plastic, because there are many possible ways to serve our terminal values.
I have responded to moral arguments with regard to more things than I could reasonably list here (economics, legal codes, etc.). I have done so because I was convinced that alternatives to my preexisting social framework better served my values.
Valuing strict gun control, to pick an example, is not genetically coded for. A person might have various inborn tendencies which will affect how they’re likely to feel about gun control; they might have innate predispositions towards authoritarianism or libertarianism, for instance, that will affect how they form their opinion. A person who valued freedom highly enough might support little or no gun control even if they were convinced that it would result in a greater loss of life. You would have a hard time finding anyone who valued freedom so much that they would support looser gun control if they were convinced it would destroy 90% of the world population, which gives you a bit of information about how they weight their preferences.
If you wanted to convince me to support more human suffering instead of more human happiness, you would have to appeal to something else I value even more that would be served by this. If you could argue that my preference for happiness is arbitrary and that a preference for suffering is more natural, or even demonstrate that the moral goodness of human suffering is intrinsically inscribed on the fabric of the universe, why should I care? To make me want to make humans unhappy, you’d have to convince me there’s something else I want enough to make humans unhappy for its sake.
I also don’t feel I’m being properly understood here; I’m sorry if I’m not following up on everything, but I’m trying to focus on the things that I think meaningfully further the conversation, and I think some of your arguments are based on misapprehensions about where I’m coming from. You’ve already made it clear that you feel the same, but you can take it as assured that I’m both trying to understand you and make myself understood.
When did I suggest that everyone is set in their ways and extremely closed-minded?
You suggested it about a category of ideas which you called “core values”.
If you wanted to convince me to support more human suffering instead of more human happiness, you would have to appeal to something else I value even more
You are saying that you are not open to new values which contradict your core values. Ultimately you might replace all but the one that is the most core, but never that one.
That’s more or less correct. To quote one of Eliezer’s works of ridiculous fanfiction, “A moral system has room for only one absolute commandment; if two unbreakable rules collide, one has to give way.”
If circumstances force my various priorities into conflict, some must give way to others, and if I value one thing more than anything else, I must be willing to sacrifice anything else for it. That doesn’t necessarily make it my only terminal value; I might have major parts of my social framework which ultimately reduce to service to another value, and they’d have to bend if they ever came into conflict with a more heavily weighted value.
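As a minimal sketch of the weighting idea that runs through this exchange (purely illustrative: the value names, weights, and scores below are assumptions I've made up, not anything either side proposed), one could picture "how you weight the things you value relative to each other" as a weighted sum over how well an option serves each value:

```python
# Illustrative sketch only: toy value names, weights, and scores are assumptions.

# Hypothetical weights: a larger weight means the value counts for more
# when options are compared.
weights = {
    "human_happiness": 100.0,
    "personal_freedom": 10.0,
}

def utility(option_scores):
    """Weighted sum of how well an option serves each value (scores in [0, 1])."""
    return sum(weights[value] * option_scores.get(value, 0.0) for value in weights)

# Two hypothetical policies, scored on how well they serve each value.
policy_a = {"human_happiness": 0.9, "personal_freedom": 0.2}
policy_b = {"human_happiness": 0.3, "personal_freedom": 1.0}

# With human_happiness weighted ten times heavier, policy_a wins even though
# policy_b serves personal_freedom far better: the heavily weighted value
# dominates when the two conflict.
best = max([("policy_a", policy_a), ("policy_b", policy_b)],
           key=lambda name_and_scores: utility(name_and_scores[1]))
print(best[0], utility(best[1]))
```

In the extreme case, giving one value an overwhelming weight reproduces the "only one absolute commandment" point: whenever that value conflicts with any of the others, it wins.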