It’s not that big a problem. Just make it so he makes you less happy.
The way I framed it originally, your only choices were to create him or not.
As opposed to the perfectly normal causal obligation we utilitarians have to tile the universe with people?
I don’t think we do. I think utilitarians have an obligation to create more people, but not a really large number of them. I think the counterintuitive implications of total and average utilitarianism are caused by the fact that having high total and high average levels of utility are both good things, and that trying to maximize one at the expense of the other leads to dystopia. The ethical thing to do, I think, is to use some resources to create new people and some to enhance the life satisfaction of those who already exist. Moderation in all things.
You are able to have kids, at least, and since you take after your parents, you’d acausally decide for them to create you by trying to have kids yourself.
I don’t think human parents have good enough predictive capabilities for acausal trade with them to work. They aren’t Omega; they aren’t even Bob the Jerk. That being said, I do intend to have children. A moderate number of children who will increase total utility without lowering average utility.
The way I framed it originally, your only choices were to create him or not.
I mean, alter the problem so that instead of making you miserable, he makes you less happy.
I think the counterintuitive implications of total and average utilitarianism are caused by the fact that having high total and high average levels of utility are both good things, and that trying to maximize one at the expense of the other leads to dystopia.
If you’re adding them, one will dominate. Are you multiplying them or something?
I don’t think it takes significantly more resources to have a happy human than a neutral human. It might at our current technology level, but that’s not always going to be a problem.
About the practical applications: you’d have to create people who would do good in their universe conditional on the fact that you’d make them, and they’d have to have a comparable prior probability of existing. More generally, you’d do something another likely agent would consider good (make paperclips, for example) when that agent would do what you consider good conditional on the fact that you’d do what they consider good.
I don’t really think the trade-offs would be worthwhile. We would have to have a significant comparative advantage at making paperclips. Then again, maybe we’d have a bunch of spare non-semiconductors (or non-carbon, if you’re a carbon chauvinist), and the Clippy would have a bunch of spare semiconductors, so we could do it cheaply.
Also, a lesser version of this works with EDT (and MWI). Clippys actually exist, just not in our Everett branch. The reason it’s lesser is that we can take the fact that we’re in this Everett branch as evidence that our branch is the more probable one. The Clippys would do the same if they used EDT, but there’s no reason we can’t do acausal trade with UDTers.
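To make that trade condition concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not anything from the original setup): it assumes the counterpart reasons symmetrically, doing the thing we value exactly when it predicts we’ll do the thing it values, so the only question is whether the probability-weighted payoff beats our cost. The helper name and all the numbers are made up.

```python
def worth_trading(p_counterpart, value_received, cost_of_favor):
    """Toy acausal-trade check (hypothetical helper, not a real library).

    Assumes the counterpart reasons symmetrically: it does our favor
    iff it predicts we do its favor. Cooperating is then worth it when
    the probability-weighted value we receive exceeds the cost of the
    favor we perform."""
    return p_counterpart * value_received > cost_of_favor

# Made-up numbers: a paperclipper we think exists (in some branch or
# universe) with probability 0.1, whose return favor would be worth
# 100 utils to us, and whose paperclips would cost us 5 utils' worth
# of spare, otherwise-useless material.
print(worth_trading(p_counterpart=0.1, value_received=100, cost_of_favor=5))  # True
```

On this toy reading, the EDT/MWI point above just amounts to assigning a smaller p_counterpart to branches we have evidence against.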
If you’re adding them, one will dominate. Are you multiplying them or something?
I’m regarding each as a single value that contributes, with diminishing returns, to an “overall value.” Because they have diminishing returns, one can never dominate the other; they both have to increase at the same rate. The question isn’t “What should we maximize, total or average?” The question is “We have X resources; what percentage should we use to increase total utility and what percentage should we use to increase average utility?” I actually have grown to hate the word “maximize,” because trying to maximize things tends to lead to increasing one important value at the expense of others.
I’m also not saying that total and average utility are the only contributing factors to “overall value.” Other factors, such as equality of utility, also contribute.
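Purely as an illustration of the difference this makes (log is just one arbitrary diminishing-returns function, and the worlds and numbers below are invented): plain addition lets the numerically larger term swamp the other, while a concave aggregation rewards keeping the two in balance.

```python
import math

def linear_value(total, average):
    # Plain addition: whichever term is numerically larger dominates.
    return total + average

def concave_value(total, average):
    # One possible diminishing-returns aggregation; log is an arbitrary
    # illustrative choice, not a claim about the "right" function.
    return math.log(total) + math.log(average)

# Two invented worlds with the same total utility:
crowded = dict(total=1e7, average=0.01)   # ~10^9 people at 0.01 utility each
moderate = dict(total=1e7, average=10.0)  # ~10^6 people at 10 utility each

print(linear_value(**crowded), linear_value(**moderate))    # nearly identical: the average term is swamped
print(concave_value(**crowded), concave_value(**moderate))  # the moderate world clearly scores higher
```

Under the concave version, pouring every marginal resource into either term eventually stops paying off, which is the “moderation” point above.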
I don’t think it takes significantly more resources to have a happy human than a neutral human. It might at our current technology level, but that’s not always going to be a problem.
I don’t just care about happiness. I care about satisfaction of preferences. Happiness is one very important preference, but it isn’t the only one. I want the human population to grow in size, and I also want us to grow progressively richer so we can satisfy more and more preferences. In other words, as we discover more resources we should allocate some towards creating more people and some towards enriching those who already exist.
About the practical applications: you’d have to create people who would do good in their universe conditional on the fact that you’d make them, and they’d have to have a comparable prior probability of existing.
Sorry, I was no longer talking about acausal trade when I said that. I was just talking about my normal, normative beliefs in regards to utilitarianism. It was in response to your claim that utilitarians have a duty to tile the universe with people even in a situation where there is no acausal trade involved.
That’s only a problem if they have expensive preferences. Don’t create people with expensive preferences. Create people whose preferences are either something that can be achieved via direct neural stimulation or something that’s going to happen anyway.
Sorry, I was no longer talking about acausal trade when I said that.
And I was no longer talking about your last comment. I was just talking about the general idea of your post.
Don’t create people with expensive preferences. Create people whose preferences are either something that can be achieved via direct neural stimulation or something that’s going to happen anyway.
Look, obviously when I said I want to enhance preference satisfaction in addition to happiness, these were shorthand terms for far more complex moral beliefs that I contracted for the sake of brevity. Creating people with really, really unambitious preferences would be an excessively simplified and literalistic interpretation of those moral rules, and it would lead to valueless and immoral results. I think we should call this sort of rules-lawyering, where one follows an abbreviated form of morality strictly literally rather than using it as a guideline for following a more complex set of values, “moral munchkining,” after the practice in role-playing games of single-mindedly focusing on the combat and looting aspects of the game at the expense of everything else.
What I really think it is moral to do is create a world where:
*All existing morally significant creatures have very high individual and collective utility (utility defined as preference satisfaction, positive emotions, and some other good stuff).
*There are a lot of those high utility creatures.
*There is some level of equality of utility. Utility monsters shouldn’t get all the resources, even if they can be created.
*The creatures’ feelings should have external referents.
*A large percentage of existing creatures should have very ambitious preferences, preferences that can never be fully satisfied. This is a good thing because it will encourage them to achieve more and personally grow.
*Their preferences should be highly satisfied because they are smart, strong, and have lots of friends, not because they are unambitious.
*A large percentage of those creatures should exhibit a lot of the human universals, such as love, curiosity, friendship, play, etc.
*The world should contain a great many more values that it would take even longer to list.
That is the sort of world we all have a moral duty to work towards creating. Not some dull world full of people who don’t want anything big or important. That is a dystopia I have a duty to work towards stopping. Morality isn’t simple. You can’t reduce it to a one-sentence-long command and then figure out the cheapest, most literalistic way to obey that one sentence. That is the road to hell.
Let me put it this way: Notch created Minecraft. It is awesome. There is nothing unambitious about it. It’s also something that exists entirely within a set of computers.
I suppose when I said “direct neural stimulation” it sounded like I meant something closer to wireheading. I just meant the matrix.
This is a good thing because it will encourage them to achieve more and personally grow.
I thought you were listing things you find intrinsically important.
Let me put it this way: Notch created Minecraft. It is awesome. There is nothing unambitious about it. It’s also something that exists entirely within a set of computers.
Agreed. I would count interacting with complex pieces of computing code as an “external referent.”
I suppose when I said “direct neural stimulation” it sounded like I meant something closer to wireheading. I just meant the matrix.
You’re right, when you said that I interpreted it to mean you were advocating wireheading, which I obviously find horrifying. The matrix, by contrast, is reasonably palatable.
I don’t see a world consisting mainly of matrix-dwellers as a dystopia, as long as the other people that they interact with in the matrix are real. A future where the majority of the population spends most of their time playing really complex and ambitious MMORPGs with each other would be a pretty awesome future.
I thought you were listing things you find intrinsically important.
I was; the personal growth thing is just a bonus. I probably should have left it out; it’s confusing, since everything else on the list involves terminal values.
Notch created Minecraft. It is awesome. There is nothing unambitious about it.
Nothing unambitious? Really? It’s inspired by Dwarf Fortress. Being an order of magnitude or three less deep, nuanced, and challenging than its inspiration has to count as at least slightly unambitious.
I tend to regard computers as being part of the outside world. That’s why your initial comment confused me.
Still, your point that brain emulators in a matrix could live very rich, fulfilled, and happy lives that satisfy all basic human values, even if they rarely interact with the world outside the computers they inhabit, is basically sound.
That and I explained it badly. And I may or may not have originally meant wireheading and just convinced myself otherwise when it suited me. I can’t even tell.
I was giving an example of something awesome that has been done without altering the outside world. You just gave another example.
The claim “There is nothing unambitious about [Minecraft]” is either plainly false or is ascribed some meaning which is unrecognizable to me as my spoken language.
Well, if you’re measuring unambitiousness against the maximum possible ambitiousness you could have, then yes, being unambitious is trivial.
This is both true and utterly inapplicable.
It was an exaggeration. It’s not pure ambition, but it’s not something anyone would consider unambitious.
Let’s not create people who don’t want to exist in the first place! Infinite free utility!