Why should I overcome my “bias” and not save my own child, just because there is some other child with a better chance of being saved, but whom I do not care about as much?
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it’s just a “shut up and multiply” kind of thing...
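(As an illustrative aside, the “shut up and multiply” arithmetic here can be made concrete. In the minimal sketch below, the equal-X utilities are the commenter’s stipulation, and the rescue probabilities are invented stand-ins for the “better chance of being saved” in the original question.)

```python
# Minimal sketch of the expected-utility comparison being invoked above.
# Assumptions (not from the thread): X is normalized to 1, and the success
# probabilities are made-up numbers for illustration only.

X = 1.0              # utility to a parent of their own child being saved (equal by stipulation)
p_my_child = 0.4     # hypothetical chance my rescue attempt succeeds
p_other_child = 0.7  # hypothetical "better chance" for the other child

# Summing impartially over everyone affected, only the probabilities differ,
# since the per-parent utilities are equal by assumption.
ev_save_mine = p_my_child * X
ev_save_other = p_other_child * X

print(f"save my child:    expected total utility = {ev_save_mine}")
print(f"save other child: expected total utility = {ev_save_other}")
```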
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility
This assumption is excluded by Kawoomba’s “but whom I do not care about as much”, so it isn’t directly relevant at this point (unless you are making a distinction between “caring” and “utility”, which should be more explicit).
I guess I’m just not sure why Kawoomba’s own utility gets special treatment over the other child’s parents’ utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.
I guess I’m just not sure why Kawoomba’s own utility gets special treatment over the other child’s parents’ utility function.
Taboo “utility function”, and “Kawoomba cares about Kawoomba’s utility function” would resolve into the tautology “Kawoomba is motivated by whatever it is that motivates Kawoomba”. The subtler problem is that it’s not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn’t (including those made by Kawoomba) may be unfounded. To the extent “utility function” refers to idealized extrapolated volition, rather than present desires, people won’t already have a good understanding of even their own “utility function”.
The subtler problem is that it’s not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn’t (including those made by Kawoomba) may be unfounded.
There is no idealized extrapolated volition based on my current volition that would prefer someone else’s child over one of my own (CEV_me, not CEV_mankind). There are certainly inconsistencies in my non-idealized utility function, but that does not mean that every statement I make about my own utility function must be suspect, merely that such suspect/contradictory statements exist.
If you prefer vanilla over strawberry ice cream, there may be cases where that preference does not transfer to your extrapolated volition due to some other contradictory preferences. However, for comparisons with a significant delta involved, the initial result that determines your decision should be preserved. (This may be different, though, when extrapolating to a CEV for all humankind.)
Also, you used my name with a frequency of 7⁄84 in your last comment <3.
that does not mean that every statement I make about my own utility function must be suspect
In general, unless something is well-understood, there is good reason to suspect an error. Human values are not something that’s understood particularly well.
If you value e.g. your family far more highly than a grain of salt, would you say that there is any chance of that not being reflected in your CEV?
Any “CEV” that doesn’t conserve e.g. that particular relationship would be misnamed.
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility
If you’ve found a way to aggregate utility across persons, I’d like to hear it.
Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor’s child, that is reflected in her utility function. What other standard are you trying to invoke?
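(A brief aside on the aggregation problem raised here: a utility function is standardly defined only up to a positive affine transformation, so a naive sum across persons depends on an arbitrary choice of scale. The numbers in the sketch below are invented; it illustrates that general point, not any method proposed in the thread.)

```python
# Sketch: rescaling one person's utility representation (which leaves that
# person's preferences unchanged) can flip which option a naive sum favors.
# All numbers are invented for illustration.

me       = {"save_mine": 10.0, "save_other": 2.0}  # my utilities over the two options
neighbor = {"save_mine": 1.0,  "save_other": 6.0}  # the other parent's utilities

def naive_sum(option, neighbor_scale=1.0):
    """Add the two people's utilities, with an arbitrary rescaling of the neighbor's."""
    return me[option] + neighbor_scale * neighbor[option]

for scale in (1.0, 3.0):
    winner = max(me, key=lambda option: naive_sum(option, scale))
    print(f"neighbor's utilities scaled by {scale}: the naive sum favors '{winner}'")

# Scale 1.0 favors 'save_mine' (11 vs 8); scale 3.0 favors 'save_other' (13 vs 20),
# even though the rescaled function represents exactly the same preferences.
```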
Ah, this clears things up a bit for me, thank you.
Why would I need to aim to satisfy overall utility, including that of others, as opposed to just that of my own family?
Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?
Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?
What reason do you have for aiming to satisfy your own utility function, or that of your family?
I’m afraid this is a little too much lingo for me. Sorry.
You’d have to taboo “evil” before I can answer this question.
What reason do you have for aiming to satisfy your own utility function
Um, it’s my utility function, that which I aim to maximize and which already incorporates e.g. my altruistic desires. Postulating “other preferences” that can overrule my utility function would be a contradiction in terms.
The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a “bias” and as part of your utility function, and who introduced the whole “evil” thing.
The nearest I can come to making sense of your claim is that it’s some sort of imaginary Prisoner’s Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.
However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children’s lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it’s still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you’re arguably not optimizing for humans anymore.
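(A short sketch of the payoff reasoning in the last two comments, with standard Prisoner’s Dilemma payoffs substituted in. The payoff numbers and the cooperation probability are invented, not taken from the thread.)

```python
# "Cooperate" = save the random child (counting on symmetric cases to save yours);
# "Defect" = save your own child. Payoffs use the standard PD ordering T > R > P > S
# and are invented for illustration, from one parent's point of view.

payoffs = {  # (my move, typical other parent's move) -> my payoff
    ("defect",    "cooperate"): 3,  # temptation: my child is covered either way
    ("cooperate", "cooperate"): 2,  # reward: mutual cooperation
    ("defect",    "defect"):    1,  # punishment: everyone saves only their own
    ("cooperate", "defect"):    0,  # sucker: I gave up my child's rescue for nothing in return
}

p_others_cooperate = 0.01  # "almost no other parent would cooperate here"

def expected_payoff(my_move):
    return (p_others_cooperate * payoffs[(my_move, "cooperate")]
            + (1 - p_others_cooperate) * payoffs[(my_move, "defect")])

# With these payoffs, defecting dominates whatever the others do (3 > 2 and 1 > 0),
# which is the "no-brainer" described above.
print(expected_payoff("defect"), expected_payoff("cooperate"))
```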