Well, here’s an example that makes some sense to me. If I find another, less complicated situation which “should” provoke the same feelings as this situation, that’s progress—and I can! Assume my utility function is approximately linear in dollars I lose at this scale, dollars my employer loses at this scale, and dollars puppies gain at this scale. Then maybe I can attribute to myself my loss of the first $50, half of my loss of the next $50, half of my employer’s loss of $100, puppies’ gain of $50, and half of puppies’ gain of $150. That is, I should feel about the overall situation roughly the way I’d feel if I solely caused a situation where:
I lost $75
My employer lost $50
Puppies gained $125
That’s an example of math that makes a lot of sense to me, and it holds regardless of my utility function, apart from the fairly reasonable assumption of linearity at small scales.
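To make that arithmetic concrete, here’s a minimal sketch in Python. The dollar figures and the split into “solely mine” versus “jointly caused with my employer, so I take half credit” are just the ones from the paragraph above; the code is only bookkeeping, not a claim about the right attribution rule.

```python
# Half-credit attribution sketch, using the illustrative numbers above.
# "solely_mine": effects I'd attribute entirely to myself.
# "jointly_caused": effects my employer and I cause together; I take half credit.

solely_mine = {"me": -50, "employer": 0, "puppies": +50}
jointly_caused = {"me": -50, "employer": -100, "puppies": +150}

attributed_to_me = {
    party: solely_mine[party] + 0.5 * jointly_caused[party]
    for party in solely_mine
}

print(attributed_to_me)
# {'me': -75.0, 'employer': -50.0, 'puppies': 125.0}
# i.e. I lost $75, my employer lost $50, puppies gained $125, matching the list above.
```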
Okay, so “how should I feel” means “what is the utility of this scenario” in this context? In that case, you should use the full values, rather than discounting for things that were partially ‘someone else’s responsibility’. If you prefer world-state A to world-state B, you should act so as to cause A to obtain rather than B. The fact that A gets a tag saying “this wasn’t all me—someone helped with this” doesn’t make the difference in utilities any smaller.
Maybe so! It feels very wrong that both I and my employer should feel like we caused me to lose $100, my employer to lose $100, and puppies to gain $200. I mean, suppose there were a third agent who could vote yea or nay on the transaction, and ten coins are flipped; if they all land heads, her vote decides whether the transaction takes place. Should that agent, voting yea, also feel like she caused me to lose $100, my employer to lose $100, and puppies to gain $200? If yes, whoa; if no, why not, and does the same reason affect how I should feel?
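Just to put numbers on how little leverage that third agent has (this is only the coin arithmetic; it doesn’t settle how she should feel):

```python
# Probability that all ten coins land heads, i.e. that the third agent's
# vote is the one that actually decides whether the transaction happens.
p_decisive = 0.5 ** 10  # 1/1024, about 0.001

# Full effects of the transaction, from the scenario above.
effects = {"me": -100, "employer": -100, "puppies": +200}

# Her vote's expected counterfactual effect, counting only the worlds
# where the coins make her vote decisive.
expected_effect = {party: p_decisive * amount for party, amount in effects.items()}

print(p_decisive)       # 0.0009765625
print(expected_effect)  # {'me': -0.09765625, 'employer': -0.09765625, 'puppies': 0.1953125}
```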
Hmm, this makes it more clear to me what I mean by “how should I feel”. I think what I mean is something like “Suppose my brain was trying to build an intuition that gives me feelings about possible actions based on unseen predictions regarding how those actions affect the world, and lots of other brains were also trying to do this. How should these brains interpret that situation so as to not overtrain or undertrain?”
What I don’t want is a result that says “Enter into a contract with 99 other people that none of you donates $100 to puppies unless all of you do, then donate $100, so you can feel like you caused puppies to get $10000!”. Somehow that seems counterproductive. Except as a one-off game for funsies, which doesn’t count. :)
Actually, this isn’t wrong, as long as you think about it the right way. Yes, you are causing puppies to get $10000, but you are also causing 99 other people to lose $100 each, so you have to account for that.
More importantly, though, frame the scenario this way: 99 people have already signed the contract, and you still have to decide whether to sign it. Then you are clearly making the entire difference, and you should be willing to accept correspondingly large disutilities if they are necessary to get the deal to happen (unless they could easily find someone else, in which case both the math and the intuition agree that you are not creating as many utilons). Note that the math cannot require everyone to accept individually large disutilities, because then signing the contract would cause all of those disutilities to occur.
If, however, they have not signed anything yet, then either you know that they are going to sign and that they cannot find anyone else to be the 100th person, in which case this is equivalent to the other scenario, or you don’t know whether they are all going to sign it, in which case the utility is reduced by the uncertainty and you should no longer accept as large disutilities to sign the contract.
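Here’s a rough sketch of that last point; the 0.25 probability below is made up purely for illustration:

```python
# How much disutility signing is "worth" depends on how likely it is
# that the other 99 people actually sign (and can't be replaced).

donation_per_person = 100
num_people = 100  # the 99 others plus me

def value_of_my_signature(p_others_all_sign):
    """Expected effect of my signing, counting only worlds where I'm pivotal.

    If the other 99 sign, my signature causes puppies to gain $10000 and
    all 100 of us to lose $100 each; otherwise it changes nothing.
    """
    return {
        "puppies": p_others_all_sign * donation_per_person * num_people,
        "signers": -p_others_all_sign * donation_per_person * num_people,
    }

print(value_of_my_signature(1.0))   # they've already signed: {'puppies': 10000.0, 'signers': -10000.0}
print(value_of_my_signature(0.25))  # uncertain whether they will: {'puppies': 2500.0, 'signers': -2500.0}
```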
“Assume my utility function is approximately linear in dollars I lose, dollars my employer loses, and dollars puppies gain at this scale.”
If we made that assumption, then you’d never stop giving to puppies—whatever you gained by giving $100 you’d gain twice over by giving $200. Assuming that both your employer and your favorite charity have a lot more money than you, it’s probably okay to assume that they experience changes in marginal utility which are locally linear in dollars, but at some point you’re going to stop giving, because your own utility function will have gone noticeably nonlinear, e.g. that second $100 would have been a bigger loss to you than the first was.
Right, that’s why I specified “at this scale”… Oh I see it’s not clear the modifier refers to all three resources. Editing. :)
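For what it’s worth, here’s a toy illustration of why linearity only holds at small scales; the log utility and the wealth figures are made up purely for illustration:

```python
import math

# Toy concave (log) utility of wealth; the starting wealth is made up.
wealth = 2000.0
u = math.log

first_100_cost = u(wealth) - u(wealth - 100)          # utility lost giving the first $100
second_100_cost = u(wealth - 100) - u(wealth - 200)   # utility lost giving the second $100

print(first_100_cost)   # ~0.0513
print(second_100_cost)  # ~0.0541, so the second $100 hurts more than the first

# At a much larger scale (say, the employer or the charity), the same $100 step
# is nearly linear: math.log(1_000_000) - math.log(999_900) is about 0.0001.
```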