Assume my utility function is approximately linear in dollars I lose, dollars my employer loses, and dollars puppies gain at this scale.
If we made that assumption then you’d never stop giving to puppies—whatever you gained by giving $100 you’d gain twice over by giving $200. Assuming that both you and your favorite charity have a lot more money than you’re transferring, it’s probably okay to treat the changes in marginal utility as locally linear in dollars, but at some point you’re going to stop giving because your own utility function went noticeably nonlinear, e.g. that second $100 would have been a bigger loss to you than the first was.
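A minimal numeric sketch of that nonlinearity, assuming log utility (a standard model of diminishing marginal utility) and a hypothetical $5,000 personal bankroll versus a charity with $10M:

```python
import math

def utility(wealth):
    # Log utility: marginal utility of a dollar shrinks as wealth grows.
    return math.log(wealth)

my_wealth = 5_000.0        # hypothetical personal bankroll
charity_wealth = 10_000_000.0  # hypothetical charity budget

# Utility cost to me of the first $100 vs. the second $100 given away.
first_loss = utility(my_wealth) - utility(my_wealth - 100)
second_loss = utility(my_wealth - 100) - utility(my_wealth - 200)

# The charity's gain from two successive $100 donations is nearly identical:
# at its scale, marginal utility is locally linear.
charity_gain_1 = utility(charity_wealth + 100) - utility(charity_wealth)
charity_gain_2 = utility(charity_wealth + 200) - utility(charity_wealth + 100)

print(second_loss > first_loss)                          # the second $100 hurts more
print(abs(charity_gain_1 - charity_gain_2) / charity_gain_1 < 1e-4)
```

For me the second $100 costs noticeably more utility than the first, while the charity barely notices the difference between its first and second $100, which is why "locally linear" holds for them but eventually breaks for me.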
Right, that’s why I specified “at this scale”… Oh I see it’s not clear the modifier refers to all three resources. Editing. :)