more importantly, your utility probably doesn’t scale linearly with DALYs, if for no other reason than that you don’t care very much about things that happen at very low probabilities
My life satisfaction certainly does not scale linearly with DALYs (e.g., averting the destruction of 1000 DALYs does not make me ten times as happy as averting the destruction of 100 DALYs), but it does seem to be very much influenced by whether I have a sense that I’m “doing the right thing” (whatever that means).
But maybe you mean utility in some other sense than life satisfaction.
If I had the choice of pushing one of 10 buttons, each of which had a different distribution of probabilities attached to magnitudes of impact, I think I would push the aggregate utility maximizing one regardless of how small the probabilities were (a toy sketch of what I mean is below). Would this run against my values? Maybe; I’m not sure.
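To make that concrete, here is a toy sketch of what I mean by “push the aggregate utility maximizing one” (the buttons, the probabilities, and the logarithmic utility function are all made-up assumptions for illustration, nothing more): assign a concave utility to DALYs averted, compute each button’s expected utility, and pick the argmax. A concave utility also captures the point above that averting 1000 DALYs need not feel ten times as good as averting 100.

```python
import math
import random

# Toy utility over DALYs averted. log(1 + x) is a stand-in for "some concave
# function" -- purely an assumption for illustration.
def utility(dalys_averted):
    return math.log1p(dalys_averted)

# Ten hypothetical buttons, each a list of (probability, DALYs averted) outcomes.
# The numbers are invented; each button's probabilities sum to 1.
random.seed(0)
buttons = []
for _ in range(10):
    p_big = random.uniform(1e-6, 0.2)        # small chance of a huge impact
    big_impact = random.uniform(1e4, 1e6)
    modest_impact = random.uniform(1, 100)   # otherwise, a modest impact
    buttons.append([(p_big, big_impact), (1 - p_big, modest_impact)])

def expected_utility(outcomes):
    return sum(p * utility(x) for p, x in outcomes)

best = max(range(len(buttons)), key=lambda i: expected_utility(buttons[i]))
print("button with the highest expected utility:", best)

# Nonlinearity check: averting 1000 DALYs is not ten times as good as 100 here.
print(utility(1000) / utility(100))  # ~1.5, not 10
```

Whether I should really pick that argmax no matter how small p_big gets is exactly the part I’m unsure about.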
less importantly, abstract arguments are much less likely to be correct than they seem at face value. Likelihood of correctness decreases exponentially in both argument length and amount of abstraction, and it is hard for us to appreciate that intuitively.
I agree; I’ve been trying to formulate this intuition in quasi-rigorous terms and have not yet succeeded in doing so.
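One quasi-formal toy model I keep coming back to (my own sketch, with made-up numbers): if each inferential step of an argument independently holds with probability p, then an n-step chain holds with probability roughly p^n, which collapses surprisingly fast even when p is close to 1; “amount of abstraction” could be modeled, very roughly, as lowering p per step.

```python
# Toy model (an assumption, not a derivation): each step of an argument is
# independently correct with probability p, so an n-step chain is correct
# with probability p ** n.
for p in (0.95, 0.9, 0.8):
    for n in (5, 10, 20):
        print(f"p={p}, n={n}: chain holds with probability {p ** n:.3f}")
```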
Well, I am talking about the utility defined in the VNM utility theorem, which I assumed is what the term was generally taken to mean on LW, but perhaps I am mistaken. If you mean something else by utility, then I’m unsure why you would “push the aggregate utility maximizing one”, since that choice seems a bit arbitrary to me as a hard and fast rule (except for VNM utility, since VNM utility is by definition the thing whose expected value you maximize).
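For reference, the property I am leaning on (my paraphrase of the theorem, not a quote from anyone here): if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ such that

$$L \succeq M \iff \sum_i p^L_i \, u(x_i) \ge \sum_i p^M_i \, u(x_i),$$

with $u$ unique up to positive affine transformation. So for VNM utility, “maximize expected utility” is part of what the term means rather than an extra normative claim; my question is what plays that role if you mean something else by utility.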
Would you care to share your intuitions as to why you would push the utility maximizing button, and what you mean by utility in this case? (A partial definition or example is fine if you don’t have a precise definition.)
Thanks.