I’d reverse the importance of those two considerations. Even though my utility doesn’t scale linearly with DALYs, I wish it did.
Why do you wish it did?
My actual utility, I think, does scale linearly with DALYs, but my hedons don’t. I’d like my hedons to match my utilons so that I can maximize both at the same time (by definition I prefer to maximize utilons if I have to pick, but doing so takes willpower).
Er, I understand that utility != pleasure, but again, why does your utility scale linearly with DALYs? It seems like the sentiments you’ve expressed so far imply that your (ideal) utility function should not favor your own DALYs over someone else’s, but I don’t see why that implies that utility scales linearly with DALYs overall.
If I think all DALYs are equally valuable, I should value twice as many twice as much. That’s why I’d prefer it to be linear.
If by value you mean “place utility on”, then that doesn’t follow. As I said, utility has to do (among many other things) with risk aversion. You could be willing to pay twice as many dollars for twice as many DALYs and yet not place twice as much utility on twice as many DALYs. If we normalize 1 DALY = 1 utilon, then the utility of x DALYs is by definition 1/p, where p is the probability at which you would be exactly indifferent about paying 1 DALY for a p chance of x DALYs.
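To make that definition concrete, here’s a minimal sketch, assuming utility is defined over DALYs gained with u(0) = 0 and the normalization u(1) = 1. The sqrt utility is a made-up example for illustration, not anyone’s actual preferences:

```python
from math import sqrt

def indifference_prob(u, x):
    """Probability p at which a p chance of x DALYs is exactly worth
    the 1 DALY you pay for it: u(1) = p * u(x), so p = u(1) / u(x)."""
    return u(1) / u(x)

u = sqrt  # hypothetical risk-averse utility over DALYs gained, with u(1) = 1
for x in [4, 100]:
    p = indifference_prob(u, x)
    # Per the definition above, the utility of x DALYs is 1/p.
    print(f"x = {x}: indifferent at p = {p}, so u(x) = {1 / p}")
# x = 4:   p = 0.5, so u(4)   = 2.0  (= sqrt(4))
# x = 100: p = 0.1, so u(100) = 10.0 (= sqrt(100))
```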
Again, having all DALYs be equally valuable doesn’t mean that your utility function scales linearly with DALYs; you could have a utility function that is, say, sqrt(# DALYs), and this would still value all DALYs equally, as in the sketch below. Though also see Will_Newsome’s comments elsewhere about why talking about things in terms of utility is probably not the best idea anyway.
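A sqrt utility treats every DALY identically (it only looks at the count, not whose DALYs they are) yet still turns down fair gambles. A quick sketch, again with sqrt as a stand-in:

```python
from math import sqrt

u = sqrt  # illustrative non-linear utility; it counts every DALY the same way
certain = u(1)                    # 1 DALY for sure: utility 1.0
gamble = 0.5 * u(2) + 0.5 * u(0)  # 50% chance of 2 DALYs: utility ~0.707
print(certain > gamble)  # True -- equal expected DALYs, but the sure DALY wins,
                         # so valuing all DALYs equally doesn't force linearity
```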
If by utility you meant something other than VNM utility, then I apologize for the confusion (although, as I pointed out elsewhere, I would then object to claims that you should maximize its expected value).
I’m afraid my past few comments have been confused. I don’t know as much about my utility function as I wish I did. I think I am allowed to assign positive utility to a change in my utility function, and if so, I want my utility function to be linear in DALYs. It probably isn’t linear already.
I think we may be talking past each other (or else I’m confused). My question for you is whether you would (or wish you would) sacrifice 1 DALY in order to have a 1 in 10^50 chance of creating 1+10^50 DALYs. And if so, then why?
(If my questions are becoming tedious then feel free to ignore them.)
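(For concreteness, here is the expected-value arithmetic the question turns on, assuming a utility function exactly linear in DALYs. Exact rationals are used because floats can’t distinguish 1 + 10^-50 from 1:)

```python
from fractions import Fraction

p = Fraction(1, 10**50)     # 1-in-10^50 chance
payoff = 1 + 10**50         # DALYs created if the gamble pays off
cost = 1                    # the DALY sacrificed up front

expected_gain = p * payoff  # = (1 + 10^50) / 10^50 = 1 + 10^-50
print(expected_gain > cost) # True -- a linear-in-DALYs utility says take it,
                            # but only by a margin of 10^-50 DALYs
```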
I don’t trust questions involving numbers that large and/or probabilities that small, but I think so, yes.
Probably good not to trust such numbers =). But can you share any reasoning or intuition for why the answer is yes?