So you’re suggesting, in my example, a box that approaches 500 utilons over the course of a day, then disappears?
This isn’t even a problem. I just need a good enough reaction time to open it as close to the 24-hour mark as possible, although at some point I may decide that the risk of missing the deadline outweighs the extra utilons. Either way, this isn’t a controversial thought experiment in that case.
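To make that tradeoff concrete, here’s a rough Python sketch. The 500-utilon cap and the 24-hour window come from the example; the linear growth and the reaction-time miss model are assumptions of mine, just to show the shape of the calculation.

```python
# Rough sketch of "when do I plan to open the box?"
# From the example: the box is worth up to 500 utilons and vanishes after 24 hours.
# Assumed by me: value grows linearly toward the cap, and aiming too close to the
# deadline risks missing the box entirely (a crude reaction-time margin model).

DEADLINE_H = 24.0
MAX_UTILONS = 500.0
REACTION_MARGIN_H = 0.01  # assumed: I need about 36 seconds of margin to open reliably

def box_value(t):
    """Utilons if the box is actually opened at hour t (linear growth, illustrative)."""
    return MAX_UTILONS * (t / DEADLINE_H)

def miss_probability(t):
    """Chance of missing the box entirely when aiming at hour t (illustrative)."""
    margin = DEADLINE_H - t
    return 0.0 if margin >= REACTION_MARGIN_H else 1.0 - margin / REACTION_MARGIN_H

def expected_utilons(t):
    return (1.0 - miss_probability(t)) * box_value(t)

# Scan candidate opening times; the best plan sits just inside the safety margin.
candidates = [DEADLINE_H * i / 100_000 for i in range(100_001)]
best = max(candidates, key=expected_utilons)
print(f"plan to open at hour {best:.3f}, expecting {expected_utilons(best):.2f} utilons")
```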
I thought you would realize I was carrying over the same assumption I made for the case of a utility function that discounts completely after a certain time: “Suppose you can think as fast as you want, and open the box at arbitrary speed.”
But if your utility function discounts based on the amount of thinking you’ve done rather than on time, I can’t think of an analogous trap.
So, ideally, these utility functions wouldn’t be arbitrary, but would somehow reflect things people might actually think. So, for example, if the box is only allowed to contain varying amounts of money, I would want to discount based on time (for reasons of investment if nothing else) and also put an upper bound on the utility I get (because at some point you just have so much money you can afford pretty much anything).
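Concretely, for the money-only box I have in mind something shaped like this. The exponential saturation, the dollar scale, and the 3% annual discount are placeholder numbers of mine, not claims about the right ones.

```python
import math

MAX_UTILONS = 500.0        # upper bound: past some point more money barely helps
SATURATION_DOLLARS = 1e7   # assumed scale at which extra dollars stop mattering much
ANNUAL_DISCOUNT = 0.03     # assumed time discount, e.g. for forgone investment

def utility_of_money(dollars, years_from_now=0.0):
    """Bounded, time-discounted utility of receiving `dollars` after `years_from_now` years."""
    undiscounted = MAX_UTILONS * (1.0 - math.exp(-dollars / SATURATION_DOLLARS))
    return undiscounted * (1.0 - ANNUAL_DISCOUNT) ** years_from_now

print(utility_of_money(1e6))        # a million dollars today
print(utility_of_money(1e12))       # a trillion today: close to the 500 cap
print(utility_of_money(1.0, 100))   # a dollar in 100 years: tiny but not zero
```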
When arbitrary utilons get mixed in, it becomes complicated, because I discount different ways of getting utility at different rates. For instance, a cure for cancer delivered 50 years from now would be worthless if people had already figured out how to cure cancer in the meantime; and even if they hadn’t, you’d total up all the casualties between now and then and discount based on those. This is different from money, because even getting a dollar 100 years from now is not entirely pointless.
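A crude way to put numbers on the cancer-cure case; the probability of an independent cure and the loss per year of waiting are placeholders of mine, just to show how the two effects interact.

```python
def cure_value_in_utilons(years_delay, p_cured_independently,
                          full_value=500.0, loss_per_year=8.0):
    """Toy model: a boxed cure is worthless if someone else finds one first;
    otherwise its value drops by the toll accumulated while waiting."""
    value_if_still_needed = max(0.0, full_value - loss_per_year * years_delay)
    return (1.0 - p_cured_independently) * value_if_still_needed

print(cure_value_in_utilons(0, 0.0))    # delivered now: full value
print(cure_value_in_utilons(50, 0.9))   # 50 years out and probably already solved: ~10 utilons left
```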
On the other hand, I don’t think my utility function discounts based on the amount of thinking I’ve done, at least not for money. I want to figure out what my true response to the problem is in that case (which is basically equivalent to the “You get $X. What do you want X to be?” problem). I think it’s that once I’ve spent a lot of time thinking about it and decided X should be, say, 100 quadrillion, which gets me 499 utilons out of a maximum of 500, making the decision and not thinking about it any further might be worth more than the remaining 1 utilon to me.
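Roughly, the comparison I’m describing looks like this. The bounded curve is the same kind of placeholder as above, scaled so that 100 quadrillion dollars lands near 499 utilons, and the cost I assign to further deliberation is equally made up.

```python
import math

MAX_UTILONS = 500.0
SATURATION_DOLLARS = 1.6e16  # chosen so X = 100 quadrillion gives roughly 499 utilons (illustrative)

def utility_of_x(dollars):
    return MAX_UTILONS * (1.0 - math.exp(-dollars / SATURATION_DOLLARS))

current_choice = 1e17                                        # 100 quadrillion dollars
on_the_table = MAX_UTILONS - utility_of_x(current_choice)    # less than 1 utilon left to gain
cost_of_more_thinking = 2.0                                  # assumed cost of deliberating longer

print(f"current choice gets {utility_of_x(current_choice):.2f} utilons")
print(f"at most {on_the_table:.2f} more available by thinking harder")
if cost_of_more_thinking > on_the_table:
    print("so the sensible move is to stop deliberating and open the box")
```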
Now you’re just dodging the thought experiment by changing it.

In a way, yes. I’m trying to cleanly separate the bits of the thought experiment I have an answer to from the bits I don’t have an answer to.
This thwarts the original box, but I just edited the OP to describe another box that would get this utility function in trouble.