The question isn’t well-defined. Utility is a measure of value for different states of the world. You can’t just “give x utility”, you have to actually alter some state of the world, so to be meaningful the question needs to be formulated in terms of concrete effects in the world—lives saved, dollars gained, or whatever.
Humans also seem to have bounded utility functions (as far as they can be said to have such at all), so the “1 utility” needs to be defined so that we know how to adjust for our bounds.
I think this kind of criticism makes sense only if you postulate some kind of extra, physical restriction on utilities. Perhaps humans have bounded utility functions, but do all agents? It sure seems like decision theory should be able to handle agents with unbounded utility functions. If this is impossible for some reason, well, that’s interesting in its own right. To figure out why it’s impossible, we first have to notice our own confusion.
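To make the bounded-vs-unbounded distinction concrete, here’s a small sketch (the utility functions and the gamble are my own illustrative assumptions, not anything from the original question). A linear, unbounded utility happily takes a long-shot gamble on an enormous prize, while a bounded utility saturates and barely registers it:

```python
import math

def linear_utility(x):
    # Unbounded: utility grows without limit as the prize grows.
    return x

def bounded_utility(x, scale=10.0):
    # Bounded above by 1: prizes much larger than `scale` add almost nothing.
    return 1.0 - math.exp(-x / scale)

def expected_utility(prize, win_prob, utility):
    # Two-outcome gamble: win `prize` with probability `win_prob`, else get nothing.
    return win_prob * utility(prize) + (1.0 - win_prob) * utility(0.0)

# A long-shot gamble: tiny win probability, enormous prize.
prize, win_prob = 1e9, 1e-6

print(expected_utility(prize, win_prob, linear_utility))   # ~1000: beats a sure 100 for the linear agent
print(expected_utility(prize, win_prob, bounded_utility))  # ~1e-6: nearly worthless to the bounded agent
print(bounded_utility(100.0))                              # ~0.99995: a modest sure prize is already near the bound
```

The point is just that “1 utility” means very different things to these two agents, which is why the bounded-utility objection and the “decision theory should still handle unbounded agents” reply talk past each other unless the units are pinned down.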
Sure, but the question was “what n would you choose”, not “what n would an arbitrary decision-making agent choose”.
Imagine you’re a paperclipper: it’s how many paperclips will be created.
For something more prone to failure but easier for some to imagine: imagine the units are sealed boxes, each containing a few thousand unique people having different and meaningful fun together for eternity.
Not necessarily. The relationship between clips and utility is positive, not necessarily linear.
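As a quick sketch of “positive but not necessarily linear” (log utility here is just a stand-in I picked, not anything claimed in the thread): more paperclips is always better, but each extra clip is worth less than the last.

```python
import math

def clip_utility(paperclips):
    # Positive and strictly increasing in paperclips, but concave:
    # each additional paperclip adds less utility than the one before.
    return math.log1p(paperclips)

for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, round(clip_utility(n), 2))
# 1000        6.91
# 1000000     13.82
# 1000000000  20.72  -> each thousandfold increase in clips adds roughly the same ~6.9 utility
```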
Thanks, this is better.
One approach would be to figure out the magnitude of the implicit risks that I take all the time. E.g. if a friend offers me a car ride that will save me 15 minutes over taking the train, I tend to accept, even though death rates for car rides are higher than for regional trains. While I don’t assign death infinite or even maximal negative value (there are obviously many things that would be worse than death), I would very much prefer to avoid it. Whatever the exact probability of dying when taking a car is, it’s low enough to fall below some cognitive cutoff for “irrelevant”. I would then pick the N that gives the highest expected value without having a probability so low that I would ignore it when assessing the risks of everyday life.
I’m not sure how good this approach is, but at least it’s consistent.
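Here’s a minimal sketch of that rule, under placeholder assumptions of mine: winning pays 2^N with probability q^N, and the “ignore it” cutoff is on the order of a per-trip driving fatality risk. Neither the payoff structure nor the numbers come from the original question; the point is only how the cutoff interacts with expected value.

```python
# A minimal sketch of the "everyday-risk cutoff" rule above. The gamble
# structure (winning pays 2**n with probability Q**n) and both constants are
# assumptions for illustration, not taken from the original question.

Q = 0.9                  # assumed per-step probability that the gamble pays off
EVERYDAY_CUTOFF = 1e-7   # assumed "small enough that I ignore it in daily life"

def win_probability(n: int) -> float:
    return Q ** n

def expected_value(n: int) -> float:
    return (2 ** n) * win_probability(n)

def choose_n(max_n: int = 10_000) -> int:
    # Discard any n whose chance of paying off is below the everyday cutoff,
    # then take the highest expected value among what remains.
    candidates = [n for n in range(1, max_n + 1)
                  if win_probability(n) >= EVERYDAY_CUTOFF]
    return max(candidates, key=expected_value)

best = choose_n()
print(best, win_probability(best), expected_value(best))
# With these assumed numbers, expected value keeps growing with n, so the
# everyday-risk cutoff is what actually determines the answer (n = 152 here).
```

If expected value instead peaked at some moderate n, the cutoff would never bind and the rule would reduce to plain expected-value maximization; it only changes the answer for gambles whose value keeps growing as the odds get worse.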