Imagine you’re a paperclipper: the payoff is how many paperclips will be created.
For something more failure-prone but easier for some to imagine: suppose they are sealed boxes, each containing a few thousand unique people having different and meaningful fun together for eternity.
Not necessarily. The relationship between clips and utility is positive, not necessarily linear.
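To make that concrete with one illustrative choice (mine, not anything stated above): the utility function over clip count $n$ could be increasing but concave, e.g.

$$u(n) = \log(1 + n), \qquad u'(n) = \frac{1}{1 + n} > 0,$$

which keeps the relationship positive while giving each additional clip less marginal value than the last, so maximizing expected clips and maximizing expected utility can come apart.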
Thanks, this is better.
One approach would be to figure out the magnitude of the implicit risks that I take all the time. E.g. if a friend offers me a car ride that will save me 15 minutes over taking a train, I tend to take the offer, even though death rates for car travel are higher than for regional trains. While I don’t assign death infinite or even maximal negative value (there are obviously many things that would be worse than death), I would very much prefer to avoid it. Whatever the exact probability of dying on a car trip is, it’s low enough that it falls under some cognitive cutoff for “irrelevant”. I would then pick the N that gives the highest expected value while not having a probability so low that I would ignore it when assessing the risks of everyday life.
I’m not sure how good this approach is, but at least it’s consistent.
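A minimal sketch of that rule in Python, with made-up numbers (the cutoff value and the candidate bets are illustrative assumptions, not anything specified above):

```python
# Sketch of the proposed rule: among candidate bets, ignore any whose
# probability falls below the "everyday risk" cutoff, then take the
# highest expected value among the rest.

# Rough stand-in for the per-trip risk of dying in a car (an assumption
# for illustration; the comment leaves the actual number open).
EVERYDAY_RISK_CUTOFF = 1e-7

# Hypothetical (probability of payout, payout if it happens) pairs,
# one per candidate N.
candidates = [
    (0.5, 10),       # modest N, coin-flip odds
    (1e-6, 1e7),     # large N, unlikely but above the cutoff
    (1e-12, 1e15),   # huge N, below the cutoff, so ignored
]

def pick_bet(options, cutoff=EVERYDAY_RISK_CUTOFF):
    """Return the highest-expected-value option whose probability is at
    least the everyday-risk cutoff, or None if none qualifies."""
    viable = [(p, v) for p, v in options if p >= cutoff]
    if not viable:
        return None
    return max(viable, key=lambda pv: pv[0] * pv[1])

print(pick_bet(candidates))  # -> (1e-06, 10000000.0), expected value 10
```

On these numbers the third bet has the highest raw expected value (1000), but its probability sits below the everyday-risk cutoff, so the rule discards it and takes the second bet instead.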