*blinks*
I didn’t realize the Lifespan Dilemma was a cognitive hazard. How much freakout are we talking about here?
I thought of this dilemma while I was trying to sleep and then found it impossible to fall asleep, as I couldn’t stop thinking about it. For the rest of the day I had trouble doing anything because I couldn’t stop worrying about it.
I think the problem might be that most people seem to feel safe when discussing these sorts of dilemmas: they’re thinking about them in Far Mode and just consider them interesting intellectual toys. I used to be like that, but in the past couple of years something has changed. Now when I consider a dilemma I feel like I’m in actual danger; I feel the sort of mental anguish you’d feel if you actually had to make that choice in real life. I feel as if I had actually been offered the Lifespan Dilemma and really do have to choose whether to accept it or not.
I wouldn’t worry about the Lifespan Dilemma affecting most people this way. My family has a history of Obsessive-Compulsive Disorder, and I’m starting to suspect that I’ve developed the purely obsessional variety. In particular, my freakouts match the “religiosity” type of POOCD, except that since I’m an atheist I worry about philosophical and scientific problems rather than religious ones. Other things I’ve freaked out about include:
- Population ethics
- Metaethics
- That maybe various things I enjoy doing are actually as valueless as paperclipping or cheesecaking.
- That maybe I secretly have simple values and want to be wireheaded, even though I know I don’t want to be.
- Malthusian brain emulators
These freakouts are always about some big, abstract philosophical issue; they are never about anything in my normal day-to-day life. Generally I obsess about one of these things for a few days until I reach some sort of resolution about it. Then I behave normally for a few weeks until I find something new to freak out over. It’s very frustrating because I have a very high happiness set point when I’m not in one of these funks.
Okay, that sounds like it wasn’t primarily the fault of the Lifespan Dilemma as such (and it also doesn’t sound too far from the amount of sleep I lose when nerdsniped by a fascinating new mathematical concept I can’t quite grasp, like Jervell’s ordinal notation).
Look. Simple utilitarianism doesn’t have to be correct; it looks like a wrong idea to me. Often, when reasoning informally, people confabulate wrong formal-sounding constructs that loosely match their intuitions, and then declare them normative.
Is a library made up of copies of a single book worth the same to you as a library of distinct books? Is a library of books by one author worth as much? Does variety ever truly count for nothing? There’s no reason why u(“AB”) should equal u(“A”) + u(“B”). People pick + because they are bad at math, or perhaps bad at knowing when they are being bad at math. Edit: when you try to mathematize your morality, poor knowledge of math serves as Orwellian newspeak; it defines the way you think. It is hard to choose the correct function even if there is one, and years of practice on overly simple problems make the wrong functions pop into your head.
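A toy sketch of that point, with entirely made-up numbers, just to show that nothing forces additivity: a utility function over a library that rewards distinct titles but gives extra copies of the same title much less.

```python
from collections import Counter

# Toy, purely illustrative utility over a library of books: each distinct
# title is worth 1.0, and every extra copy of an already-owned title adds
# only 0.1. Nothing forces u("AB") == u("A") + u("B"); this u deliberately
# rewards variety.

def u(library: list[str]) -> float:
    counts = Counter(library)
    return sum(1.0 + 0.1 * (n - 1) for n in counts.values())

print(u(["A"]) + u(["B"]))  # 2.0, the additive guess
print(u(["A", "B"]))        # 2.0, happens to agree when the titles differ
print(u(["A", "A"]))        # 1.1, but a shelf of duplicates is worth far less
```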
The Lifespan Dilemma applies to any unbounded utility function combined with expected value maximization; it does not require simple utilitarianism.
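A minimal sketch of that garden path, using the 80% starting probability from this thread together with an assumed million-fold utility multiplier and a one-percentage-point probability cut per offer (illustrative numbers, not the post’s exact ones): every accepted offer raises expected utility, yet the chain drives the success probability down.

```python
import math

# Garden path with an unbounded utility function and expected value
# maximization. Illustrative numbers: each offer multiplies the prize's
# utility a million-fold while shaving one percentage point off the
# success probability (echoing the 80% -> 79% trade discussed above).

p = 0.80
log10_utility = 6.0   # start: a prize "worth" 10**6 utility
steps = 0

while p - 0.01 > 0:
    current = math.log10(p) + log10_utility            # log10 of current EU
    offered = math.log10(p - 0.01) + log10_utility + 6  # log10 of offered EU
    if offered <= current:
        break
    p -= 0.01
    log10_utility += 6
    steps += 1

print(f"accepted {steps} offers; success probability now {p:.2f}, "
      f"log10(expected utility) = {math.log10(p) + log10_utility:.1f}")
# With these toy numbers the maximizer only stops once the probability has
# been driven down to about 1%; with smaller probability cuts per step it
# would be driven arbitrarily close to zero.
```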
Interestingly, I discovered the Lifespan Dilemma through this post. While it didn’t cause a total breakdown of my ability to do anything else, it did consume an inordinate amount of my thought process.
The question looks like an optimal betting problem: you have a limited resource and need to get the most return. According to the Kelly Criterion, the optimal fraction of your total bankroll to bet is f* = (bp - q)/b, where p is the probability of success, q = 1 - p, and b is the net return per unit risked. The interesting thing here is that for very large values of b, the fraction of bankroll to be risked almost exactly equals the probability of winning, since f* = p - (1 - p)/b. Assuming a bankroll of 100 units and a 20 percent chance of success, you should bet essentially the same amount whether b is 1 million or 1 trillion: 20 units.
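As a quick sanity check on that convergence, here is a short sketch plugging the comment’s numbers into the Kelly formula above:

```python
# Kelly fraction for the numbers in the comment above:
# f* = (b*p - q) / b, with p = probability of success, q = 1 - p,
# and b = net return per unit risked.

def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to bet."""
    q = 1.0 - p
    return (b * p - q) / b

bankroll = 100  # units
p = 0.20        # 20 percent chance of success

for b in (1e6, 1e12):
    f = kelly_fraction(p, b)
    print(f"b = {b:.0e}: bet {f:.6f} of bankroll = {f * bankroll:.4f} units")

# Both cases come out at essentially 20 units: for large b the Kelly
# fraction f* = p - (1 - p)/b is dominated by p.
```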
Eager to apply this to the problem at hand, I decided to plug in the numbers, and then realized I didn’t know what the bankroll was in this situation. My first thought was that the bankroll was the expected time left: the chance of success times the time gained if successful. I think this is the framing that leads down the garden path: every time you increase the lifespan you get if successful, it feels like you have more units to bet with, which makes you willing to spend more on longer odds.
Not satisfied, I attempted to reframe the question in terms of money. Stated that way: I have $100, and in 2 hours I will have either $0 or $1 million, with an 80% chance of winning. I could trade my 80% chance for a 79% chance of winning $1 trillion. So, now that we are talking about money, where is my bankroll?
I believe that is the trick: in this question, you are already all in. You have already bet 100% of your bankroll for an 80% chance of winning, and in 2 hours you will know the outcome of your bet. For extremely high values of b, you should only have bet about 80% of your bankroll, so you are already over-bet. Here is the key point: changing the value of b does not change what you should have bet, or even your bet at all; that’s locked in. All you can change is the probability, and you can only make it worse. From this perspective, you should accept no offer that lowers your probability of winning.
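A short sketch of that point, reusing the money framing from the previous comment (the dollar figures are just the ones quoted above):

```python
# Compare the Kelly-recommended bet for the three deals in the money framing:
# a $100 stake already committed, an 80% chance of $1M, and the offered
# trade down to 79% for $1T.

def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll: f* = (b*p - q) / b."""
    q = 1.0 - p
    return (b * p - q) / b

stake = 100.0
cases = {
    "80% chance of $1M": (0.80, (1e6 - stake) / stake),
    "80% chance of $1T": (0.80, (1e12 - stake) / stake),
    "79% chance of $1T": (0.79, (1e12 - stake) / stake),
}

for label, (p, b) in cases.items():
    print(f"{label}: Kelly says bet {kelly_fraction(p, b):.10f} of bankroll")

# Raising b from roughly 10**4 to 10**10 barely moves the recommended
# fraction (it creeps from about 0.7999 toward 0.8), while dropping p from
# 0.80 to 0.79 lowers it by a full percentage point. And in the dilemma you
# are already all in at 1.0, which Kelly never recommends.
```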