Effects on friends/family and the possibility of an accident which leaves you crippled but not dead are objections that exist even without worrying about the domain of your utility function.
The objection that worlds where you die are worse doesn’t seem to apply to cryonics, since “you die” is the default state. I think most people’s objections to cryonics are psychological, and for me at least the thought “I am almost certain I will wake up in a better future” helps overcome that barrier.
… and the possibility of an accident which leaves you crippled but not dead are objections that exist even without worrying about the domain of your utility function
Wait wait wait...
What objections, why are “objections” an interesting topic? Understand what’s actually going on instead. If there is no reason to privilege the worlds where you survive, the whole structure of reasoning about these situations is different, so the domain of your utility function is not “an additional argument to worry about”, it’s a central consideration that fundamentally changes the structure of the problem.
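To make the structural difference concrete (a toy formalization, with notation introduced here purely for illustration): consider a quantum lottery that leaves you alive and a winner with probability p, and dead with probability 1 − p, where winning is worth u_win. If your utility function is only defined over worlds where you survive, the dead branches drop out of the evaluation entirely:

\[
EU_{\text{survivor-only}}(\text{play}) = u_{\text{win}},
\]

so playing looks good no matter how small p is. If the domain is all worlds, the dead branches count too:

\[
EU_{\text{all worlds}}(\text{play}) = p\,u_{\text{win}} + (1 - p)\,u_{\text{dead}},
\]

and for small p the lottery is a terrible bet. The comparison itself changes with the domain; no list of “objections” layered on top captures that.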
I think most people’s objections to cryonics are psychological, and for me at least the thought “I am almost certain I will wake up in a better future” helps overcome that barrier.
Don’t you dare use self-deception to convince yourself of something you suspect is true! That way lies madness (normal human insanity, that is). If you start believing that you’re almost certain to wake up in a better future for the reason that it makes you believe in cryonics, and not because it’s true (if it’s true), that belief won’t mean what it claims it means, and opting in for cryonics won’t become any better.
When I first read about quantum lotteries, my reasons for rejecting the idea were the ones above (family, accident). Those were sufficient for me to reject it, and there’s no point in pretending I had better arguments written above my bottom line. That said, I now see your point about how the domain of my utility function changes the problem, and I have edited the article accordingly. I don’t think I had fully internalized the domain-of-utility-function concept. Thank you.
Don’t you dare use self-deception to convince yourself of something you suspect is true!
When I decide to do something, I visualize it succeeding. This is the only way I know of to motivate myself. I appreciate your concerns about tricking myself and I wrote this question in an attempt to discover whether “I’m almost certain to wake up in a better future” actually is true. But if it is, I’m going to go on thinking about it.
When I decide to do something, I visualize it succeeding. This is the only way I know of to motivate myself.
This method won’t allow you to successfully respond to risks, pursue risky strategies, or act impersonally without deceiving yourself (and since you likely can do these things already, what you described in the words I quoted is not the real problem, or in any case not as severe a problem as you say).
Learn to feel expected utility, to be motivated by correctness of a decision (which in turn derives from consequentialist considerations), rather than by confidently anticipated personal experience.
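Concretely (a minimal formalization, again with notation introduced here rather than taken from the thread): the correctness of a decision a is its expected utility,

\[
EU(a) = \sum_{w} P(w \mid a)\, U(w), \qquad a^{*} = \arg\max_{a} EU(a),
\]

where the sum runs over all outcome-worlds w, including the ones in which you don’t survive. Anticipated personal experience only tracks the surviving terms; the decision criterion weighs them all.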
“Feeling Rational” was one of the most valuable articles on LessWrong, for me. But the way I’ve implemented it in my life is along the lines of:
1) Determine the correct course of action via consequentialist considerations
2) Think happy thoughts that will make me as excited about donating my money to an optimal charity online as I previously felt about reading to underprivileged children at the local library.
I’ve always thought of this more along the lines of “forcibly bringing my feelings in line with optimal actions” than as self-deceit.
So in this case, I did some research and considered expected utility and decided signing up for cryonics made sense. But I don’t feel (as some have reported feeling) like I’ve chosen to save my life, or like I’m one of the few sane people in a crazy world. Instead I feel “I’m probably wrong and in ten years I’m really going to regret wasting my money on this.”
When this idea occurred to me, suddenly cryonics felt worth it on an emotional level as well as on a rational level. I could reasonably imagine a future worth living in, and a shot at making it there. Visualizing waking up doesn’t change the expected utility calculations, but it seemed to bring my intuitions in line with the numbers. So I asked if it made sense or if I was making a mistake. The answer, it seems, is that I was making a mistake, and I appreciate your help in figuring that out. But I don’t think my thought process was exceptionally irrational or dangerous.
I wrote this question in an attempt to discover whether “I’m almost certain to wake up in a better future” actually is true. But if it is, I’m going to go on thinking about it.
Manfred’s point, in conjunction with wedrifid’s explanation, shows that it’s either false (including for the reasons you listed) or true in a trivial sense that shouldn’t move you.