Your preference already feels “obviously wrong” to me, and I’ll try to explain why. If we imagine that only one world exists, but we don’t know how it will evolve, I wouldn’t take the analogue of your lottery ticket example, and I suspect that you wouldn’t either. The reason I wouldn’t do this is that I care about the possible future worlds where I would die, despite the fact that I wouldn’t exist there (not for very long, anyway). I’m not sure what other reason there would be to reject this bet in the single-world case. However, you are saying that you don’t care about the actual future worlds where you die in the many-worlds case, which seems bizarre and inconsistent with what I imagine your preferences would be in the single-world case. It’s possible that I’m wrong about what your preferences would be in the single-world case, but then you’re acting according to the Born rule anyway, and whether the MWI is true doesn’t enter into it.
(EDIT: that last sentence is wrong, you aren’t acting according to the Born rule anyway.)
Regarding my point about discontinuity, it’s worth noting that to know whether x = 0 or x > 0, you need infinitely precise knowledge of the wave function. It strikes me as unreasonable and off-putting that no finite amount of information about the state of the universe can discern between one universe which you think is totally fantastic and another universe which you think is terrible and awful. That being said, I can imagine someone being unpersuaded by this argument. If you are willing to accept discontinuity, then you get a theory where you are still maximising expected utility with respect to the Born rule, but your utilities can be infinite or infinitesimal.
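To make the discontinuity concrete, here is a minimal sketch in Python (the utility numbers are arbitrary placeholders of my own choosing) contrasting a survival-conditioned valuation, which jumps as soon as x > 0, with a Born-weighted expected utility, which varies continuously in x:

```python
# Illustrative only: the utility numbers below are arbitrary placeholders.
U_SURVIVE = 100.0    # value of a branch where you survive and collect the prize
U_DIE = -1000.0      # value of a branch where you die

def value_conditional_on_survival(x):
    """Valuation that ignores death branches: any nonzero surviving
    weight x counts as fully fantastic."""
    return U_SURVIVE if x > 0 else U_DIE

def value_born_weighted(x):
    """Ordinary Born-rule expected utility: weight branches by x and 1 - x."""
    return x * U_SURVIVE + (1 - x) * U_DIE

for x in [0.0, 1e-30, 1e-10, 0.5]:
    print(x, value_conditional_on_survival(x), value_born_weighted(x))
# The conditional valuation jumps from -1000 to 100 the instant x exceeds 0,
# however slightly; the Born-weighted valuation changes continuously with x.
```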
On a slightly different note, I would highly recommend reading the paper which I linked (most of which I think is comprehensible without a huge amount of technical background), which motivates the axioms you need for the Born rule to work and argues against other decision rules.
EDIT: Also, I’m sorry about the “sort of thing which is liable to lead to crazy behaviour” thing, it was a long comment and my computer had already crashed once in the middle of composing it, so I really didn’t want to write more.
I downloaded the paper you linked to and will read it shortly. I’m totally sympathetic to the “didn’t want to make a long comment longer” excuse, having felt that way many times myself.
I agree that in the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself, who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can’t know for sure that I live in a multiverse, which is one of the reasons I’m still alive in your world (the main reason being that it’s not practical for me right now, and I’m not really confident enough to bother researching and setting something like that up). However, you also don’t know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I’d say those things are far more rational in a multiverse, anyway, but even people who believe in a single world still do these things.)
Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.
I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses. You don’t need to know for sure that x>0 (as you can’t know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it. That said, I don’t think the question of my subjective expectation of no longer existing is well-defined, because I don’t have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough that they don’t feel anything, then my subjective expectation is 100% to survive. That’s different from the torture case you mentioned above, where I expect to survive and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don’t want to argue over until I hash out my viewpoint more. (Incidentally, it seemed to me that Eliezer agrees with me at least partly, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:
“What would happen if the Dust won?” asked the hero. “Would the whole world be destroyed in a single breath?”
Aerhien’s brow quirked ever so slightly. “No,” she said serenely. Then, because the question was strange enough to demand a longer answer: “The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction.”
The hero flinched, then bowed his head. “I suppose that was too much to hope for; there wasn’t really any reason to hope, except hope… it’s not required by the logic of the situation, alas...”
I interpreted that as saying that you can only rely on the anthropic principle (and super quantum psychic powers) if you die without pain.)
I’m actually planning to write a post about Big Worlds, anthropics, and some other topics, but I’ve got other things to do and keep putting it off. Eventually. I’d ideally like to finish some anthropics books and papers, including Bostrom’s, first.
Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous.
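Here is a rough sketch in Python (the oracle and the query budget are my own illustrative choices) of the standard reason computable functions of a real input are continuous: a program can only inspect its input to finite precision before it halts, so it can never tell x = 0 apart from every sufficiently small x > 0.

```python
from fractions import Fraction

def approx(x, n):
    """Oracle access to the real x: a rational within 2**-n of x."""
    return Fraction(round(x * 2**n), 2**n)

def try_to_certify_positive(x, max_queries=50):
    """Attempt to certify x > 0 from finite-precision queries only."""
    for n in range(1, max_queries + 1):
        a = approx(x, n)
        if a - Fraction(1, 2**n) > 0:  # witness: x >= a - 2**-n > 0
            return ("positive", n)
    return ("no verdict", max_queries)

print(try_to_certify_positive(0.25))     # certified after a few queries
print(try_to_certify_positive(0.0))      # no finite budget ever certifies this
print(try_to_certify_positive(2**-60))   # positive, but indistinguishable from
                                         # 0 within the 50-query budget
```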
Also—what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!
In the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability.
Here’s the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the probability of you dying were only 20%, I imagine you still wouldn’t take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don’t exist—not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm and weigh them negatively. By analogy, I think you should care about branches where you die—not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm and weigh them negatively.
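To spell out what I mean by counting those futures and weighing them negatively, here is a minimal sketch (in Python, with placeholder numbers I made up) of the bet under the two decision rules; the only point is that dropping the death branches flips the verdict on an otherwise obviously bad bet:

```python
# Placeholder utilities; nothing here is anyone's actual utility function.
P_DIE = 0.2           # probability (or Born weight) of the branch where you die
U_ALIVE = 0.0         # baseline: alive, no reward
U_ALIVE_PLUS = 1.0    # alive with a few dollars (worth a little)
U_DEAD = -1_000_000.0 # dead, weighted very negatively

# Rule 1: ordinary expected utility -- death branches count, with weight P_DIE.
eu_take = (1 - P_DIE) * U_ALIVE_PLUS + P_DIE * U_DEAD
eu_pass = U_ALIVE
print("expected-utility rule:", "take" if eu_take > eu_pass else "pass")   # pass

# Rule 2: condition on survival -- death branches are dropped entirely.
cond_take = U_ALIVE_PLUS   # every future you count contains the reward
cond_pass = U_ALIVE
print("survival-conditioned rule:", "take" if cond_take > cond_pass else "pass")  # take
```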
Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.
I’m not sure what you can mean by this comment, especially “the whole problem”. My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).
I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses.
… I promise that you aren’t going to be able to perform a test on a qubit α|0⟩ + β|1⟩ that you can expect to tell you with 100% certainty that α = 0, even if you have multiple identical qubits.
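As a toy illustration (with an arbitrarily chosen tiny amplitude) of why finitely many measurements can’t settle this:

```python
# Sketch: if |alpha|^2 is tiny but nonzero, measuring N identical qubits will
# almost surely give |1> every time -- exactly what alpha = 0 would give.
alpha_sq = 1e-12   # |alpha|^2, the Born probability of the |0> outcome
for n_qubits in (10, 1_000, 1_000_000):
    p_all_ones = (1 - alpha_sq) ** n_qubits
    print(n_qubits, p_all_ones)
# Even with a million qubits the chance of seeing nothing but |1> is ~0.999999,
# so no finite data set can certify that alpha is exactly 0.
```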
You don’t need to know for sure that x>0 (as you can’t know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
This wasn’t my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it may as well not be, because arbitrary amounts of knowledge about the physical state of the universe still can’t distinguish between types of universes which you value very differently. This intuitively seems irrational and crazy to me in and of itself, but YMMV.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it.
I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. You can then contemplate what will happen next in your pleasant state, and if my arguments go through, this would mean that your original decision was bad. So, we have a situation where you used to prefer taking the bet to not taking the bet, but when we made the bet sweeter, you now prefer not taking the bet. This seems irrational.
That said, I don’t think the question of my subjective expectation of no longer existing is well-defined, because I don’t have a subjective experience if I no longer exist.
I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn’t be having experiences if I were dead. I think the problem here is that you are confusing my invocation of subjective probabilities (held while you’re pondering what will happen next in your branch) about what will objectively happen next with a statement about subjective experiences later.
I think we do have some more fundamental disagreement about anthropics, which I don’t want to argue over until I hash out my viewpoint more.
I would be interested in reading your viewpoints about anthropics, should you publish them. That being said, given that you don’t take the suicide bet in the single-world case, I think that we probably don’t.