Assuming for a moment that Everett’s interpretation is correct, there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher, IIRC—there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.
This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast number of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it’s just not a big deal in an Everett multiverse?
(There’s probably a lot that I’ve missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?
Not really. If you’re in a suboptimal branch, but still doing better than if you didn’t exist at all, then you aren’t making the world better off by self-destructing regardless of whether other branches exist.
Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.
It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn’t important for this particular discussion) of branches where everything is stellar—just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn’t so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1⁄2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1⁄2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.
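To put the same point in symbols (a minimal sketch, with hypothetical utilities $U_{\text{good}}$, $U_{\text{poor}}$ and $U_{\text{dead}}$ standing in for how good each outcome is): with two equal-measure branches, the measure-weighted value before self-destruction is
$$\tfrac{1}{2}\,U_{\text{good}} + \tfrac{1}{2}\,U_{\text{poor}},$$
and after the poorly-faring branch self-destructs it is
$$\tfrac{1}{2}\,U_{\text{good}} + \tfrac{1}{2}\,U_{\text{dead}}.$$
The measure of the stellar branch is $\tfrac{1}{2}$ in both cases; only the conditional proportion $\Pr(\text{stellar} \mid \text{alive})$ jumps from $\tfrac{1}{2}$ to $1$.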
Thanks! Ah, I’m probably just typical-minding like there’s no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you “want to keep living”, you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition you will probably want a low probability for this future to include significant suffering). Likewise, if you say you “want to see humanity flourish indefinitely”, you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering).
To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.* Instead, what matters overwhelmingly more is the probability of any given copy living a high quality life.
just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive
Yes, this is obvious of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch ⇔ rendering the branch irrelevant, pretty much.
which isn’t so important.
To me it did feel like this is obviously what’s important, and the branches where you don’t exist simply don’t matter—there’s no one there to observe anything after all, or judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).
If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not have thought I am committing a moral atrocity. I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people—no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer cure button.
*Which is one main difference when comparing this to regular old population ethics, I suppose.
To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.
As it happens, you totally can (it’s called the Born measure, and it’s the same number as what people used to think was the probabilities of different branches occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure—see this paper for the details.
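For concreteness, here is a minimal sketch of what weighting by the Born measure amounts to (the amplitudes $\alpha_i$ and utilities $U_i$ are just illustrative labels, not the paper’s notation): if the post-decision state is $\sum_i \alpha_i\,|\text{branch}_i\rangle$, the Born measure of branch $i$ is $|\alpha_i|^2$, and the decision-theoretic result says a suitably rational agent ranks actions by
$$\mathbb{E}[U] = \sum_i |\alpha_i|^2\, U_i,$$
i.e. exactly the expected-utility formula you would use if the $|\alpha_i|^2$ were ordinary probabilities of single-world outcomes.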
I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people—no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations.
This is a good place to strengthen intuition, since if you replace “killing myself” with “torturing myself”, it’s still true that none of your future selves who remain alive/untortured “would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations”. If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life—but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life—but you also get killed (which is presumably a bad thing even or especially if everybody else also dies).
One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren’t continuous as a function of reality. You’re saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn’t actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature you being alive (since don’t forget, MWI looks like a superposition of waves, not a collection of separate universes). This is the sort of thing which is liable to lead to crazy behaviour.
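Spelled out as a formula (a sketch, with $x$ the measure of branches where you are alive and $c > b$ placeholder values): the stated preference corresponds to a value function like
$$V(x) = \begin{cases} c & x > 0 \\ b & x = 0, \end{cases}$$
which is constant on $(0, 1]$ and then drops at $x = 0$; for every $\epsilon > 0$, however tiny, $V(\epsilon) = c$ while $V(0) = b$, so no continuous function can represent this preference.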
I’m sorry, but “sort of thing which is liable to lead to crazy behaviour” won’t cut it. Could you give an example of crazy behaviour with this preference ordering? I still think this approach (not counting measure as long as some of me exists) feels right and is what I want. I’m not too worried about discontinuity at only x=0 (and if you look at larger multiverses, x probably never equals 0.)
To argue over a specific example: if I set up something that chooses a number randomly with quantum noise, then buys a lottery ticket, then kills me (in my sleep) only if the ticket doesn’t win, then I assign positive utility to turning the machine on. (Assuming I don’t give a damn about the rest of the world who will have to manage without me.) Can you turn this into either an incoherent preference, or an obviously wrong preference?
(Personally, I’ve thought about the TDT argument for not doing that; because you don’t want everyone else to do it and create worlds in which only 1 person who would do it is left in each, but I’m not convinced that there are a significant number of people who would follow my decision on this. If I ever meet someone like that, I might team up with them to ensure we’d both end up in the same world. I haven’t seen any analysis of TDT/anthropics applied to this problem, perhaps because other people care more about the world?)
Another way to look at it is this: imagine you wake up after the bet, and don’t yet know whether you are going to quickly be killed or whether you are about to receive a large cash prize. It turns out that your subjective credence for which branch you are in is given by the Born measure. Therefore (assuming that not taking the bet maximises expected utility in the single-world case), you’re going to wish that you hadn’t taken the bet immediately after taking it, without learning anything new or changing your mind about anything. Thus, your preferences as stated either involve weird time inconsistencies, or care about whether there’s a tiny sliver of time between the worlds branching off and being killed. At any rate, in any practical situation, that tiny sliver of time is going to exist, so if you don’t want to immediately regret your decision, you should maximise expected utility with respect to the Born measure, and not discount worlds where you die.
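As a sketch of that regret calculation (placeholder utilities, with $p$ the Born measure of the losing branches): before the bet, the stated preference discounts the branches where you die, so taking the bet is valued at roughly $U(\text{win})$; immediately after the branching, your subjective credence that you are in a losing branch is $p$, so your evaluation becomes
$$(1 - p)\,U(\text{win}) + p\,U(\text{about to be killed}),$$
which is just the single-world expected-utility calculation, and by assumption that calculation says the bet was a mistake.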
Your preference already feels “obviously wrong” to me, and I’ll try to explain why. If we imagine that only one world exists, but we don’t know how it will evolve, I wouldn’t take the analogue of your lottery ticket example, and I suspect that you wouldn’t either. The reason that I wouldn’t do this is because I care about the possible future worlds where I would die, despite the fact that I wouldn’t exist there (at least not for very long). I’m not sure what other reason there would be to reject this bet in the single-world case. However, you are saying that you don’t care about the actual future worlds where you die in the many-worlds case, which seems bizarre and inconsistent with what I imagine your preferences would be in the single-world case. It’s possible that I’m wrong about what your preferences would be in the single-world case, but then you’re acting according to the Born rule anyway, and whether the MWI is true doesn’t enter into it.
(EDIT: that last sentence is wrong, you aren’t acting according to the Born rule anyway.)
Regarding my point about discontinuity, it’s worth noting that to know whether x = 0 or x > 0, you need infinitely precise knowledge of the wave function. It strikes me as unreasonable and off-putting that no finite amount of information about the state of the universe can discern between one universe which you think is totally fantastic and another universe which you think is terrible and awful. That being said, I can imagine someone being unpersuaded by this argument. If you are willing to accept discontinuity, then you get a theory where you are still maximising expected utility with respect to the Born rule, but your utilities can be infinite or infinitesimal.
On a slightly different note, I would highly recommend reading the paper which I linked (most of which I think is comprehensible without a huge amount of technical background), which motivates the axioms you need for the Born rule to work, and undermines the motivation for other decision rules.
EDIT: Also, I’m sorry about the “sort of thing which is liable to lead to crazy behaviour” thing, it was a long comment and my computer had already crashed once in the middle of composing it, so I really didn’t want to write more.
I downloaded the paper you linked to and will read it shortly. I’m totally sympathetic to the “didn’t want to make a long comment longer” excuse, having felt that way many times myself.
I agree that in the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can’t know for sure that I live in a multiverse, which is one of the reasons I’m still alive in your world (the main reason being it’s not practical for me right now, and I’m not really confident enough to bother researching and setting something like that up.) However, you also don’t know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I’d say those things are far more rational in a multiverse, anyway, but even people who believe in a single world still do these things.)
Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.
I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses. You don’t need to know for sure that x>0 (as you can’t know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it. That said, I don’t think the question of my subjective expectation of no longer existing is well-defined, because I don’t have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough not to have them feel anything, then my subjective expectation is 100% to survive. That’s different from the torture case you mentioned above, where I expect to survive, and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don’t want to argue over until I hash out my viewpoint more. (Incidentally, it seemed to me that Eliezer agrees with me at least partly, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:
“What would happen if the Dust won?” asked the hero. “Would the whole world be destroyed in a single breath?”
Aerhien’s brow quirked ever so slightly. “No,” she said serenely. Then, because the question was strange enough to demand a longer answer: “The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction.”
The hero flinched, then bowed his head. “I suppose that was too much to hope for; there wasn’t really any reason to hope, except hope… it’s not required by the logic of the situation, alas...”
I interpreted that as saying that you can only rely on the anthropic principle (and super quantum psychic powers), if you die without pain.)
I’m actually planning to write a post about Big Worlds, anthropics, and some other topics, but I’ve got other things and am continuously putting it off. Eventually. I’d ideally like to finish some anthropics books and papers, including Bostrom’s, first.
Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous.
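For reference, the standard computable-analysis fact being appealed to here (stated informally, and only as background): a function $U : \mathbb{R} \to \mathbb{R}$ is computable if a machine, given arbitrarily precise finite approximations of the input, can output $U$ of it to any requested precision; since each finite-precision output can only depend on a finite-precision reading of the input, every computable $U$ is continuous, so a utility function with a jump at $x = 0$ cannot be computable in this sense.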
Also—what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!
In the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability.
Here’s the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the probability of you dying were only 20% (and of surviving 80%), I imagine you still wouldn’t take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don’t exist—not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die—not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm, and weigh them negatively.
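A toy version of that arithmetic (the numbers are just placeholders, with $3 standing in for “a few dollars”): with a 0.2 chance of dying and a 0.8 chance of surviving plus the reward, taking the bet is worth
$$0.8\,U(\text{alive} + \$3) + 0.2\,U(\text{dead}),$$
and for any remotely normal valuation of staying alive the $0.2\,U(\text{dead})$ term dominates, so the bet is rejected even though survival is the likely outcome—which only makes sense if the futures where you don’t exist carry (negative) weight in the decision.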
Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.
I’m not sure what you can mean by this comment, especially “the whole problem”. My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).
I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses.
… I promise that you aren’t going to be able to perform a test on a qubit $\alpha|0\rangle + \beta|1\rangle$ that you can expect to tell you with 100% certainty that $\alpha = 0$, even if you have multiple identical qubits.
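To see why (a standard quantum-information point, stated schematically): measuring a qubit $\alpha|0\rangle + \beta|1\rangle$ in the computational basis yields $0$ with probability $|\alpha|^2$ and $1$ with probability $|\beta|^2$, so even if $n$ identically prepared copies all come up $1$, that only tells you $|\alpha|^2$ is probably smaller than roughly $1/n$; no finite number of measurements can certify that $\alpha$ is exactly $0$.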
You don’t need to know for sure that x>0 (as you can’t know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
This wasn’t my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it’s like it may as well not be, because arbitrary amounts of knowledge about the physical state of the universe still can’t distinguish between types of universes which you value very differently. This intuitively seems irrational and crazy to me in and of itself, but YMMV.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it.
I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. You can then contemplate what will happen next in your pleasant state, and if my arguments go through, this would mean that your original decision was bad. So, we have a situation where you used to prefer taking the bet to not taking the bet, but when we made the bet sweeter, you now prefer not taking the bet. This seems irrational.
That said, I don’t think the question of my subjective expectation of no longer existing is well-defined, because I don’t have a subjective experience if I no longer exist.
I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn’t be having experiences if I were dead. I think the problem here is that you are conflating subjective probabilities about what will objectively happen next (invoked while you’re pondering what happens next in your branch) with a statement about subjective experiences later.
I think we do have some more fundamental disagreement about anthropics, which I don’t want to argue over until I hash out my viewpoint more.
I would be interested in reading your viewpoints about anthropics, should you publish them. That being said, given that you don’t take the suicide bet in the single-world case, I think that we probably don’t.