Are people close to you aware that this is a reason that you advocate cryonics?
What cosmological assumptions? Assumptions related to identity, perhaps, as discussed here. But it seems to me that MWI essentially guarantees that for every observer-moment, there will always exist a “subsequent” one, and the same seems to apply to all levels of a Tegmark multiverse.
(I’m not convinced that the universe is large enough for patternism to actually imply subjective immortality.)
Why wouldn’t it be? That conclusion follows logically from many physical theories that are currently taken quite seriously.
I’m not willing to decipher your second question because this theme bothers me enough as it is, but I’ll just say that I’m amazed figuring this stuff out is not considered a higher priority by rationalists. If at some point someone can definitively tell me what to think about this, I’d be glad.
I guess we’ve had this discussion before, but: the difference between patternism and your version of subjective mortality is that in your version we nevertheless should not expect to exist indefinitely.
I feel like it’s rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).
You also can’t know if you’re in a simulation, a Big quantum world, a big cosmological world, or if you’re a reincarnation
But you can make estimates of the probabilities (EY’s estimate of the big quantum world part, for example, is very close to 1).
So really I just go with my gut and try to generally make decisions that I probably won’t think are stupid later given my current state of knowledge.
That just sounds pretty difficult, as my estimate of whether a decision is stupid or not may depend hugely on the assumptions I make about the world. In some cases, the decision that would be not-stupid in a big world scenario could be the complete opposite of what would make sense in a non-big world situation.
If you’re looking for what these probabilities tell us about the underlying “reality”
I am. It seems to me that if quantum mechanics is about probabilities, then those probabilities have to be about something: essentially, this seems to suggest either that the underlying reality is unknown, indicating that quantum mechanics needs to be modified somehow, or that QBism is more like an “interpretation of MWI”, where one chooses to care only about the one world she finds herself in.
Fortunately, Native American populations didn’t plummet because they were intentionally killed; they mostly did so because of diseases brought by Europeans.
Thanks for the tip. I suppose I actually used to be pretty good at not giving too many fucks. I’ve always cared about stuff like human rights, climate change and, more lately, AI risk, but I’ve never really lost much sleep over them. Basically, I think it would be nice if we solved those problems, but the idea that humanity might go extinct in the future doesn’t cause me too much of a headache in itself. The trouble is, I think, that I’ve lately begun to think that I may have a personal stake in this stuff, the point illustrated by the EY post that I linked to. See also my reply to moridinamael.
The part about not being excited about anything sounds very accurate and is certainly a part of the problem. I’ve also tried just taking up projects and focusing on them, but I should probably try harder as well.
However, a big part of the problem is that it’s not just that those things feel insignificant; it’s also that I have a vague feeling that I’m sort of putting my own well-being in jeopardy by doing that. As I said, I’m very confused about things like life, death and existence, on a personal level. How do I focus on mundane things when I’m confused about basic things such as whether I (or anyone else) should expect to eventually die or to experience a weird-ass form of subjective anthropic immortality, and about what that actually means? Should that make me act somehow?
I’m having trouble figuring out what to prioritize in my life. In principle, I have a pretty good idea of what I’d like to do: for a while I have considered doing a Ph.D. in a field that is not really high impact, but not entirely useless either, combining work that is interesting (to me personally) and hopefully a modest salary that I could donate to worthwhile causes.
But it often feels like this is not enough. Similar to what another user posted here a while ago, reading LessWrong and about effective altruism has made me feel like nothing except AI and maybe a few other existential risks is worth focusing on (not even things that I still consider to be enormously important relative to some others). In principle I could focus on those, as well. I’m not intelligent enough to do serious work on Friendly AI, but I probably could transition, relatively quickly, to working in machine learning and data science, with perhaps some opportunities to contribute and likely higher earnings.
The biggest problem, however, is that whenever I seem to be on track towards doing something useful and interesting, a monumental existential confusion kicks in and my productivity plummets. This is mostly related to thinking about life and death.
EY recently suggested that we should care about solving AGI alignment because of quantum immortality (or its cousins). This is a subject that has greatly troubled me for a long time. Thinking logically, big world immortality seems like an inescapable conclusion from some fairly basic assumptions. On the other hand, the whole idea feels completely absurd.
Having to take that seriously, even if I don’t believe in it 100 percent, has made it difficult for me to find joy in the things that I do. Combining big world immortality with other usual ideas regarding existential risks and so on that are prevalent in the LW memespace sort of suggests that the most likely outcome I (or anybody else) can expect in the long run is surviving indefinitely as the only remaining human, or nearly certainly as the only remaining person among those that I currently know. Probably in increasingly bad health as well.
It doesn’t help that I’ve never been that interested in living for a very long time, like most transhumanists seem to be. Sure, I think aging and death are problems that we should eventually solve, and in principle I don’t have anything against living for a significantly longer time than the average human lifespan, but it’s not something that I’ve been very interested in actively seeking, and if there’s a significant risk that those very many years would not be very comfortable, then I quickly lose interest. So the theories that sort of make this whole death business seem like an illusion are difficult for me. And overall, the idea does make the mundane things that I do now seem even more meaningless. Obviously, this is taking its toll on my relationships with other people as well.
This has also led me to approach related topics a lot less rationally than I probably should. Because of this, I think both my estimate of the severity of the UFAI problem and my estimate of our ability to solve it have gone up, as has my estimate of the likelihood that we’ll be able to beat aging in my lifetime—because those are things that seem to be necessary to escape the depressing conclusions I’ve pointed out.
I’m not good enough at fooling myself, though. As I said, my ability to concentrate on doing anything useful is very weak nowadays. It actually often feels easier to do something that I know is an outright waste of time but gives me something to think about, like watching YouTube, playing video games or drinking beer.
I would appreciate any input. Given how seriously people here take things like the simulation argument, the singularity or MWI, existential confusion cannot be that uncommon. How do people usually deal with this kind of stuff?
I’m certainly not an instrumentalist. But the argument that MWI supporters (and some critics, like Penrose) generally make, and which I’ve found persuasive, is that MWI is simply what you get if you take quantum mechanics at face value. Theories like GRW modify the well-established formalism in ways that, as far as I know, we have no empirical confirmation of.
Fair enough. I feel like I have a fairly good intuitive understanding of quantum mechanics, but it’s still almost entirely intuitive, and so is probably entirely inadequate beyond this point. But I’ve read speculations like this, and it sounds like things can get interesting: it’s just that it’s unclear to me how seriously we should take them at this stage, and also some of them take MWI as a starting point, too.
Regarding QBism, my idea of it is mostly based on a very short presentation of it by Rüdiger Schack at a panel, and the thing that confuses me is that if quantum mechanics is entirely about probability, then what do those probabilities tell us about?
I’m not sure what you mean by OR, but if it refers to Penrose’s interpretation (my guess, because it sounds like Orch-OR), then I believe that it indeed changes QM as a theory.
Guess I’ll have to read that paper and see how much of it I can understand. Just at a glance, it seems that in the end they propose that one of the modified theories, like the GRW interpretation, might be the right way forward. I guess that’s possible, but how seriously should we take those when we have no empirical reasons to prefer them?
If it doesn’t fundamentally change quantum mechanics as a theory, is the picture likely to turn out fundamentally different from MWI? Roger Penrose, a vocal MWI critic, seems to wholeheartedly agree that QM implies MWI; it’s just that he thinks that this means the theory is wrong. David Deutsch, I believe, has said that he’s not certain that quantum mechanics is correct; but any modification of the theory, according to him, is unlikely to do away with the parallel universes.
QBism, too, seems to me to essentially accept the MWI picture as the underlying ontology, but then says that we should only care about the worlds that we actually observe (Sean Carroll has presented criticism along these lines, and mentioned that it sounds more like therapy to him), although it could be that I’ve misunderstood something.
Do you think that we’re likely to find something in those directions that would give a reason to prefer some other interpretation than MWI?
It could be that reality has nasty things in mind for us that we can’t yet see and that we cannot affect in any way, and therefore I would be happier if I didn’t know of them in advance. Encountering a new idea like this that somebody has discovered is one of my constant worries when browsing this site.
Actually, I’m just interested. I’ve been wondering if big world immortality is a subject that would make people a) think that the speaker is nuts, b) freak out and possibly go nuts, or c) go nuts because they think the speaker is crazy; and whether or not it’s a bad idea to bring it up.