Evolution gave us a strong instinct not to die, but evolution also gave us the false impression that our progression through time resembles a line rather than a tree, and that there’s only one planet Earth. Knowing now that you are (the algorithm of) a tree, perhaps it is worth rethinking the dying=bad idea? Death, if used selectively, could mean a very happy (if less dense) tree.
If we live in a big world, this logic becomes very compelling. Who cares about killing 99% of yourself if you’re infinite anyway, and the upside is that you end up with an infinite amount of happiness rather than an infinite sad/happy mixture?
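To make the tradeoff concrete, here is a toy calculation (all numbers invented for illustration; "measure" is treated as a simple probability weight): conditional happiness goes up under the suicide policy, but only because the unhappy measure is discarded.

```python
# Toy model of the quantum-suicide tradeoff (numbers are made up).
# q = fraction of your branches (measure) that turn out happy.
q = 0.3

# No policy: all measure survives; average happiness across branches is q.
no_policy_measure, no_policy_happiness = 1.0, q

# Suicide policy: unhappy branches terminate themselves,
# so only a fraction q of your measure survives, but all of it is happy.
policy_measure, policy_happiness = q, 1.0

# Measure-weighted total happiness is identical either way;
# the policy only changes the conditional experience of survivors.
print(no_policy_measure * no_policy_happiness)  # 0.3
print(policy_measure * policy_happiness)        # 0.3
```

The point the thread goes on to debate is which of these quantities (surviving measure, or happiness conditional on survival) is the one worth caring about.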
I can’t tell if you’re playing devil’s advocate or not… Surely you’ve heard of the categorical imperative and can predict the radical decrease in the happiness density of the universe if that was the reasoning employed by all sapient beings.
To be precise, the argument would run that the universe will end up being dominated by beings that care more about their measure, and so there is a categorical imperative for happier beings to care more about their measure.
I’m not following. If all sapient beings applied this reasoning, only the most happy would decide not to die, and the happiness density would increase.
I’m thinking most intelligences would kill themselves a lot in this scenario, leading to a very empty universe for any particular one of them. The relevant density is “super happy entity per cubic parsec,” not “super happy entity per total entities.”
Consider, right now, if all members of some religion killed themselves unless their miracles started coming true. From the perspective of almost all the measure of non-members of the religion, it would look like a simple suicide cult.
Or imagine the LHC really could create a black hole and destroy the earth. Everyone votes on a low-probability positive event, and we trigger the LHC if it doesn’t happen. From the perspective of the measure of almost all the aliens in the universe (if they exist), our sun has a black hole orbiting at 93 million miles.
If this sort of process was constantly happening among all intelligent species on all planets, we’d be in an empty universe (well, one with a lot of little black holes, anyway). The probability of running into other intelligent life “post anthropic principle” would be their practically non-existent measure times our practically non-existent measure.
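A back-of-the-envelope sketch of that multiplication (the survival fractions and filter counts are invented; this is just the arithmetic of the argument):

```python
# Toy model: each civilization runs n quantum-suicide "filters" over its
# history, each keeping only a fraction p of its measure (numbers made up).
p = 0.01   # measure surviving each filter (e.g. betting on a 1-in-100 event)
n = 5      # number of filters run

our_measure = p ** n      # fraction of branches in which we still exist (1e-10)
their_measure = p ** n    # same for some other civilization

# Measure of branches containing BOTH civilizations at once:
meeting_measure = our_measure * their_measure
print(meeting_measure)    # about 1e-20: an effectively empty universe
```

Because the surviving measures multiply, even modest per-civilization filtering makes mutual contact astronomically unlikely in almost every branch.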
Something I’ve actually wondered about is whether the first replicating molecule with the evolutionary potential to generate intelligent life was radically unlikely (requiring a feat of quantum chicanery), and that’s why the universe appears empty to us. I don’t know of anyone who published this first, but I assume someone beat me to it because it often seems to me that all thinkable thoughts have generally been generated by someone else decades or centuries ago :-P
Huh.
That’s the most interesting explanation for the Fermi paradox in a while. (Not exactly plausible, mind you, but an interesting idea nevertheless.)
Sure, if everyone realized what a great idea quantum suicide was. But I think you can rest assured that that’s not going to happen. Assuming, that is, that it is actually a good idea…
Also, I don’t govern my actions with the categorical imperative. It works in some cases, but in general it is awful.
You have to assume that everyone will join in on this scheme, if you’re trying to argue in favor of it. If only a limited subset of people kill themselves when they’re unhappy, then that leaves a huge number of people mourning the (to them) meaningless death of their loved ones. You’d have to not only kill yourself, but also make sure that anyone who was hurt by your death died as well.
I was assuming that you were unconcerned with the sadness/mourning of those around you, or were prepared to make that tradeoff for some reason. (For example, egoism, or perhaps lack of friends/relations, or extreme need for the money)
And what’s wrong with this idea?
Huh. The Copenhagen interpretation of quantum mechanics isn’t pretty, but I’m not ready to die for it.