I don’t think we need to answer these questions to agree that many people would prefer to live longer than they currently are able. I have no idea what problems need to be solved to enable people to happily and productively live thousands of years, but I also have no reason to believe they’re insurmountable.
“I don’t think we need to answer these questions to agree that many people would prefer to live longer than they currently are able.”
Certainly, but that’s not the issue here. The issue here is immortality. Many transhumanists desire to live forever, literally. Myself included. In fact I believe that many people in general do. Extending the human lifespan to thousands of years would be a great victory already, but that doesn’t invalidate the search for true immortality, if people are interested in it, which I’m sure some are.
“I have no idea what problems need to be solved to enable people to happily and productively live thousands of years, but I also have no reason to believe they’re insurmountable.”
Once again you’re deviating from the question. I haven’t questioned the possibility of living a few thousand years, but of living forever, properly speaking.
This feels like a… strange framing to me.
The thing that appeals to me about “immortality” is not having to deal with the problem of my lifespan being limited.
Actual infinity/eternity is a really long time. I mean, I could live for a million (or billion, or trillion) years, and it still would be a small fraction of eternity. An infinitesimal fraction, in fact. Epsilon.
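To make the “Epsilon” remark precise: however large a finite lifespan $T$ is, its share of eternity vanishes in the limit,

$$\lim_{E \to \infty} \frac{T}{E} = 0 \quad \text{for any finite } T.$$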
Do I choose between being forced to exist forever, or to die after less than 100 years of existence? Neither. I’d like to have the option to keep living for as long as I want. I can see the scenario of even people living in utopia eventually growing tired/bored/unhappy with their existence. I can see the scenario of this never happening, a person choosing to truly live forever (one day/month/year at a time, not by pre-committing to living forever).
“Do I choose between being forced to exist forever, or to die after less than 100 years of existence? Neither. I’d like to have the option to keep living for as long as I want.”
I didn’t mean being forced to exist forever, or pre-committing to anything. I meant that I really do WANT to exist forever, yet I can’t see a way that it can work. That’s the dilemma that I mentioned: to die, ever, even after a gazillion years, feels horrible, because YOU will cease to exist, no matter after how much time. To never die feels just as horrible because I can’t see a way to remain sane after a very long time.
Who can guarantee that after x years you would feel satisfied and ready to die? I believe that as long as the brain remains healthy, it really doesn’t wanna die. And even if it could reach the state of accepting death, the current me just doesn’t ever wanna cease to exist. Doesn’t the idea of inevitably eventually ceasing to exist feel absolutely horrible to you?
Doesn’t the idea of inevitably eventually ceasing to exist feel absolutely horrible to you?
No. There is nothing I find inherently scary or unpleasant about nonexistence.
I’m just confused about the details of why that would happen. I mean, it would be sad if some future utopia didn’t have a better solution for insanity or for having too many memories, than nonexistence.
Insanity: Look at the algorithm of my mind and see how it’s malfunctioning? If nothing else works, revert my mindstate back a few months/years?
Memories: offload into long-term storage?
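Something like this toy sketch, purely illustrative; every name here (MindState, checkpoint, revert, offload) is made up:

```python
import copy

class MindState:
    """Toy stand-in for a mind's full state (hypothetical)."""
    def __init__(self):
        self.memories = []  # recent, high-detail memories
        self.sane = True

snapshots = []     # periodic mindstate backups, say one per month
cold_storage = []  # external long-term archive for offloaded memories

def checkpoint(mind):
    """Keep a deep copy of the current state."""
    snapshots.append(copy.deepcopy(mind))

def revert(n_back):
    """Restore the state as it was n_back checkpoints ago."""
    return copy.deepcopy(snapshots[-n_back])

def offload(mind, keep_last=100):
    """Move all but the most recent memories into long-term storage."""
    cold_storage.extend(mind.memories[:-keep_last])
    mind.memories = mind.memories[-keep_last:]

mind = MindState()
checkpoint(mind)
mind.sane = False  # malfunction detected...
mind = revert(1)   # ...roll back to the last healthy snapshot
assert mind.sane
```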
“No. There is nothing I find inherently scary or unpleasant about nonexistence.”
Would you agree that you’re perhaps a minority? That most people are scared/depressed about their own mortality?
“I’m just confused about the details of why that would happen. I mean, it would be sad if some future utopia didn’t have a better solution for insanity or for having too many memories, than nonexistence.
Insanity: Look at the algorithm of my mind and see how it’s malfunctioning? If nothing else works, revert my mindstate back a few months/years?
Memories: offload into long-term storage?”
On insanity, computationalism might be false. Consciousness might not be algorithmic. If it is, you’re right, it’s probably easy to deal with.
But I suspect that excess memories might always remain a problem. Is it really possible to off-load them while maintaining personal identity? That’s an open question in my view.
Especially when, like me, you don’t really buy into computationalism.
I think that it will vary from person to person. I’m the type of person that loves learning, and develops new interests all the time. I can’t imagine getting bored with life even after a few centuries. I’m forty-three years old. I can easily imagine three times my current lifetime.
I feel like my memory works similarly to how you describe. As time goes on, my memories become more abstract, with the freshest having the most detail. I don’t think that’s strictly necessary, but instead a limitation of our wetware.
This is just under the assumption that our brains will continue to function as they do now, which I don’t think is a great assumption. With augmented memory, we could have a complete personal record to access and augment our memory.
I’m sure that there will be people that choose to die, and others that choose to deadhead, and those that choose to live with only a rolling century of memories. None of those sound appealing to my imagination at this point.
“I can’t imagine getting bored with life even after a few centuries.”
Ok, but that’s not a lot of time, is it? Furthermore, this isn’t even a question of time. For me, no finite time is enough. It’s the mere fact of ceasing to exist. Isn’t it simply horrifying? Even if you live a million healthy years, no matter, the fact is that you will cease to exist one day. And then there will be two options. Either your brain will be “healthy” and therefore will dread death as much as it does now, or it will be “unhealthy” and welcome death to relieve its poor condition. To me both options seem horrifying—what matters more is that the present me wants to live forever with a healthy, sane brain, not one tired of life, not one with dementia or Alzheimer’s.
Like I said in my post, as time tends to infinity, so do memories, so there will inevitably come a time when the brain no longer holds up. Even the best brain that science could buy in a far future, so to speak. It seems that we will have to cease to exist inevitably.
“I’m sure that there will be people that choose to die, and others that choose to deadhead, and those that choose to live with only a rolling century of memories. None of those sound appealing to my imagination at this point.”
The latter actually sounds kinda appealing to me, if only I was more convinced that it was possible...
Just take it a century at a time.
Let me put on my sciency-sounding mystical speculation hat:
Under the predictive processing framework, the cortex’s only goal is to minimize prediction error (surprise). This happens in a hierarchical way, with predictions going down and evidence going up, and upper levels of the hierarchy are more abstract, with less spatial and temporal detail.
A visual example: when you stare at a white wall, nothing seems to change, even though the raw visual perceptions change all the time due to light conditions and whatnot. This is because all the observations are consistent with the predictions.
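A minimal numerical caricature of that, assuming a single “level” that nudges its prediction toward each observation; real predictive-processing models are hierarchical and far richer:

```python
# One "level" minimizing prediction error on a steady stimulus (a white wall).
belief = 0.0        # current prediction of the sensory signal
learning_rate = 0.2

observations = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0] * 5  # near-constant input

for obs in observations:
    error = obs - belief             # prediction error, i.e. "surprise"
    belief += learning_rate * error  # adjust the prediction to reduce it

print(round(belief, 2))  # ~1.0: late observations barely register as surprise
```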
As the brain learns more, you get less and less surprise, and the patterns you see are more and more regular. A small child can play the same game a hundred times and it’s still funny, but adults often see the first episode of a TV show and immediately lose interest because “it’s just another mystery show, nothing new under the sun”.
This means that your internal experience becomes ever more stable. This could explain why time seems to pass much faster the older you get.
Maybe, after you live long enough, your posthuman mind accumulates enough knowledge and gets ever less surprised, until you eventually understand everything there is to understand. Your internal experience is something like “The universe is temporally evolving according to the laws of physics, nothing new under the sun”.
At which moment your perception of time stops completely, and your consciousness becomes a reflection of the true nature of the universe, timeless and eternal.
I think that’s what I would try to do with infinite time, after I get bored of playing videogames.
If continued conscious existence were just a matter of piling on more & more baggage… then yeah, I’d totally agree, there’s an issue here. It’s one facet of deathism’s hidden truth that immortalists seem to be impressively bad at recognizing.
(Speaking from having been caught in that loop for several decades.)
But I don’t think it has to be this way at all. I don’t think consciousness is inherently tiring.
I feel vastly more conscious now than I was 20 years ago, for sure. I also have way more energy and energetic stability than I did 20 years ago. And it’s obvious to me how they’re connected. I’ve consciously taken off a lot of baggage in that time, especially in the last five years.
I think what you’re observing here is that behavioral loops that don’t adapt to context are draining and are guaranteed to end at some point. I mean this in the sense of “God damn it, here I go again having exactly the same relationship problem with yet another person.” Either you find a way to escape these loops, or they eventually kill you.[1]
But there’s totally a skill of breaking out of those loops, and you can get better at that over time rather than worse. As long as your skill growth with them exceeds the rate at which you take on baggage, you’re good!
If you buy all that, then the thing to zoom in on is why you feel like consciousness is tiring. That’ll give you your hint about what structures you’re carrying around in you that are sort of choking out your vitality.
And with all that said… eternity is way longer than I hear basically anyone acknowledging. When I first grokked what quite literally living truly forever would be like, I was horrified. I think that horror is largely about how my current setup relates to eternity. So that’s part of what I’d need to reconfigure in myself if I were to become truly immortal and enjoy it. I so very rarely hear immortalists talk about this, so I’m tempted to think most of them haven’t yet tasted that scale in an intimate way.
I’m skipping basically the whole causal engine here. A quick sketch: Maintaining a loop in defiance of its fit to context takes energy, which builds a kind of tax on your cognition and emotional systems. This in turn creates something kind of like technical debt at the metabolic level. This is kind of like saying “Stress is bad for your health.”
You’re coming from a psychological/spiritual point of view, which is valid. But I think you should perhaps consider the scientific perspective a bit more. Why do people get Alzheimer’s and dementia in old age? Because the brain fails to keep up with all the experience/memory accumulation. The brain gets worn out, basically. My concern is more scientific than anything. Even with the best psychotherapy or the best meditation or even the best brain tinkering possible, as time tends to infinity so do memories, and so does the “work” the brain has to do, and unfortunately the brain is finite, so it will invariably get overwhelmed eventually.
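A very back-of-envelope version of that worry; both constants below are loose assumptions (a contested ~2.5-petabyte brain-capacity estimate that circulates, and a pure guess for the accumulation rate), so the output is only illustrative:

```python
# If memory accumulates at a constant rate into a fixed-capacity store,
# when does the store fill up? Both constants are assumptions, not facts.
BRAIN_CAPACITY_BYTES = 2.5e15    # ~2.5 PB, one circulating (contested) estimate
RETAINED_PER_YEAR_BYTES = 1e9    # assume ~1 GB of kept memories per year (guess)

years_until_full = BRAIN_CAPACITY_BYTES / RETAINED_PER_YEAR_BYTES
print(f"{years_until_full:,.0f} years")  # 2,500,000 years under these assumptions
```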
Like, I don’t doubt that in a few centuries or millennia we’ll have invented the technology to keep our brains from wearing out at age 100, wearing out only at 1000 or 5000 instead, but I don’t think we’ll ever invent the technology to avoid it past age 1 billion (just a gross estimate, of course).
Personally, I’m only 30 and I don’t feel tired of living at all, fortunately. Like I said, I wanna live forever. But both my intuition and these scientific considerations tell me that it can’t remain like that indefinitely.
This is factually false, as well as highly misleading.
I think it is factually correct that we get Alzheimer’s and dementia in old age because the brain gets worn out. Whether that is because of failing to keep up with all the memory accumulation is more speculative. So I admit that I shouldn’t have made that claim.
But the brain gets worn out from what? Doing its job. And what’s its job...?
Anyway, I think it would be more productive to at least present an explanation in a couple of lines rather than only saying that I’m wrong.
Alzheimer’s is a hardware problem, not a software one. You’re describing a software failure: failure to keep up with experience. If that is a thing, Alzheimer’s isn’t evidence for it.
Can we really separate them? I’m sure that the limitations of consciousness (software) have a physical base (hardware). I’m sure we could find the physical correlates of “failure to keep up with experience”, just as we could find the physical correlates of why someone who doesn’t sleep for a few days starts failing to keep up with experience as well.
It all translates down to hardware at the end.
But anyway I’ll say again that I admitted it was speculative and not the best example.
We don’t really know how memory works, do we? Most memories fade away with time, except some really strong ones, which are constantly renewed, slightly changed each time. So, odds are, you would not need to explicitly delete anything, it fades away with disuse. And yet, most people maintain the illusion of personal identity without any effort. So, I don’t expect memory accumulation to be an obstacle to eternal youth. Also, plenty of time to work on brain augmentations and memory offloading to external storage :)
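The “fades away with disuse” picture has a classic quantitative form, the Ebbinghaus forgetting curve, with retention decaying roughly exponentially between rehearsals; the stability constant below is arbitrary:

```python
import math

def retention(days_since_rehearsal, stability=30.0):
    """Ebbinghaus-style forgetting curve: R = exp(-t/S).
    Rehearsal resets t and, in richer models, increases S."""
    return math.exp(-days_since_rehearsal / stability)

for t in (1, 30, 365, 3650):
    print(t, round(retention(t), 4))
# Unrehearsed memories decay toward zero on their own; only the
# "constantly renewed" ones persist, as described above.
```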
“So, odds are, you would not need to explicitly delete anything, it fades away with disuse.”
I don’t know. Even some old people feel overwhelmed with so many memories. The brain does some clean-up, for sure. But I doubt whether it would work for really long timelines.
“So, I don’t expect memory accumulation to be an obstacle to eternal youth. Also, plenty of time to work on brain augmentations and memory offloading to external storage :)”
Mind you that your personal identity is dependent on your “story”, which has to encompass all your life, even if only the very key moments. My concern is that, as time tends to infinity, so does the “backbone” of memory, i.e. the minimum necessary, after “trimming” with the best technology possible, to maintain personal identity, which should include memories from all along the timeline. So the question is whether a finite apparatus, the brain, can keep up with infinite data. Is it possible to remain you while not remembering all your story? And I don’t even care about not remembering all my story. I just care about remaining me.
We know that a computer can theoretically function forever. Just keep repairing its parts (same could be done with the brain, no doubt) and deleting excess data. But the computer doesn’t have a consciousness / personal identity which is dependent on its data. So computationalism might be leading us astray here. (Yes, I don’t like computationalism/functionalism.)
Note: this is all speculation, about which I’m quite uncertain. Before all the downvotes come (not that I care, but just to make it clear anyway).
I have a feeling that while you understand the wrongness of deathism on an intellectual level, it still rings true for you emotionally. Phrases such as
We need sleep, and death seems to be a necessary final sleep, unfortunately.
or
Personally, I often suspect there is no way for consciousness to exist without eventually ceasing.
seem to be just the conventional deathist wisdom indoctrinated into people from childhood. Try to grapple with these statements. Why exactly are you pessimistic, instead of optimistic, about the matter?
It does ring true to me a bit. How could it not, when one cannot imagine a way to exist forever with sanity? Have you ever stopped to imagine, just relying on your intuition, what it would be like to live for a quadrillion years? I’m not talking about a cute few thousand like most people imagine when we talk about immortality. I’m talking about proper gazillions, so to speak. Doesn’t it scare the sh*t out of you? Just like Valentine says in his comment, it’s curious how very few transhumanists have ever stopped to stare at this abyss.
On the other hand I don’t think anyone hates death more than me. It truly makes me utterly depressed and hopeless. It’s just that I don’t see any possible alternative to it. That’s why I’m pessimistic about the matter—both my intuition and reasoning really point to the idea that it’s technologically impossible for any conscious being to exist for a quadrillion years, though not with 100% certainty. Maybe 70-80%.
The ideal situation would be that we lived forever but only ever remembered a short span of time, so that we would always feel “fresh” (i.e. not go totally insane). I’m just not sure if that’s possible.
My intuition doesn’t differ much whether it’s a thousand or a quadrillion years. I’m feeling enthusiastic to try to make it work out, instead of being afraid that it won’t.
It’s true that I lack the gear-level model explaining how it’s possible for me to exist for a quadrillion years. But neither do I have a gear-level model explaining how it’s impossible. I know that I still have some confusion about consciousness and identity, but this doesn’t allow me to shift the probability either way. For every argument “what if it’s impossible to do x and x is required to exist for a quadrillion years” I can automatically construct counterarguments like “what if it’s actually possible to do x” or “what if x is not required”. How do you manage to get a 70-80% confidence level here? This sounds overconfident to me.
“I’m feeling enthusiastic to try to make it work out, instead of being afraid that it won’t.”
Well, for someone who’s accusing me of emotionally still defending a wrong mainstream norm (deathism), you’re also doing it yourself by espousing empty positivity. Is it honest to feel enthusiastic about something when your probabilities are grim? The probabilities should come first, not how you feel about it.
“It’s true that I lack the gear-level model explaining how it’s possible for me to exist for a quadrillion years.”
Well, I do have one to prove the opposite: the brain is finite, and as time tends to infinity so do memories, and it might be impossible to trim memories like we do in a computer without destroying the self.
“For every argument “what if it’s impossible to do x and x is required to exist for a quadrillion years” I can automatically construct counterarguments like “what if it’s actually possible to do x” or “what if x is not required”.”
That’s fine! Are we allowed to have different opinions?
“How do you manage to get a 70-80% confidence level here? This sounds overconfident to me.”
Could be. I’ll admit that it’s a prediction based more on intuition than reasoning, so it’s not of the highest value anyway.
Well, for someone who’s accusing me of emotionally still defending a wrong mainstream norm (deathism), you’re also doing it yourself by espousing empty positivity.
I wasn’t really planning it as an accusation. It was supposed to be a potentially helpful hint at the source of your cognitive dissonance. Sorry, it seems that I failed to convey it properly.
Is it honest to feel enthusiastic about something when your probabilities are grim? The probabilities should come first, not how you feel about it.
Previously you mentioned being scared by imagining living for a quadrillion years. I thought it would be appropriate to share my own emotional reaction as well. I agree that probabilities should go first. And that’s the thing: I do not see them as grim. For me it’s more or less 50-50. I’m not competent enough regarding future scientific discoveries and the true laws of nature to shift them from this baseline. And I doubt anyone currently living really is. That’s why I’m surprised by your pessimism.
Well, I do have one to prove the opposite: the brain is finite, and as time tends to infinity so do memories, and it might be impossible to trim memories like we do in a computer without destroying the self.
You may notice that the whole argument is based on “it might be impossible”. I agree that it can be the case. But I don’t see how it’s more likely than “it might be possible”.
“You may notice that the whole argument is based on “it might be impossible”. I agree that it can be the case. But I don’t see how it’s more likely than “it might be possible”.”
I never said anything to the contrary. Are we allowed to discuss things when we’re not sure whether they’re possible or not? It seems that you’re against this.
I have many problems, but boredom has never been one of them, not even one second. So maybe boredom could be programmed out from the AI software somehow? Once you have a digital mind you could make it feel whatever you wanted. I wouldn’t even mind being stuck in an eternal time loop of some sort.
The concern here is not boredom. I even believe that boredom could be solved with some perfect drug or whatever. The concern here is whether a conscious identity can properly exist forever without inevitably degrading.
Perhaps our main difference is that you seem to believe in computationalism, while I don’t. I think consciousness is something fundamentally different from a computer program or any other kind of information. It’s experience, which is beyond information.
Draw a boundary around the part of your brain that apparently contains more than compute because it produces those sentences. This presumably excludes your visual cortex, your episodic memory, and some other parts. There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside. I expect you could produce that sense of having experience even if you didn’t have language to put it into words, so we should be able to pull your language cortex out of the boundary without pulling out anything but compute.
The outside only works in terms of information. It increasingly looks like you can shrink the boundary until you could replace the inside with a rock that says “I sure seem to be having experiences.”, without any changes to what information crosses the boundary. Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.
“There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside.”
Modern computers could theoretically do anything that a human does, except experience it. I can’t draw a line around the part of my brain responsible for it because there is probably none; it’s all of it. Though I’m no neurologist, from the little I know the brain has an integrated architecture.
Maybe in the future we could make conscious silicon machines (or of whatever material), but I still maintain that the brain is not a Turing machine—or at least not only.
“The outside only works in terms of information.”
Could be. The mind processes information, but it is not information (this is an intuitive opinion, and so is yours).
“Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.”
Now we’ve arrived at my favorite part of the computationalist discourse: to claim or suggest that consciousness is an illusion. I think that all that can’t be an illusion is consciousness. All that certainly exists is consciousness.
As for being made of atoms or not, well, information isn’t, either. But it’s expressed by atoms, and so is consciousness.
If one might make a conscious being out of silicon but not out of a Turing machine, what happens when you run the laws of physics on a Turing machine and have simulated humans arise for the same reason they did in our universe, who have conversations like ours?
I think that all that can’t be an illusion is consciousness. All that certainly exists is consciousness.
What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.
“if one might make a conscious being out of silicon but not out of a Turing machine”
I also doubt that btw.
“what happens when you run the laws of physics on a Turing machine and have simulated humans arise”
Is physics computable? That’s an open question.
And more importantly, there’s no guarantee that the laws of physics would necessarily generate conscious beings.
Even if it did, could be p-zombies.
“What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.”
True. But as long as you have someone, it’s no longer an illusion. It’s like, if you stimulate your pleasure centers with an electrode, and you say “hmmm that feels good”, was the pleasure an illusion? No. It may have been physically an illusion, but not experientially, and the latter is what really matters. Experience is what really matters, or is at least enough to make something real. That consciousness exists is undeniable. “I think, therefore I am.” Experience is the basis of all fact.
Do you agree that there is a set of equations that precisely describes the universe? You can compute the solutions for any system of differential equations through an infinite series of ever finer approximations.
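That’s the standard numerical-analysis picture; e.g. Euler’s method, where finer step sizes give ever better approximations to the true solution, sketched here for dy/dt = y:

```python
import math

def euler(f, y0, t_end, steps):
    """Approximate y(t_end) for dy/dt = f(y) with Euler's method."""
    y, dt = y0, t_end / steps
    for _ in range(steps):
        y += dt * f(y)
    return y

exact = math.e  # true value of y(1) for dy/dt = y, y(0) = 1
for steps in (10, 100, 1000, 10000):
    print(steps, round(exact - euler(lambda y: y, 1.0, 1.0, steps), 6))
# The error shrinks as the steps get finer: the "series of approximations".
```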
there’s no guarantee that the laws of physics would necessarily generate conscious beings
The Turing machine might calculate the entire tree of all timelines, including this conversation. Do you suggest that there is a manner in which one can run a universe, that only starts to make a difference once life gets far enough, without which the people in it would fail to talk about consciousness?
If we wrote out a complete log of that tree on a ludicrously large piece of paper, and then walked over to the portion of it that describes this conversation, I am not claiming that we should treat the transcript as something worth protecting. I’m claiming that whatever the characters in the transcript have, that’s all we have.
Still, that could all happen with philosophical zombies. A computer agent (AI) doesn’t sleep and can function forever. These two factors are what lead me to believe that computers, as we currently define them, won’t ever be alive, even if they ever come to emulate the world perfectly. At best they’ll produce p-zombies.
I’m sorry to see this downvoted to the negatives. I don’t agree with the perspective, but I like the questions it’s raising. And I think they fit in LW’s genre.
Like, I receive this as the author’s main “point”:
Anyone has any thoughts on the matter? To me consciousness is a huge dilemma. To end feels horrible, to never end perhaps even more. Do you believe we’ll ever solve this problem?
I really hope the downvotes aren’t just because people disagree that neverending consciousness might be horrid. I take it as a serious inquiry.
I have my own thoughts on the meat of it. I’ll make that a separate reply though, for threading+voting purposes.
Thanks for the feedback (and the back-up). Well, I’d say that half of what I write on LessWrong is downvoted and 40% is ignored, so I don’t really care at this point. I don’t think (most of) my opinions are outlandish. I never downvote anything I disagree with myself, and there’s plenty that I disagree with. I only downvote absurd or rude comments. I think that’s the way it should be, so I’m in full agreement on that.
You also got it right on my main point. That’s precisely it. Mind you that “ending” consciousness also feels horrid to me! That’s the dilemma. Would be great if we could find a way to achieve neverending consciousness without it being horrid.
Just have kids. Whatever posttranshuman creature inherits the ghost of your body in a thousand years won’t be “you” in any sense beyond the pettiest interpretation of ego as “continuous memory”, and even that falls apart quickly under scrutiny.
Your offspring are as much “you” as that thousand year ego projection. Except they’re better, because they start a whole new fresh consciousness unfettered with your accumulated prejudices. I advocate we raise children better, wiser, and accept death younger and more usefully.
“Whatever posttranshuman creature inherits the ghost of your body in a thousand years won’t be ‘you’ in any sense beyond the pettiest interpretation of ego as ‘continuous memory’”
I used to buy into that Buddhist perspective, but I no longer do. I think that’s a sedative, like all religions. Though I will admit that I still meditate, because I still hope to find out that I’m wrong. I hope I do, but I don’t have a lot of hope. My reason and intuition are clear in telling me that the self is extremely valuable, both mine and that of all other conscious beings, and death is a mistake.
Unless you mean to say that they will only be a clone of me. Then you’re right, a clone of me is not me at all, even if it feels exactly like me. But then we would have just failed at life extension anyway. Who’s interested in getting an immortal clone? People are interested in living forever themselves, not someone else. At least if they’re being honest.
“Your offspring are as much ‘you’ as that thousand year ego projection.”
I’ve been alive for 30 years—not much, I admit, but I still feel as much like me as in the first day that I can remember. I suspect that as long as the brain remains healthy, that will remain so. But I never ever felt “me” in any other conscious being. Again, Buddhist projection. Sedative. Each conscious being is irreplaceable.
Right, that’s my point—the conscious being of your childhood is not replaceable with you now. You are a clone of your dead childhood self. That’s fine for you, the clone. But who’s interested in getting a 30-year-old clone? And the many consciousnesses that flower and die every few decades, will be replaced with the single continuation of a single generation that stumbles into immortality.
>I think that’s a sedative, like all religions
I’m not Buddhist, but your critique extends to yourself. If you take one step back to look at an even broader picture, by replacing religion with ideology, you’ve just reinvented postmodernism. Viz: “Transhumanism, sedative, like all ideologies.” So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity’s tragic need to self-sedate), or dive into the void (which really is just an ocean of sedatives swirling together, I’ve swum deep and I promise it’s ideology all the way down).
Maybe we’re not self-sedating at all, and we can talk to a pharmacist who isn’t just us. It’s hard to say anything about reality when the only thing you know is that you’re high af all the time.
>I’ve been alive for 30 years—not much, I admit, but I still feel as much like me as in the first day that I can remember.
Every day the same sun rises, yet it’s a different day. You aren’t the sun, you’re the day.
Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup.
Each century, the day ends, but your family continues. Every few centuries your family falls apart, but your community thrives. Each millennium your community is ravaged, but your nation lives.
We could go on, to the species, the community of all conscious beings, etc etc. Where you place the self along this line is historically variable. You place it with the continuity of the ego. I place it with my family. There are a zillion answers to this question. Buddhism would say the attempt to answer this question is itself the problem. But I’m not Buddhist.
To meaningfully debate this, we’d have to find out what consciousness is. The academy already moves at a snail’s pace as science continues to progress one funeral at a time, awaiting the death of precious selves whose lives already had negative value decades ago. Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality.
Yes, that’s a typical Buddhist-like statement, that we die and are reborn each instant. But I think it’s just incorrect—my childhood self never died. He’s alive right now, here. When I die the biological death, then I will stop existing. It’s as simple as that. Yet I feel like Buddhism, or Eastern religion in general, does this and other mental gymnastics to comfort people.
“So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity’s tragic need to self-sedate), or dive into the void”
There are self-sedating transhumanists, for sure. Like, if you think there isn’t a relevant probability that immortality just won’t work, or if you’re optimistic about the AI control problem, you’re definitely a self-sedating transhumanist. I try to not be one as much as possible, but maybe I am in some areas—no one’s perfect.
But it’s pretty clear that there’s a big difference between transhumanism and religions. The former relies on science to propose solutions to our problems, while the latter is based on the teachings of prophets, people who thought that their personal intuitions were the absolute truth. And, in terms of self-sedating ideas, if transhumanism is a small grain of Valium, religion is a big fat tab.
“It’s hard to say anything about reality when the only thing you know is that you’re high af all the time.”
I agree. I claim uncertainty on all my claims.
“Every day the same sun rises, yet it’s a different day. You aren’t the sun, you’re the day.
Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup.”
Yeah, yeah, yeah, I know, I know, I’ve heard the story a thousand times. There’s only one indivisible self/consciousness/being, and we’re just instances of it. Well, you can believe that if you want; I don’t have the scientific evidence to disprove it. But neither do you have the evidence to prove it, so I can also disbelieve it. My intuition clearly disbelieves it. When I die biologically it will be blackout. It’s cruel af.
“Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality.”
Either too deep or I’m too dumb, didn’t quite get it. Please explain less poetically.
I don’t think there is anything particularly scientific about transhumanism relative to other ideologies. They use science to achieve their goals, much like Catholics use science to prove that fetuses have heartbeats or whatever.
Really, this debate feels like it boils down to an individualistic vs collectivistic sense of self. In the collectivist view, dying is not that big of a deal. You can see this in action, when dying for your family, country, etc is seen as noble and great. Whereas an individual sacrificing their family to preserve themselves is less lauded (except in Individualist propaganda, where we’re scolded for “judging” and supposed to “understand” the individual circumstances and so on).
I mean, you yourself say it, we have no idea what consciousness even is. Or if it’s valuable at all. We’re just going on a bunch of arbitrary intuitions here. Well, that’s a bad standard. And it’s not like we’re running out of people, we can just create them as needed, indefinitely. So given that:
1. we have a lot of humans, and they aren’t a particularly limited resource, and
2. few if any of the people have super unique, amazing perspectives, such that we really need to preserve that one person for extra time,
why not focus our energy on figuring out what we are, and decide the best course of action from there?
It’s only cruel if you’ve been brainwashed into thinking your life matters. If you never thought that, it’s just normal. Accept your place and die so that our descendants, who we should work really hard to help turn out better than we are (and thus deserving of immortality should we ever actually attain it), can take our place.
But then, if we’ve figured out how to make such amazing people, why not let there be lots of different ones, so they can flourish across many generations, instead of having just one generation forever? I mean, there isn’t even that much to do. Well, there probably is, I’m just too basic to understand it because of my limited natural brain.
I guess transhumanism overall is really cool, taken as a whole package. It’s just the life-extension part I find silly, especially if it’s a priority rather than an afterthought. But even if you want transhumanism, aren’t we far enough from it that the best path towards it is just raising better (smarter, more cooperative, etc) children? Seems like the biggest hindrance to scientific progress is just the state of the quality of human beings.
>Either too deep or I’m too dumb, didn’t quite get it. Please explain less poetically.
Poorly worded on my part. I just mean that it’s thanks to death that we get progress. The old die, and with them out of the way, the young can progress. Lots of fresh new perspectives are a feature of death.
Totally unfair comparison. Do you really think that immortality and utopia are frivolous goals? So maybe you don’t really believe in cryonics or something. Well, I don’t either. But transhumanism is way more than that. I think that its goals with AI and life extension are anything but a joke.
That’s reductive. As an altruist, I care about all other conscious beings. Of course maintaining sanity demands some distancing, but that’s that. So I’d say I’m a collectivist. But one person doesn’t substitute for another. Others continuing to live will never make up for those who die. The act of ceasing to exist is of the utmost cruelty, and there’s nothing that can compensate for that.
I have no idea of what consciousness is scientifically, but morally I’m pretty sure it is valuable. All morality comes from the seeking of well-being for the conscious being. So if there’s any value system, consciousness must be at the center. There’s not much explaining here needed, it’s just that everyone wants to be well—and to be.
Like I said, every conscious being wants to exist. It’s just the way we’ve been programmed. All beings matter, myself included. I goddamn want to live; that is the basis of all wants and of all rights. Have I been brainwashed? Religions have been brainwashing people about the exact opposite for millennia, that death is ok, either because we go to heaven according to the West, or because we’ll reincarnate or we’re part of a whole according to the East. So, quite on the contrary, I think I have been de-brainwashed.
An unborn person isn’t a tragedy. A dead one is. So it’s much more important to care about the living than the unborn.
If most people are saying that AGI is decade(s) off then we aren’t that far.
As for raising children as best as we can I think that’s just common sense.
I partly agree. It would be horrible if Genghis Khan or Hitler never died. But we could always put them in a really good prison. I just don’t wanna die and I think no minimally decent person deserves to, just so we can get rid of a few psychopaths.
Also, we’re talking about immortality not now, but in a technological utopia, since only such a society could produce it. So the dynamics would be different.
As for fresh new perspectives, in this post I propose selective memory deletion with immortality. So that would contribute to that. Even then, getting fresh new perspectives is pretty good, but nowhere near being worth the ceasing of trillions of consciousnesses.
Individualism and altruism aren’t exclusive. I didn’t mean to imply you are selfish, just that your operating definition of self seems informed by a particular tradition.
Consider the perspective of liberal republicans of the 19th century who fought and died for their nation (because that’s where they decided, or were taught, to center their self). Each nation is completely unique and irreplaceable, so we must fight to keep nations thriving and alive, and prevent their extinction. Dying for patriotism is glorious, honorable, etc.
I have no idea of what consciousness is scientifically, but morally I’m pretty sure it is valuable. All morality comes from the seeking of well-being for the conscious being. So if there’s any value system, consciousness must be at the center. There’s not much explaining here needed, it’s just that everyone wants to be well—and to be.
But that’s my point, consciousness will go on just fine without either of us specifically being here. Ending one conscious experience from one body so that a different one can happen seems fine to me, for the most part. I dunno the philosophical implications of this, just thinking.
If most people are saying that AGI is decade(s) off then we aren’t that far.
Yeah, it’s exciting for sure.
I’m 30 as well, so I’ll be near death in the decades that likely begin to birth AGI. But it would likely be able to fathom things unfathomable to us, who knows. History beyond that point is a black hole for me. It’s all basilisks and space jam past 2050 as far as I’m concerned :)
The act of ceasing to exist is of the utmost cruelty, and there’s nothing that can compensate for that.
Well, I guess that’s it, huh? I don’t think so, but clearly a lot of people do. Btw I’m new to this community, so sorry if I’m uninformed on issues that are well hashed out here. What a fun place, though.
I can see the altruism in dying for a cause. But it’s a leap of faith to claim, from there, that there’s altruism in dying by itself. To die why, to make room for others to get born? Unborn beings don’t exist, they are not moral patients. It would be perfectly fine if no one else was born from now on—in fact it would be better than even 1 single person dying.
Furthermore, if we’re trying to create a technologically mature society capable of discovering immortality, perhaps much sooner will it be capable of colonizing other planets. So there are trillions of empty planets to put all the new people on before we have to start taking out the old ones.
To die to make room for others just doesn’t make any sense.
“consciousness will go on just fine without either of us specifically being here”
It sure will. But that’s like saying that money will go on just fine if you go bankrupt. I mean, sure, the world will still be full of wealth, but that won’t make you any less poor. Now imagine this happening to everyone inevitably. Sounds really sh*tty to me.
Unborn beings don’t exist, they are not moral patients. It would be perfectly fine if no one else was born from now on—in fact it would be better than even 1 single person dying.
Well, okay, but why? Why don’t tomorrow people matter at all? Is there a real moral normativity that dictates this, or are we just saying our feelings to each other? I don’t mean that condescendingly, just trying to understand where you’re coming from when you make this claim.
I can see the altruism in dying for a cause. But it’s a leap of faith to claim, from there, that there’s altruism in dying by itself.
But I’m arguing for something different from altruism. I go further by saying that the approach to constructing a sense of self differs substantively between people, cultures, etc. Someone who dies for their nation might not be altruistic per se, if they have located their identity primarily in the nation. In other words, they are being selfish, not as their person, but as their nation.
Does that make sense?
Granted, your point about interstellar travel makes all of this irrelevant. But I’m much more cynical about humanity’s future. Or at least, the future of the humans I actually see around me. Technology here is so behind. Growing your own food as a rational way to supplement income is common, education ends for most people at age 12, the vast majority don’t have hot water, AC, etc. Smartphones are ubiquitous though.
Immortal lords from Facebook deciding how many rations of maize I’ll receive for the upvotes I earned today. Like, of course the Facebook lord will think he’s building Utopia. But from here, will it look much better than the utopia that the church and aristocracy collaborated to build in medieval Europe?
I don’t look to the future with hope as often as I look to the past with envy. Though I do both, from time to time.
Tomorrow people matter, in terms of leaving them a place in minimally decent conditions. That’s why when you die for a cause, you’re also dying so that tomorrow people can die less and suffer less. But in fact you’re not dying for unborn people—you’re dying for living ones from the future.
But to die to make room for others is simply to die for unborn people. Because them never being born is no tragedy—they never existed, so they never missed anything. But living people actually dying is a tragedy.
And I’m not against the idea that giving life is a great gift. Or should I say, it could be a great gift, if this world were at least acceptable, which it’s far from being. It’s just that not giving it doesn’t hold any negative value; it’s just neutral instead of positive. Whereas taking a life does hold negative value.
I appreciate all these ideas to ponder (my favorite pastime). Is the desire to live forever the ultimate manifestation of FOMO? Also, deciding what to have for dinner, day after day into infinity, would be a challenge.
“Death is the most terrible of all things; for it is the end, and nothing is thought to be any longer either good or bad for the dead.” — Aristotle, Nicomachean Ethics
I don’t think we need to answer these questions to agree that many people would prefer to live longer than they currently are able. I have no idea what problems need to be solved to enable people to happily and productively live thousands of years, but I also have no reason to believe they’re insurmountable.
“I don’t think we need to answer these questions to agree that many people would prefer to live longer than they currently are able.”
Certainly, but that’s not the issue here. The issue here is immortality. Many transhumanists desire to live forever, literally. Myself included. In fact I believe that many people in general do. Extending the human lifespan to thousands of years would be a great victory already, but that doesn’t invalidate the search for true immortality, if people are interested in such which I’m sure some are.
“I have no idea what problems need to be solved to enable people to happily and productively live thousands of years, but I also have no reason to believe they’re insurmountable.”
Once again you’re deviating from the question. I haven’t questioned the possibility of living a few thousand years, but forever properly.
This feels like a… strange framing to me.
The thing that appeals to me about “immortality” is not having to deal with the problem of my lifespan being limited.
Actual infinity/eternity is really long time. I mean, I could live for a million (or billion, or trillion) years, and it still would be an small fraction of eternity. An infinitesimal fraction, in fact. Epsilon.
Do I choose between being forced to exist forever, or to die after less than 100 years of existence? Neither. I’d like to have the option to keep living for as long as I want. I can see the scenario of even people living in utopia eventually growing tired/bored/unhappy with their existence. I can see the scenario of this never happening, a person choosing to truly live forever (one day/month/year at a time. not by pre-commiting to living forever).
“Do I choose between being forced to exist forever, or to die after less than 100 years of existence? Neither. I’d like to have the option to keep living for as long as I want.”
I didn’t mean being forced to exist forever, or pre-commiting to anything. I meant that I really do WANT to exist forever, yet I can’t see a way that it can work. That’s the dilemma that I mentioned: to die, ever, even after a gazillion years, feels horrible, because YOU will cease to exist, no matter after how much time. To never die feels just as horrible because I can’t see a way to remain sane after a very long time.
Who can guarantee that after x years you would feel satisfied and ready to die? I believe that as long as the brain remains healthy, it really doesn’t wanna die. And even if it could reach the state of accepting death, the current me just don’t ever wanna cease to exist. Doesn’t the idea of inevitably eventually ceasing to exist feel absolutely horrible to you?
No. There is nothing I find inherently scary or unpleasant about nonexistence.
I’m just confused about the details of why that would happen. I mean, it would be sad if some future utopia didn’t have a better solution for insanity or for having too many memories, than nonexistence.
Insanity: Look at the algorithm of my mind and see how it’s malfunctioning? If nothing else works, revert my mindstate back a few months/years?
Memories: offload into long-term storage?
“No. There is nothing I find inherently scary or unpleasant about nonexistence.”
Would you agree that you’re perhaps a minority? That most people are scared/depressed about their own mortality?
“I’m just confused about the details of why that would happen. I mean, it would be sad if some future utopia didn’t have a better solution for insanity or for having too many memories, than nonexistence.
Insanity: Look at the algorithm of my mind and see how it’s malfunctioning? If nothing else works, revert my mindstate back a few months/years?
Memories: offload into long-term storage?”
On insanity, computationalism might be false. Consciousness might not be algorithmic. If it is, you’re right, it’s probably easy to deal with.
But I suspect that excess memories might always remain a problem. Is it really possible to off-load them while maintaining personal identity? That’s an open question in my view.
Specially when, like me, you don’t really buy into computationalism.
I think that it will matter person to person. I’m the type of person that loves learning, and develops new interests all the time. I can’t imagine getting bored with life even after a few centuries. I’m forty-three years old. I can easily imagine three times my current lifetime.
I feel like my memory work similarly to how you describe. As time goes on, my memories are more abstract, with the freshest having the most detail. I don’t think that that is strictly necessary, but instead a limitation of our wetware.
This is just under the assumption that our brains will continue to function as they do now, which I don’t think is a great assumption. With augmented memory, we could have a complete personal record to access and augment our memory.
I’m sure that there will be people that choose to die, and others that choose to deadhead, and those that choose to live with only a rolling century of memories. None of those sound appealing to my imagination at this point.
″ I can’t imagine getting bored with life even after a few centuries.”
Ok, but that’s not a lot of time, is it? Furthermore, this isn’t even a question of time. For me, no finite time is enough. It’s the mere fact of ceasing to exist. Isn’t it simply horrifying? Even if you live a million healthy years, no matter, the fact is that you will cease to exist one day. And then there will be two options. Either your brain will be “healthy” and therefore will dread death as much as it does now, or it will be “unhealthy” and welcome death to relieve it’s poor condition. To me both options seem horrifying—what matters more is that the present me wants to live forever with a healthy, not tired of life, not with dementia or Alzheimer’s, sane brain forever.
Like I said in my post, as time tends to infinite, so do memories, so there will inevitably be a time where the brain no longer holds up. Even the best brain that science could buy in a far future, so to speak. It seems that we will have to cease to exist inevitably.
“I’m sure that there will be people that choose to die, and others that choose to deadhead, and those that choose to live with only a rolling century of memories. None of those sound appealing to my imagination at this point.”
The latter actually sounds kinda appealing to me, if only I was more convinced that it was possible...
Just take it a century at a time.
Let me put on my sciency-sounding mystical speculation hat:
Under the predictive processing framework, the cortex’s only goal is to minimize prediction error (surprise). This happens in a hierarchical way, with predictions going down and evidence going up, and upper levels of the hierarchy are more abstract, with less spatial and temporal detail.
A visual example: when you stare at a white wall, nothing seems to change, even though the raw visual perceptions change all the time due to light conditions and whatnot. This is because all the observations are consistent with the predictions.
As the brain learns more, you get less and less surprise, and the patterns you see are more and more regular. A small child can play the same game a hundred times and it’s still funny, but adults often see the first episode of a TV show and immediately lose interest because “it’s just another mystery show, nothing new under the sun”.
This means that your internal experience becomes ever more stable. This could explain why time seems to pass much faster the older you get.
Maybe, after you live long enough, your posthuman mind accumulates enough knowledge, and gets even less surprised, you eventually understand everything that is to be understood. Your internal experience is something like “The universe is temporally evolving according to the laws of physics, nothing new under the sun”.
At which moment your perception of time stops completely, and your consciousness becomes a reflection of the true nature of the universe, timeless and eternal.
I think that’s what I would try to do with infinite time, after I get bored of playing videogames.
If continued conscious existence were just a matter of piling on more & more baggage… then yeah, I’d totally agree, there’s an issue here. It’s one facet of deathism’s hidden truth that immortalists seem to be impressively bad at recognizing.
(Speaking from having been caught in that loop for several decades.)
But I don’t think it has to be this way at all. I don’t think consciousness is inherently tiring.
I feel vastly more conscious now than I was 20 years ago, for sure. I also have way more energy and energetic stability than I did 20 years ago. And it’s obvious to me how they’re connected. I’ve consciously taken off a lot of baggage in that time, especially in the last five years.
I think what you’re observing here is that behavioral loops that don’t adapt to context are draining and are guaranteed to end at some point. I mean this in the sense of “God damn it, here I go again having exactly the same relationship problem with yet another person.” Either you find a way to escape these loops, or they eventually kill you.[1]
But there’s totally a skill of breaking out of those loops, and you can get better at that over time rather than worse. As long as your skill growth with them exceeds the rate at which you take on baggage, you’re good!
If you buy all that, then the thing to zoom in on is why you feel like consciousness is tiring. That’ll give you your hint about what structures you’re carrying around in you that are sort of choking out your vitality.
And with all that said… eternity is way longer than I hear basically anyone acknowledging. When I first grokked what quite literally living truly forever would be like, I was horrified. I think that horror is largely about how my current setup relates to eternity. So that’s part of what I’d need to reconfigure in myself if I were to become truly immortal and enjoy it. I so very rarely hear immortalists talk about this, so I’m tempted to think most of them haven’t yet tasted that scale in an intimate way.
I’m skipping basically the whole causal engine here. A quick sketch: Maintaining a loop in defiance of its fit to context takes energy, which builds a kind of tax on your cognition and emotional systems. This in turn creates something kind of like technical debt at the metabolic level. This is kind of like saying “Stress is bad for your health.”
You’re coming from a psychologic/spiritual point of view, which is valid. But I think you should perhaps consider a bit more the scientific perspective. Why do people get Alzheimer’s and dementia at old age? Because the brain fails to keep up with all the experience/memory accumulation. The brain gets worn out, basically. My concern is more scientific than anything. Even with the best psychotherapy or the best meditation or even the best brain tinkering possible, as time tends to infinite so do memories and so does “work” for the brain to do, and unfortunately the brain is finite, so it will invariably get overwhelmed eventually.
Like, I don’t doubt that in a few centuries or millennia we’ll have invented the technology to keep our brains from wearing out at age 100, wearing out only at 1,000 or 5,000 instead. But I don’t think we’ll ever invent the technology to avoid it past age 1 billion (just a gross estimate, of course).
Personally, I’m only 30 and I don’t feel tired of living at all, fortunately. Like I said, I wanna live forever. But both my intuition and these scientific considerations tell me that it can’t remain like that indefinitely.
This is factually false, as well as highly misleading.
I think it is factually correct that we get Alzheimer’s and dementia in old age because the brain gets worn out. Whether that is because of failing to keep up with all the memory accumulation is more speculative, so I admit that I shouldn’t have made that claim.
But the brain gets worn out from what? Doing its job. And what’s its job...?
Anyway, I think it would be more productive to at least present an explanation in a couple of lines rather than only saying that I’m wrong.
Alzheimer’s is a hardware problem, not a software one. You’re describing a software failure: failure to keep up with experience. If that is a thing, Alzheimer’s isn’t evidence for it.
Can we really separate them? I’m sure that the limitations of consciousness (software) have a physical basis (hardware). I’m sure we could find the physical correlates of “failure to keep up with experience,” just as we could find the physical correlates of why someone who doesn’t sleep for a few days starts failing to keep up with experience as well.
It all translates down to hardware in the end.
But anyway I’ll say again that I admitted it was speculative and not the best example.
We don’t really know how memory works, do we? Most memories fade away with time, except some really strong ones, which are constantly renewed, slightly changed each time. So, odds are, you would not need to explicitly delete anything, it fades away with disuse. And yet, most people* maintain the illusion of personal identity without any effort. So, I don’t expect memory accumulation to be an obstacle to eternal youth. Also, plenty of time to work on brain augmentations and memory offloading to external storage :)
“So, odds are, you would not need to explicitly delete anything, it fades away with disuse.”
I don’t know. Even some old people feel overwhelmed with so many memories. The brain does some clean-up, for sure. But I doubt whether it would work for really long timelines.
“So, I don’t expect memory accumulation to be an obstacle to eternal youth. Also, plenty of time to work on brain augmentations and memory offloading to external storage :)”
Mind you that your personal identity depends on your “story,” which has to encompass all your life, even if only the very key moments. My concern is that, as time tends to infinity, so does the “backbone” of memory: the minimum necessary, after “trimming” with the best technology possible, to maintain personal identity, which should include memories from all along the timeline. So the question is whether a finite apparatus, the brain, can keep up with infinite data. Is it possible to remain you while not remembering your whole story? And I don’t even care about remembering my whole story. I just care about remaining me.
We know that a computer can theoretically function forever. Just keep repairing its parts (same could be done with the brain, no doubt) and deleting excess data. But the computer doesn’t have a consciousness / personal identity which is dependent on its data. So computationalism might be leading us astray here. (Yes, I don’t like computationalism/functionalism.)
Note: this is all speculation, of which I’m quite uncertain. Just making that clear before all the downvotes come (not that I care anyway).
I have a feeling that while you understand the wrongness of deathism on an intellectual level, it still rings true for you emotionally. Phrases such as those you quoted seem to be just the conventional deathist wisdom indoctrinated into people from childhood. Try to grapple with these statements. Why exactly are you pessimistic, instead of optimistic, about the matter?
It does ring true to me a bit. How could it not, when one cannot imagine a way to exist forever with sanity? Have you ever stopped to imagine, just relying on your intuition, what it would be like to live for a quadrillion years? I’m not talking about a cute few thousand, like most people imagine when we talk about immortality. I’m talking about proper gazillions, so to speak. Doesn’t it scare the sh*t out of you? Just like Valentine says in his comment, it’s curious how very few transhumanists have ever stopped to stare at this abyss.
On the other hand, I don’t think anyone hates death more than me. It truly makes me utterly depressed and hopeless. It’s just that I don’t see any possible alternative to it. That’s why I’m pessimistic about the matter—both my intuition and my reasoning really point to the idea that it’s technologically impossible for any conscious being to exist for a quadrillion years, though not with 100% certainty. Maybe 70-80%.
The ideal situation would be that we lived forever but only ever remembered a short stretch of time, so that we would always feel “fresh” (i.e. not go totally insane). I’m just not sure if that’s possible.
My intuition doesn’t differ much whether it’s a thousand or a quadrillion years. I’m feeling enthusiastic to try to make it work out, instead of being afraid that it won’t.
It’s true that I lack the gears-level model explaining how it’s possible for me to exist for a quadrillion years. But neither do I have a gears-level model explaining how it’s impossible. I know that I still have some confusion about consciousness and identity, but this doesn’t allow me to shift the probability either way. For every argument “what if it’s impossible to do x, and x is required to exist for a quadrillion years,” I can automatically construct counterarguments like “what if it’s actually possible to do x” or “what if x is not required.” How do you manage to get a 70-80% confidence level here? This sounds overconfident to me.
“I’m feeling enthusiastic to try to make it work out, instead of being afraid that it won’t.”
Well, for someone who’s accusing me of still emotionally defending a wrong mainstream norm (deathism), you’re doing the same yourself by espousing empty optimism. Is it honest to feel enthusiastic about something when your probabilities are grim? The probabilities should come first, not how you feel about them.
“It’s true that I lack the gears-level model explaining how it’s possible for me to exist for a quadrillion years.”
Well, I do have one for the opposite: the brain is finite, and as time tends to infinity so do memories, and it might be impossible to trim memories like we do on a computer without destroying the self.
“For every argument ‘what if it’s impossible to do x, and x is required to exist for a quadrillion years,’ I can automatically construct counterarguments like ‘what if it’s actually possible to do x’ or ‘what if x is not required.’”
That’s fine! Are we allowed to have different opinions?
“How do you manage to get a 70-80% confidence level here? This sounds overconfident to me.”
Could be. I’ll admit that it’s a prediction based more on intuition than reasoning, so it’s not of the highest value anyway.
I wasn’t really planning it as an accusation. It was supposed to be a potentially helpful hint at the source of your cognitive dissonance. Sorry, it seems that I failed to convey it properly.
Previously you mentioned being scared by imagining living for a quadrillion years. I thought it would be appropriate to share my own emotional reaction as well. I agree that probabilities should come first. And that’s the thing: I do not see them as grim. For me it’s more or less 50-50. I’m not competent enough regarding future scientific discoveries and the true laws of nature to shift them from this baseline. And I doubt anyone currently living really is. That’s why I’m surprised by your pessimism.
You may notice that the whole argument is based on “it might be impossible”. I agree that it can be the case. But I don’t see how it’s more likely than “it might be possible”.
“You may notice that the whole argument is based on “it might be impossible”. I agree that it can be the case. But I don’t see how it’s more likely than “it might be possible”.”
I never said anything to the contrary. Are we allowed to discuss things when we’re not sure whether they’re possible or not? It seems that you’re against this.
I have many problems, but boredom has never been one of them, not even for one second. So maybe boredom could be programmed out of the AI software somehow? Once you have a digital mind, you could make it feel whatever you wanted. I wouldn’t even mind being stuck in an eternal time loop of some sort.
The concern here is not boredom. I even believe that boredom could be solved with some perfect drug or whatever. The concern here is whether a conscious identity can properly exist forever without inevitably degrading.
As well as any other file. Error correction can protect digital data for countless eons.
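(A minimal sketch to make the idea concrete, and entirely my own illustration rather than anything proposed in this thread: a 3x repetition code, the simplest error-correcting scheme. Real archival systems use much stronger codes such as Reed-Solomon, but the principle is the same: redundancy lets you detect and undo corruption.)

```python
# A 3x repetition code: each bit is stored three times, and a
# majority vote over each triple corrects any single flipped bit.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1                  # a stray bit flip corrupts the stored copy
assert decode(stored) == data   # the original data is recovered intact
```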
Perhaps our main difference is that you seem to believe in computationalism, while I don’t. I think consciousness is something fundamentally different from a computer program or any other kind of information. It’s experience, which is beyond information.
Draw a boundary around the part of your brain that apparently contains more than compute because it produces those sentences. This presumably excludes your visual cortex, your episodic memory, and some other parts. There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside. I expect you could produce that sense of having experience even if you didn’t have language to put it into words, so we should be able to pull your language cortex out of the boundary without pulling out anything but compute.
The outside only works in terms of information. It increasingly looks like you can shrink the boundary until you could replace the inside with a rock that says “I sure seem to be having experiences.”, without any changes to what information crosses the boundary. Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.
“There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside.”
Modern computers could theoretically do anything that a human does, except experience it. I can’t draw a line around the part of my brain responsible for experience, because there is probably none: it’s all of it. I’m no neurologist, but from the little I know, the brain has an integrated architecture.
Maybe in the future we could make conscious silicon machines (or machines of whatever material), but I still maintain that the brain is not a Turing machine, or at least not only that.
“The outside only works in terms of information.”
Could be. The mind processes information, but it is not information (this is an intuitive opinion, and so is yours).
“Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.”
Now we’ve arrived at my favorite part of the computationalist discourse: claiming or suggesting that consciousness is an illusion. I think the one thing that can’t be an illusion is consciousness. The one thing that certainly exists is consciousness.
As for being made of atoms or not, well, information isn’t, either. But it’s expressed by atoms, and so is consciousness.
If one might make a conscious being out of silicon but not out of a Turing machine, what happens when you run the laws of physics on a Turing machine and have simulated humans arise, for the same reason they did in our universe, who have conversations like ours?
What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.
“if one might make a conscious being out of silicon but not out of a Turing machine”
I also doubt that btw.
“what happens when you run the laws of physics on a Turing machine and have simulated humans arise”
Is physics computable? That’s an open question.
And more importantly, there’s no guarantee that the laws of physics would necessarily generate conscious beings.
Even if it did, they could be p-zombies.
“What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.”
True. But as long as you have someone, it’s no longer an illusion. It’s like, if you stimulate your pleasure centers with an electrode, and you say “hmmm that feels good”, was the pleasure an illusion? No. It may have been physically an illusion, but not experientially, and the latter is what really matters. Experience is what really matters, or is at least enough to make something real. That consciousness exists is undeniable. “I think, therefore I am.” Experience is the basis of all fact.
Do you agree that there is a set of equations that precisely describes the universe? You can compute the solutions for any system of differential equations through an infinite series of ever finer approximations.
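(A toy illustration of those “ever finer approximations,” again my own example and not something from the thread: Euler’s method applied to dy/dt = y with y(0) = 1, whose exact value at t = 1 is e. Shrinking the step size over and over drives the numerical answer toward the true solution.)

```python
import math

# Euler's method for dy/dt = y with y(0) = 1; the exact solution
# at t = 1 is e. Smaller step sizes h give finer approximations.

def euler(h, t_end=1.0):
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * y  # advance the state by one step of size h
    return y

for h in (0.1, 0.01, 0.001, 0.0001):
    approx = euler(h)
    print(f"h = {h:<7} y(1) ≈ {approx:.6f}   error = {abs(approx - math.e):.2e}")
```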
The Turing machine might calculate the entire tree of all timelines, including this conversation. Do you suggest that there is a manner in which one can run a universe, that only starts to make a difference once life gets far enough, without which the people in it would fail to talk about consciousness?
If we wrote out a complete log of that tree on a ludicrously large piece of paper, and then walked over to the portion of it that describes this conversation, I am not claiming that we should treat the transcript as something worth protecting. I’m claiming that whatever the characters in the transcript have, that’s all we have.
Still, that could all happen with philosophical zombies. A computer agent (AI) doesn’t sleep and can function forever. These two factors are what lead me to believe that computers, as we currently define them, won’t ever be alive, even if they ever come to emulate the world perfectly. At best they’ll produce p-zombies.
I’m sorry to see this downvoted to the negatives. I don’t agree with the perspective, but I like the questions it’s raising. And I think they fit in LW’s genre.
Like, I receive this as the author’s main “point”: that neverending consciousness might itself be horrid.
I really hope the downvotes aren’t just because people disagree that neverending consciousness might be horrid. I take it as a serious inquiry.
I have my own thoughts on the meat of it. I’ll make that a separate reply though, for threading+voting purposes.
Thanks for the feedback (and the back-up). Well, I’d say that half of what I write on Lesswrong is downvoted and 40% is ignored, so I don’t really care at this point. I don’t think (most of) my opinions are outlandish. I never downvote anything I disagree with myself, and there’s plenty that I disagree with. I only downvote absurd or rude comments. I think that’s the way it should be, so I’m in full agreement on that.
You also got it right on my main point. That’s precisely it. Mind you that “ending” consciousness also feels horrid to me! That’s the dilemma. Would be great if we could find a way to achieve neverending consciousness without it being horrid.
Just have kids. Whatever posttranshuman creature inherits the ghost of your body in a thousand years won’t be “you” in any sense beyond the pettiest interpretation of ego as “continuous memory”, and even that falls apart quickly under scrutiny.
Your offspring are as much “you” as that thousand year ego projection. Except they’re better, because they start a whole new fresh consciousness unfettered with your accumulated prejudices. I advocate we raise children better, wiser, and accept death younger and more usefully.
“Whatever posttranshuman creature inherits the ghost of your body in a thousand years won’t be ‘you’ in any sense beyond the pettiest interpretation of ego as ‘continuous memory’”
I used to buy into that Buddhist perspective, but I no longer do. I think that’s a sedative, like all religions. Though I will admit that I still meditate, because I still hope to find out that I’m wrong. I hope I do, but I don’t have a lot of hope. My reason and intuition are clear in telling me that the self is extremely valuable, both mine and that of all other conscious beings, and death is a mistake.
Unless you mean to say that they will only be a clone of me. Then you’re right, a clone of me is not me at all, even if it feels exactly like me. But then we would have just failed at life extension anyway. Who’s interested in getting an immortal clone? People are interested in living forever themselves, not someone else. At least if they’re being honest.
“Your offspring are as much ‘you’ as that thousand year ego projection.”
I’ve been alive for 30 years—not much, I admit, but I still feel as much like me as in the first day that I can remember. I suspect that as long as the brain remains healthy, that will remain so. But I never ever felt “me” in any other conscious being. Again, Buddhist projection. Sedative. Each conscious being is irreplaceable.
>Each conscious being is irreplaceable.
Right, that’s my point—the conscious being of your childhood is not replaceable with you now. You are a clone of your dead childhood self. That’s fine for you, the clone. But who’s interested in getting a 30-year-old clone? And the many consciousnesses that flower and die every few decades will be replaced with the single continuation of a single generation that stumbles into immortality.
>I think that’s a sedative, like all religions
I’m not Buddhist, but your critique extends to yourself. If you take one step back to look at an even broader picture, by replacing religion with ideology, you’ve just reinvented postmodernism. Viz: “Transhumanism, sedative, like all ideologies.” So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity’s tragic need to self-sedate), or dive into the void (which really is just an ocean of sedatives swirling together, I’ve swum deep and I promise it’s ideology all the way down).
Maybe we’re not self-sedating at all, and we can talk to a pharmacist who isn’t just us. It’s hard to say anything about reality when the only thing you know is that you’re high af all the time.
>I’ve been alive for 30 years—not much, I admit, but I still feel as much like me as in the first day that I can remember.
Every day the same sun rises, yet it’s a different day. You aren’t the sun, you’re the day.
Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup.
Each century the day ends, but your family continues. Every few centuries your family falls apart, but your community thrives. Each millennium your community is ravaged, but your nation lives.
We could go on, to the species, the community of all conscious beings, etc etc. Where you place the self along this line is historically variable. You place it with the continuity of the ego. I place it with my family. There are a zillion answers to this question. Buddhism would say the attempt to answer this question is itself the problem. But I’m not Buddhist.
To meaningfully debate this, we’d have to find out what consciousness is. The academy already moves at a snail’s pace as science progresses one funeral at a time, awaiting the death of precious selves whose lives already had negative value decades ago. Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality.
“You are a clone of your dead childhood self.”
Yes, that’s a typical Buddhist-like statement, that we die and are reborn each instant. But I think it’s just incorrect: my childhood self never died. He’s alive right now, here. When I die the biological death, then I will stop existing. It’s as simple as that. Yet I feel like Buddhism, and Eastern religion in general, does this and other mental gymnastics to comfort people.
“So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity’s tragic need to self-sedate), or dive into the void”
There are self-sedating transhumanists, for sure. Like, if you think there isn’t a relevant probability that immortality just won’t work, or if you’re optimistic about the AI control problem, you’re definitely a self-sedating transhumanist. I try to not be one as much as possible, but maybe I am in some areas—no one’s perfect.
But it’s pretty clear that there’s a big difference between transhumanism and religions. The former relies on science to propose solutions to our problems, while the latter is based on the teachings of prophets, people who thought that their personal intuitions were the absolute truth. And, in terms of self-sedating ideas, if transhumanism is a small grain of Valium, religion is a big fat tab.
“It’s hard to say anything about reality when the only thing you know is that you’re high af all the time.”
I agree. I claim uncertainty on all my claims.
“Every day the same sun rises, yet it’s a different day. You aren’t the sun, you’re the day.
Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup.”
Yeah, yeah, yeah, I know, I know, I’ve heard the story a thousand times. There’s only one indivisible self/consciousness/being, and we’re just instances of it. Well, you can believe that if you want; I don’t have the scientific evidence to disprove it. But neither do you have the evidence to prove it, so I can also disbelieve it. My intuition clearly disbelieves it. When I die biologically it will be blackout. It’s cruel af.
“Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality.”
Either too deep or I’m too dumb, didn’t quite get it. Please explain less poetically.
I don’t think there is anything particularly scientific about transhumanism relative to other ideologies. They use science to achieve their goals, much like Catholics use science to prove that fetuses have heart beats or whatever.
Really, this debate feels like it boils down to an individualistic vs. collectivistic sense of self. In the collectivist view, dying is not that big of a deal. You can see this in action when dying for your family, country, etc. is seen as noble and great, whereas an individual sacrificing their family to preserve themselves is less lauded (except in individualist propaganda, where we’re scolded for “judging” and supposed to “understand” the individual circumstances and so on).
I mean, you yourself say it: we have no idea what consciousness even is, or whether it’s valuable at all. We’re just going on a bunch of arbitrary intuitions here. Well, that’s a bad standard. And it’s not like we’re running out of people; we can just create them as needed, indefinitely. So, given that

1. we have a lot of humans, so they aren’t a particularly limited resource, and
2. few if any people have super unique, amazing perspectives such that we really need to preserve that one person for extra time,

why not focus our energy on figuring out what we are, and decide the best course of action from there?
It’s only cruel if you’ve been brainwashed into thinking your life matters. If you never thought that, it’s just normal. Accept your place and die to make room for our descendants, whom we should work really hard to help turn out better than we are, and thus deserving of immortality should we ever actually attain it.
But then, if we’ve figured out how to make such amazing people, why not let there be lots of different ones, so they can flourish across many generations, instead of having just one generation forever? I mean, there isn’t even that much to do. Well, there probably is, I’m just too basic to understand it because of my limited natural brain.
I guess transhumanism overall is really cool, taken as a whole package. It’s just the life-extension part I find silly, especially if it’s a priority rather than an afterthought. But even if you want transhumanism, aren’t we far enough from it that the best path towards it is just raising better (smarter, more cooperative, etc) children? Seems like the biggest hindrance to scientific progress is just the state of the quality of human beings.
>Either too deep or I’m too dumb, didn’t quite get it. Please explain less poetically.
Poorly worded on my part. I just mean that it’s thanks to death that we get progress. The old die, and with them out of the way, the young can progress. Lots of fresh new perspectives are a feature of death.
To each paragraph:
Totally unfair comparison. Do you really think that immortality and utopia are frivolous goals? Maybe you don’t really believe in cryonics or something. Well, I don’t either. But transhumanism is way more than that. I think that its goals with AI and life extension are anything but a joke.
That’s reductive. As an altruist, I care about all other conscious beings. Of course maintaining sanity demands some distancing, but that’s that. So I’d say I’m a collectivist. But one person doesn’t substitute for another. Others continuing to live will never make up for those who die. The act of ceasing to exist is of the utmost cruelty, and there’s nothing that can compensate for that.
I have no idea what consciousness is scientifically, but morally I’m pretty sure it is valuable. All morality comes from seeking the well-being of conscious beings. So if there’s any value system, consciousness must be at its center. There’s not much explaining needed here; it’s just that everyone wants to be well, and to be.
Like I said, every conscious being wants to exist. It’s just the way we’ve been programmed. All beings matter, myself included. I goddamn want to live; that is the basis of all wants and of all rights. Have I been brainwashed? Religions have been brainwashing people with the exact opposite for millennia, that death is OK, either because we go to heaven according to the West, or because we’ll reincarnate or we’re part of a whole according to the East. So, quite on the contrary, I think I have been de-brainwashed.
An unborn person isn’t a tragedy. A dead one is. So it’s much more important to care about the living than the unborn.
If most people are saying that AGI is decade(s) off, then we aren’t that far.
As for raising children as best as we can I think that’s just common sense.
I partly agree. It would be horrible if Genghis Khan or Hitler never died. But we could always put them in a really good prison. I just don’t wanna die and I think no minimally decent person deserves to, just so we can get rid of a few psychopaths.
Also we’re talking about immortality not now, but in a technological utopia, since only such could produce it. So the dynamics would be different.
As for fresh new perspectives, in this post I propose selective memory deletion with immortality. So that would contribute to that. Even then, getting fresh new perspectives is pretty good, but nowhere near being worth the ceasing of trillions of consciousnesses.
Individualism and altruism aren’t exclusive. I didn’t mean to imply you are selfish, just that your operating definition of self seems informed by a particular tradition.
Consider the perspective of liberal republicans of the 19th century who fought and died for their nation (because that’s where they decided, or were taught, to center their self). Each nation is completely unique and irreplaceable, so we must fight to keep nations thriving and alive, and prevent their extinction. Dying for patriotism is glorious, honorable, etc.
But that’s my point, consciousness will go on just fine without either of us specifically being here. Ending one conscious experience from one body so that a different one can happen seems fine to me, for the most part. I dunno the philosophical implications of this, just thinking.
Yeah, it’s exciting for sure.
I’m 30 as well, so I’ll be near death in the decades that likely begin to birth AGI. But it would likely be able to fathom things unfathomable to us, who knows. History beyond that point is a black hole for me. It’s all basilisks and space jam past 2050 as far as I’m concerned :)
Well, I guess that’s it, huh? I don’t think so, but clearly a lot of people do. Btw I’m new to this community, so sorry if I’m uninformed on issues that are well hashed out here. What a fun place, though.
I can see the altruism in dying for a cause. But it’s a leap of faith to claim, from there, that there’s altruism in dying by itself. To die why, to make room for others to get born? Unborn beings don’t exist, they are not moral patients. It would be perfectly fine if no one else was born from now on—in fact it would be better than even 1 single person dying.
Furthermore, if we’re trying to create a technologically mature society capable of discovering immortality, it will perhaps much sooner be capable of colonizing other planets. So there are trillions of empty planets to put all the new people on before we have to start taking out the old ones.
To die to make room for others just doesn’t make any sense.
“consciousness will go on just fine without either of us specifically being here”
It sure will. But that’s like saying that money will go on just fine if you go bankrupt. I mean, sure, the world will still be full of wealth, but that won’t make you any less poor. Now imagine this happening to everyone inevitably. Sounds really sh*tty to me.
“Btw I’m new to this community,”
Welcome then!
Well, okay, but why? Why don’t tomorrow people matter at all? Is there a real moral normativity that dictates this, or are we just saying our feelings to each other? I don’t mean that condescendingly, just trying to understand where you’re coming from when you make this claim.
But I’m arguing for something different from altruism. I go further by saying that the approach to constructing a sense of self differs substantively between people, cultures, etc. Someone who dies for their nation might not be altruistic per se, if they have located their identity primarily in the nation. In other words, they are being selfish, not as their person, but as their nation.
Does that make sense?
Granted, your point about interstellar travel makes all of this irrelevant. But I’m much more cynical about humanity’s future. Or at least, the future of the humans I actually see around me. Technology here is so behind. Growing your own food as a rational way to supplement income is common, education ends for most people at age 12, the vast majority don’t have hot water, AC, etc. Smartphones are ubiquitous though.
Immortal lords from Facebook deciding how many rations of maize I’ll receive for the upvotes I earned today. Like, of course the Facebook lord will think he’s building Utopia. But from here, will it look much better than the utopia that the church and aristocracy collaborated to build in medieval Europe?
I don’t look to the future with hope as often as I look to the past with envy. Though I do both, from time to time.
Tomorrow people matter, in terms of leaving them a place in minimally decent conditions. That’s why when you die for a cause, you’re also dying so that tomorrow people can die less and suffer less. But in fact you’re not dying for unborn people—you’re dying for living ones from the future.
But to die to make room for others is simply to die for unborn people. Because them never being born is no tragedy—they never existed, so they never missed anything. But living people actually dying is a tragedy.
And I’m not denying that giving life is a great gift. Or rather, it could be a great gift, if this world were at least acceptable, which it’s far from being. It’s just that not giving it doesn’t hold any negative value; it’s just neutral instead of positive. Whereas taking a life does hold negative value.
It’s as simple as that.
I appreciate all these ideas to ponder (my favorite pastime). Is the desire to live forever the ultimate manifestation of FOMO? Also, deciding what to have for dinner, day after day into infinity, would be a challenge.
Soylent on most days. Sometimes something different, if you’re in the mood for a nice meal.
At least this problem with immortality is already solved.
More like COMO: Certainty of Missing Out.
“Death is the most terrible of all things; for it is the end, and nothing is thought to be any longer either good or bad for the dead.” — Aristotle, Nicomachean Ethics