The tone of strong desirability for progress on WBE in this post was surprising to me. The author seems to treat progress in WBE as a highly desirable thing, a perspective I expect most on LW do not endorse.
The lack of progress here may be quite a good thing.
Like many other people here, I strongly desire to avoid death. Mind uploading is an efficient way to prevent many causes of death, as it could make a mind practically indestructible (thanks to backups, distributed computing, etc.). WBE is a path towards mind uploading, and thus is desirable too.
Mind uploading could help mitigate OR increase the AI X-risk, depending on circumstances and implementation details. And the benefits of uploading as a mitigation tool seem to greatly outweigh the risks.
The most preferable future for me is one where mind uploading is ubiquitous while X-risk is avoided.
Although unlikely, it is still possible that mind uploading will emerge sooner than AGI. Such a future is much more desirable than the future without mind uploading (some possible scenarios).
This really depends on whether you believe a mind-upload retains the same conscious agent from the original brain. If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE. The delay between solving WBE and the hard problem of consciousness is so vast in my opinion that being excited for mind-uploading when WBE progress is made is like being excited for self-propelled cars after making progress in developing horse-drawn wagons. In both cases, little progress has been made on the most significant component of the desired thing.
If it did, we would need to solve the hard problem of consciousness, which seems significantly harder than just WBE.
Doesn’t WBE involve the easy rather than hard problem of consciousness? You don’t need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.
You don’t need to solve why anything is conscious in the first place, because you can just take it as a given that human brains are conscious and re-implement the computational and biological mechanisms that are relevant for their consciousness.
I’m pretty sure the problem with this is that we don’t know what it is about the human brain that gives rise to consciousness, and therefore we don’t know whether we are actually emulating the consciousness-generating thing when we do WBE. Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we’ve just created a philosophical zombie. To find out whether our emulation is sufficient to produce consciousness, we would need to find out what X is and how to emulate it. I’m pretty sure this is exactly the hard problem of consciousness.
Even if biological computation is sufficient for generating consciousness, we will have no way of knowing until we solve the hard problem of consciousness.
Human conscious experience could be the biological computation of neurons + X. We might be able to emulate biological computation perfectly, but if X is necessary for conscious experience then we’ve just created a philosophical zombie.
David Chalmers had a pretty convincing (to me) argument for why it feels very implausible that an upload with identical behavior and functional organization to the biological brain wouldn’t be conscious (the central argument starts from the subheading “3 Fading Qualia”): http://consc.net/papers/qualia.html
What a great read! I suppose I’m not convinced that Fading Qualia is an empirical impossibility, and therefore that there exists a moment of Suddenly Disappearing Qualia when the last neuron is replaced with a silicon chip. If consciousness is quantized (just like other things in the universe), then there is nothing wrong in principle with Suddenly Disappearing Qualia when a single quantum of qualia is removed from a system with no other qualia, just as removing the last photon leaves a vacuum.
Joe is an interesting character whom Chalmers thinks is implausible, but aside from the scenario rubbing up against a faint intuition, I have no reason to believe that Joe couldn’t be experiencing Fading Qualia. There is no indication that the workings of consciousness should obey any intuitions we may have about them.
There is no indication that the workings of consciousness should obey any intuitions we may have about them.
The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It’s likelier that parts like sensible cognition, qualia, and the subjective feeling of consciousness are coupled and need each other to work than that they are somehow intrinsically disconnected and cognition could go on as usual, without subjective consciousness, using anything close to the same architecture. If that were the case, we’d have the additional questions of how consciousness evolved to be a part of the system to begin with and why it hasn’t evolved out of living biological humans.
I agree with you, though I personally wouldn’t classify this as purely an intuition, since it is informed by reasoning that is itself grounded in scientific knowledge about the world. Chalmers doesn’t think that Joe could exist because it doesn’t seem right to him. You believe your statement because you know some scientific truths about how things in our world come to be (i.e. natural selection) and use this knowledge to reason about other things that exist in the world (consciousness), not merely because the assertion seems right to you.
The brain is changing over time. It is likely that there is not a single atom in your 2021-brain that was present in your 2011-brain.
If you agree that the natural replacements haven’t killed you (2011-you and 2021-you are the same conscious agent), then it’s possible to transfer your mind to a machine in a similar manner, because you’ve already survived a mind upload into a new brain.
Gradual mind uploading (e.g. by gradually replacing neurons with emulated replicas) circumvents the philosophical problems attributed to non-gradual methods.
Personally, although I prefer gradual uploading, I would agree to a non-gradual method too, as I don’t see the philosophical problems as important. As per Newton’s Flaming Laser Sword:
if a question, even in principle, can’t be resolved by an experiment, then it is not worth considering.
If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me.
The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.
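A minimal toy sketch (my own, not from the thread) of that inputs→outputs claim: two implementations that differ internally but compute the same mapping are interchangeable for any caller that only observes inputs and outputs.

```python
# Toy sketch of the "same inputs -> same outputs" replaceability claim.
# The two functions below are implemented very differently ("different substrates"),
# yet compute the same mapping, so a caller that only sees outputs cannot tell them apart.

def sort_by_swapping(xs: list) -> list:
    """A hand-rolled selection-style sort."""
    ys = list(xs)
    for i in range(len(ys)):
        for j in range(i + 1, len(ys)):
            if ys[j] < ys[i]:
                ys[i], ys[j] = ys[j], ys[i]
    return ys

def sort_by_builtin(xs: list) -> list:
    """The same mapping, delegated to Python's built-in sort."""
    return sorted(xs)

data = [3, 1, 2, 2]
assert sort_by_swapping(data) == sort_by_builtin(data) == [1, 2, 2, 3]
```

Whether this kind of behavioral equivalence is enough for personal identity is, of course, exactly what the replies below dispute.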
As Dennett put it, everyone is a philosophical zombie.
There are a lot of interesting points here, but I disagree (or am hesitant to agree) with most of them.
If you agree that the natural replacements haven’t killed you (2011-you and 2021-you are the same conscious agent), then it’s possible to transfer your mind to a machine in a similar manner, because you’ve already survived a mind upload into a new brain.
Of course, I’m not disputing whether mind-uploading is theoretically possible. It seems likely that it is, although it will probably be extremely complex. There’s something to be said about the substrate independence of computation and, separately, consciousness. No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.
If a machine behaves like me, it is me. Whether we share some unmeasurable sameness is of no importance to me.
The brain is but a computing device. You give it inputs, and it returns outputs. There is nothing beyond that. For all practical purposes, if two devices have the same inputs→outputs mapping, you can replace one of them with another.
These statements are ringing some loud alarm bells for me. It seems that you are rejecting consciousness itself. I suppose you could do that, but I don’t think any reasonable person would agree with you. To truly gauge whether you believe you are conscious or not, ask yourself, “have I ever experienced pain?” If you believe the answer to that is “yes,” then at least you should be convinced that you are conscious.
What you are suggesting at the end there is that WBE = mind uploading. I’m not sure many people would agree with that assertion.
No, my brain today does not contain the same atoms as my brain from ten years ago. However, certain properties of the atoms (including the states of their constituent parts) may be conserved such as spin, charge, entanglement, or some yet undiscovered state of matter. So long as we are unaware of the constraints on these properties that are necessary for consciousness (or even whether these properties are relevant to consciousness), we cannot know with certainty that we have uploaded a conscious mind.
Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?
It seems to me that this can’t be verified by any experiment, and thus must be cut off by Newton’s Flaming Laser Sword.
It seems that you are rejecting consciousness itself.
As far as I know, it is impossible to experimentally verify whether some entity possesses consciousness (partly because of how fuzzy its definitions are). This is a strong indicator that consciousness is one of those abstractions that don’t correspond to any real phenomenon.
“have I ever experienced pain?”
If certain kinds of damage are inflicted upon my body, my brain generates an output typical for a human in pain. The reaction can be experimentally verified. It also has a reasonable biological explanation, and a clear mechanism of functioning. Thus, I have no doubt that pain does exist, and I’ve experienced it.
I can’t say the same about any introspection-based observations that can’t be experimentally verified. The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.
Can we know with certainty that the same properties were preserved between 2011-brain and 2021-brain?
No, we cannot, just as we cannot know with certainty whether a mind-upload is conscious. That we presume our 2021 brain is a conscious agent related to our 2011 brain, even though we cannot verify the properties that enabled the conscious connection between the two, does not mean that those properties do not exist.
It seems to me that this can’t be verified by any experiment, and thus must be cut off by Newton’s Flaming Laser Sword.
Perhaps we presently have no way of testing whether some matter is conscious or not. This is not equivalent to saying that, in principle, the conscious state of some matter cannot be tested. We may one day make progress toward the hard problem of consciousness and be able to perform these experiments. Imagine making this argument throughout history before microscopes, telescopes, and hadron colliders. We can now sheathe Newton’s Flaming Laser Sword.
I can’t say the same about any introspection-based observations that can’t be experimentally verified.
I believe this hinges on an epistemic question about whether we can have knowledge of anything using our observations alone. I think even a skeptic would say that she has consciousness, as the fact that one is conscious may be the only thing that one can know with certainty about oneself. You don’t need to verify any specific introspective observation. The act of introspection itself should be enough for someone to verify that they are conscious.
The human brain is a notoriously unreliable computing device which is known to produce many falsehoods about the world and (especially!) about itself.
This claim refers to the human brain’s reliability in verifying the truth value of certain propositions or identifying specific, individuable experiences. Knowing whether oneself is conscious is not strictly a matter of verifying a proposition, nor of identifying an individuable experience. It’s only about verifying whether one has any experience whatsoever, which should be possible. Whether I believe your claim to consciousness or not is a different problem.
The lack of progress here may be quite a good thing.
Did I miss some subtle cultural changes at LW?
I know that rationalism and AI safety have been founding principles of LW from the start. But in my mind, LW has always hosted all sorts of adjacent topics and conversations, with many different perspectives. Or at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Did these discussions become more and more focused on AI safety and de-risking over time?
I’m not a regular reader of LW, so any explanation would be greatly appreciated.
While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.
It’s not just an AI-safety risk; it’s also an S-risk in its own right.
While discussing a new powerful tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right.
What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.
At least from the orthodox QALY perspective on “weighing lives”, the benefits of WBE don’t outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are less than the resources required for the glorious version.
The benefits of eventually developing WBE do outweigh the X-risks, if we assume that
human lives are the only ones that count,
WBE’d humans still count as humans, and
WBE is much more resource-efficient than anything else that future society could do to support human life.
However, orthodox QALY reasoning of this kind can’t justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.
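A purely illustrative sketch of the resource-asymmetry claim above; every number below is a made-up assumption, not a figure from the comment.

```python
# Hypothetical numbers only, to make the resource-asymmetry point concrete.
# Assumption: a "suffering" emulation can be run more cheaply (minimal fidelity,
# no rich environment) than a "glorious" one. None of these figures come from the thread.

RESOURCE_BUDGET = 1e12            # arbitrary units of compute/energy
COST_PER_SUFFERING_LIFE = 1.0     # assumed cost to run one suffering life
COST_PER_FLOURISHING_LIFE = 10.0  # assumed cost to run one flourishing life

lives_if_s_risk = RESOURCE_BUDGET / COST_PER_SUFFERING_LIFE
lives_if_glorious = RESOURCE_BUDGET / COST_PER_FLOURISHING_LIFE

print(f"Suffering lives supportable:   {lives_if_s_risk:.0e}")
print(f"Flourishing lives supportable: {lives_if_glorious:.0e}")
# Under these assumptions the same budget supports 10x as many suffering lives,
# which is why a naive QALY-style weighing need not favor the upside scenario.
```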
As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems.
There is a popular idea that some very large amount of suffering is worse than death. I don’t subscribe to it. If I’m tortured for X billion years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death, because (by definition) it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering.
In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.
I also don’t see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead cannot. Bad feelings are vastly less important than saved lives.
It’s a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving.
orthodox QALY reasoning of this kind can’t justify developing WBE soon (rather than, say, after a Long Reflection)
There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means there is an equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, it’s half a billion. In 1000 years, it’s 45 billion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity.
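A quick sanity check of the arithmetic, taking the comment’s own ~45 million deaths/year estimate as given:

```python
# Rough cumulative-death arithmetic, using the comment's assumed rate of ~45 million deaths/year.
DEATHS_PER_YEAR = 45_000_000

for years in (1, 10, 1000):
    print(f"{years:>4} years -> {DEATHS_PER_YEAR * years:,} deaths")
# Expected output:
#    1 years -> 45,000,000
#   10 years -> 450,000,000        (~half a billion)
# 1000 years -> 45,000,000,000     (~45 billion)
```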
I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.
This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I’m baffled. You really mean this?
There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility.
The association is so deeply ingrained that it is routinely used by most people who have to speak about death, e.g. “rest in peace”, “put to sleep”, “he is in a better place now”, etc.
The association is harmful.
The association suggests that death could be a valid solution to pain, which is deeply wrong.
It’s the same wrongness as suggesting that we kill a child to make the child less sad.
Technically, the child will not experience sadness anymore. But infanticide is not a sane person’s solution to sadness.
The sane solution is to find a way to make the child less sad (without killing them!).
The sane solution to suffering is to reduce suffering. Without killing the sufferer.
For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer and use effective painkillers during the process. If there is no cure, then utilize cryonics to transport them into the future where such a cure becomes available. Killing the patient because they’re in pain is a sub-optimal solution (to put it mildly).
I can’t imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable.
If one must choose between a permanent loss of human life and some temporary discomfort, it doesn’t make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.
(I agree wholeheartedly with almost everything you’ve said here, and have strong upvoted, but I want to make space for the fact that some people don’t make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves. Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed “right” choice to something Actually Better.)
Something is handicapping your ability to imagine what the “worst possible discomfort” would be.
The thing is: regardless of how bad the worst possible discomfort is, dying is still a rather stupid idea, even if you have to endure the discomfort for millions of years. Because if you live long enough, you can find a way to fix any discomfort.
I wrote in more detail about it here.
There is a popular idea that some very large amount of suffering is worse than death. I don’t subscribe to it. If I’m tortured for X billion years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death, because (by definition) it cannot be repaired.
This sweeps a large number of philosophical issues under the rug by assuming the conclusion (that death is the worst thing) and then using that conclusion to justify itself (death is the worst thing, and if you die, you’re stuck dead, so that’s the worst thing).
I predict that most (all?) ethical theories that assume that some amount of suffering is worse than death have internal inconsistencies.
My prediction is based on the following assumption:
permanent death is the only brain state that can’t be reversed, given sufficient tech and time
The non-reversibility is the key.
For example, if your goal is to maximize happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase happiness of the humans who suffered, but you can’t increase happiness of the humans who are non-reversibly dead.
If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease suffering of the humans who are suffering, but you can’t do that for the humans who are non-reversibly dead.
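A toy illustration of this reversibility argument, with made-up happiness rates and horizons (none of these numbers come from the comment):

```python
# Toy model with made-up numbers: reversible suffering vs. irreversible death
# under a "maximize total happiness over time" objective.

def totals(horizon_years: int,
           suffering_years: int = 1_000,
           suffering_rate: float = -10.0,   # assumed happiness per year while suffering
           happiness_rate: float = 1.0):    # assumed happiness per year after repair
    permadeath = 0.0  # no further happiness, ever; irreversible by definition
    survivor = (suffering_rate * min(suffering_years, horizon_years)
                + happiness_rate * max(horizon_years - suffering_years, 0))
    return {"permadeath": permadeath, "suffer_then_recover": survivor}

print(totals(horizon_years=5_000))      # {'permadeath': 0.0, 'suffer_then_recover': -6000.0}
print(totals(horizon_years=1_000_000))  # {'permadeath': 0.0, 'suffer_then_recover': 989000.0}
# With a long enough horizon the survivor always comes out ahead, which is the point
# being made here; with a short horizon (or no repair) the comparison can flip,
# which is the objection raised a few comments below.
```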
The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories.
Personally, I simply don’t want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death.
While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death), I think there’s an assumption in your argument that bears inspection. Namely, I believe you are maximizing happiness at a given instant in time: the present, the limit as time approaches infinity, etc. (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe and being truly immortal for eternity.)
A possible alternative optimization goal is to maximize human happiness summed over time. It seems possible we may never evade the heat death of the universe. In that case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow. At the very least, this metric is not helpful, because it cannot distinguish between any two states. So a different metric must be chosen. A reasonable substitute seems to me to be the integral of human happiness over time. The happy week you had last week is not canceled out by a mildly depressing day today, for instance; it still counts. Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I’ll grant this goes a little against my instincts).
If you DO assume infinite time, your argument may return to being automatically true, but I’m not sure that’s an assumption that should be confidently made. If you don’t assume infinite time, I think it matters again what precise value you put on death versus incredible suffering, and that may simply be a matter of opinion, of precise differences in two people’s terminal goals.
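For concreteness, the two objectives being contrasted can be written roughly as follows; the notation (h_i(t) for person i’s happiness at time t, horizon T) is mine, not the commenter’s:

```latex
% Terminal-state objective vs. time-integrated objective (notation assumed, not from the thread).
% h_i(t): happiness of person i at time t;  T: end of the horizon (possibly finite).
U_{\text{final}} = \sum_i h_i(T)
\qquad \text{vs.} \qquad
U_{\text{int}} = \sum_i \int_{0}^{T} h_i(t)\,\mathrm{d}t
```

Under the first objective nothing before T matters; under the second, a finite T makes the weighing of long suffering against permadeath depend on how much recovered time remains before the horizon, which is the comment’s point.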
(Side note: I’ve idly speculated about expanding the above optimization criteria for the case of all-possible-universes—I forget the exact train of thought, but it ended up more or less behaving in a manner such that you optimize the probability-weighted ratio of good outcomes to bad outcomes (summed across time, I guess). Needs more thought to become more rigorous etc.)
Our current understanding of physics (and of our future capabilities) is so limited that I assume our predictions on how the universe will behave trillions of years from now are worthless.
I think we can safely postpone the entire question until after we achieve a decent understanding of physics, after we have become much smarter, and after we can allow ourselves to invest thousands of years of deep thought on the topic.