Voted you down. This is deontologist thought in transhumanist wrapping paper.
Setting aside the debate over the merits of eternal paradise itself, and the question of whether Heaven exists at all, I would like to question the assumption that every soul is worth preserving for posterity.
Consider those who have demonstrated through their actions that they are best kept excluded from society at large; John Wayne Gacy and Jeffrey Dahmer are prime examples. Many people write these villains off as evil and give their condition no further thought. But it is quite possible that they suffer from some sort of Satanic corruption and are thus not fully responsible for their crimes. Indeed, there is evidence that the souls of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to “cure” them. Even cured, however, they would still possess toxic memories and thoughts that would greatly distress them once they were normal. To truly save them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even the same person? And if not, what was the point of saving them?
Forming a robust theory of mind, and realizing that not everyone thinks or sees the world the way you do, is genuinely difficult. Consider the immense complexity of the world we live in, and the staggering range of thoughts that can be thought as a result. If eternal salvation means first and foremost soul preservation, perhaps there are some souls that simply should not be saved. Perhaps Heaven would be a better, happier place without certain thoughts, feelings, and memories, and without the minds that harbor them.
Why should that be an interesting question? (What’s “transhumanism”, again?) What matters is whether this allows you to find correct decisions, perhaps whether it’s a useful sense of “correct” to rely on when you have something to protect.
Voted you down. This is deontologist thought in transhumanist wrapping paper.
Sure sounds like consequentialism to me.
Is consequentialism an essential part of transhumanism?
No.
Why should that be an interesting question? (What’s “transhumanism”, again?) What matters is whether this allows you to find correct decisions, perhaps whether it’s a useful sense of “correct” to rely on when you have something to protect.
It seemed relevant to the parent’s objection to the original article.