I have to wonder whether it’s more efficient to “fix” the less-sparkly and less-alive of us via uplift, or simply by recognizing that we’re made of atoms that could be put to better use.
The former is a special case of the latter (assuming by “better” you mean ‘better than the status quo’ rather than ‘better than fixing us via uplift’).
Granted. But it’s a special case with what some people consider a very important distinction—a conscious awareness is preserved instead of obliterated. Personal example:
In general, sharing my thoughts anywhere results in the local equivalent of downvoting. This has taught me two habits:
Constantly asking others whether my thoughts are appropriate (and various resulting meta-questions)
and
Constantly double-checking myself to see whether my thoughts are worth having.
Since, statistically, they AREN’T appropriate or worth having, it seems that my brain is simply a device for converting glucose into entropy. So why not shut it off and recycle it into something with actually useful output?
This argument can be extended to include many, many other people. Certainly, instinctive human morality generates a desire to preserve other human beings, but certainly not ALL of them. So why preserve the ones that aren’t going to substantially improve the preserver’s life?
In the case of human or human-equivalent preservers, the risks of incomplete information or corrupted thinking make any such evaluation—or even self-evaluation—exceptionally risky, for fairly minimal payoff. Atoms are cheap; human neurology is expensive. Presumably, any remotely friendly optimizing program will similarly want to preserve other minds: that is practically a tautology. Paving over the universe with smiling pictures of you is one of the go-to doomsday scenarios in this community.
At a practical level, the difference between modern human minds cannot possibly be that large. There isn't that much variation in humanity: humans are not only very nearly clones, we are inbred clones. The difference between Einstein and an average person resides entirely in the patterns within a kilogram of fatty meat, and likely in less than a tenth, a hundredth, or even a thousandth of that material. The difference between Einstein and someone else's component atoms involves vastly more entropy.
((There’s a deeper question of whether it’s that different to you as yourself whether we uplift you or recycle your atoms, but that has to do with matters of identity, continuity of experience, and whether you reject any form of metaphysical dualism, similar to the Transporter Problem.))
At a simpler level, you'd have to be less sympathetic to the Working Man than even Ayn Rand's characters in Atlas Shrugged—which, while not a strict test, still strikes me as a meaningful one.
Whose utility function do you aim to satisfy with this?