Such is the hideously unfair world we live in, which I do hope to fix.
Speaking as a non-sparkly, not-very-alive person, who will probably never get to talk to anyone who is particularly sparkly or alive except as a disembodied sequence of characters on a rationalist blog, I have to wonder whether it’s more efficient to “fix” the less-sparkly and less-alive of us via uplift, or simply by recognizing that we’re made of atoms that could be put to better use.
The former is a special case of the latter (assuming by “better” you mean ‘better than the status quo’ rather than ‘better than fixing us via uplift’).
Granted. But it’s a special case with what some people consider a very important distinction—a conscious awareness is preserved instead of obliterated. Personal example:
In general, sharing my thoughts anywhere results in the local equivalent of downvoting. This has taught me two habits:
Constantly asking others whether my thoughts are appropriate (and various resulting meta-questions)
and
Constantly double-checking myself to see whether my thoughts are worth having.
Since, statistically, they AREN’T appropriate or worth having, it seems that my brain is simply a device for converting glucose into entropy. So why not shut it off and recycle it into something with actually useful output?
This argument can be extended to include many, many other people. Certainly, instinctive human morality generates a desire to preserve other human beings, but certainly not ALL of them. So why preserve the ones that aren’t going to substantially improve the preserver’s life?
In the human case of human- or human-equivalent preservers, the risks of incomplete information or corrupted thinking make any such evaluation—or even self-evaluation—exceptionally risky, for fairly minimal payoff. Atoms are cheap; human neurology is expensive. Presumably, any remotely friendly optimizing program will similarly want to preserve other minds: that is nearly tautological. Paving over the universe with smiling pictures of you is one of the go-to doomsday scenarios in this community.
At a practical level, the differences between modern human minds cannot possibly be that large. There isn’t that much variation in humanity: humans are not only very nearly clones, they are inbred clones. The difference between Einstein and an average person resides entirely in the patterns within a kilogram of fatty meat, and likely in less than a tenth, a hundredth, or even a thousandth of that material. The difference between Einstein and someone else’s component atoms involves vastly more entropy.
((There’s a deeper question of whether it’s that different to you as yourself whether we uplift you or recycle your atoms, but that has to do with matters of identity, continuity of experience, and whether you reject any form of metaphysical dualism, similar to the Transporter Problem.))
At a simpler level, you’d have to be less sympathetic to the Working Man than even Ayn Rand’s characters in Atlas Shrugged, which, while not a strict test, still strikes me as a meaningful one.
Whose utility function do you aim to satisfy with this?
How many individuals are you aware of about whom you are confident that there are no better uses to which their atoms could be put?
I don’t know that I’m qualified to make that call. I think I’d rather defer to people with more and better-optimized processing capacity. But… it often seems that everything worth preserving about humans, is really only worth preserving in some humans, and the rest of us are really just redundant expressions of the same traits.
I assure you, I don’t intend to implement a culling strategy based on your answer. I’m just curious about your answer.
Very well. In that case, I’d like to note that “confident that there are no better uses to which their atoms could be put” seems, to me, to be a very non-Bayesian way of looking at things.
I’d rather say that I know many people for whom I have weak prior weight (0.55–0.7) towards the idea that they would be better off recycled, and I know a smaller number of people for whom I have reasonably strong prior weight (0.8–0.9) that they have contributed causally to changes in the local universe that I consider positive, and that any other configuration of atoms that might have contributed to an equally positive change would take longer to search for in configuration space than simply letting them continue to exist as-is.
That’s fine.
I adopted the “no better uses” formulation because you initially seemed to be contrasting them with the less-sparkly and less-alive of us, who you seemed confident are made of atoms that could be put to better use. I was trying to stay consistent with that usage; I’m not committed to it.
So, rephrasing my question in the terms you use here: How many individuals are you aware of for whom you have a reasonably strong prior weight that any other configuration of atoms that will contribute to equally positive changes in the future as their current configuration would take longer to search for in configuration space than simply letting them continue to exist as-is?
Heh. MAN, English sucks for this.
I’d say a few hundred that I’m directly aware of (either through direct acquaintance or media awareness); given my sample sizes and some back-of-the-envelope math, I can extrapolate that out globally to “a few million people”.
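The extrapolation here is simple proportional scaling. The actual sample sizes aren’t given in the thread, so the figures below are hypothetical placeholders chosen only so the result lands in the stated “a few million” range:

```python
# Back-of-the-envelope sketch of the extrapolation described above.
# All inputs are hypothetical placeholders, not the commenter's actual numbers.
sample_qualifying = 300          # "a few hundred" who meet the criterion
sample_aware_of = 500_000        # assumed pool: direct acquaintance + media awareness
world_population = 8_000_000_000

# Scale the observed rate in the sample up to the global population.
rate = sample_qualifying / sample_aware_of
global_estimate = rate * world_population
print(f"{global_estimate:,.0f}")  # on the order of a few million
```

With these placeholder inputs the estimate comes out to 4.8 million; the real answer obviously depends entirely on the assumed size of the awareness pool.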
(nods) kk, thanks