This seems to fall face-first, hands tied behind its back, right into the giant pit of the Repugnant Conclusion and all of its attendant questions: sentience, intelligence, ability-to-enjoy, ability-to-value.
For instance, suppose I’m a life-maximizer and I don’t care whether the life I create even has the ability to care about anything: it just lives, with no values or desires or anything even remotely like what humans mean (whatever they do mean) when they talk about “values” or “utility”. Does that still make me more altruistically ideal, and worthy of destroying all humanity?
What about intelligence? If the universe is filled to the Planck scale with life, but not a single being is intelligent enough to do anything more than simply be, is that not an issue at all? What about consciousness?
And, as is so troubling in the Repugnant Conclusion, what if the number of lives is inversely proportional to the maximum quality of each?
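To make that worry concrete (just a back-of-the-envelope sketch, and it assumes the life-maximizer is even willing to score outcomes as lives times quality, which is itself contestable):

$$U = N \cdot q(N), \qquad q(N) = \frac{c}{N} \;\Rightarrow\; U = N \cdot \frac{c}{N} = c.$$

If quality really falls off inversely with the number of lives, the total never grows at all; multiplying lives just spreads a fixed amount of whatever-it-is ever thinner.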
The point of the reference to paperclip-maximizers was that these values are just as alien to me as those of the paperclip-maximizer. “Putting up a fight against nature’s descent from order to chaos” is a bizarre terminal value.
Consciousness certainly is something it is possible to care about, and caring itself may be important. Some theories of consciousness imply a kind of panpsychism or panexperientialism, though.
I am not exactly talking about maximizing the number of lives, but about maximizing the utilization of free energy, purely for the sake of utilizing energy (not for anything else)… I think.
Instead of paperclips, the life-maximizer would probably fill the universe with some simple thing that qualifies as life. Maybe it would be a bacteria-maximizer: some fractal-shaped bacterium, or many different kinds of bacteria, depending on how exactly its goal ends up being specified.
Is this the best “a real altruist” can hope for?
If UFAI were inevitable, I would hope for some approximation of an FAI. If no decent approximation is possible, I would wish for a UFAI that is likely to destroy itself. Among different kinds of smart paperclip-maximizers… uhm, I guess I prefer red paperclips aesthetically, but that’s really not so important.
My legacy is important only if there is someone able to enjoy it.