This seems like it falls face-first, hands tied behind its back, right into the giant pit of the Repugnant Conclusion and all of its corollaries concerning sentience, intelligence, the ability to enjoy, and the ability to value.
For instance, suppose I’m a life-maximizer and I don’t care whether the life I create even has the ability to care about anything; it just lives, with no values or desires or anything remotely like what humans think of (whatever they do think of) when they think about “values” or “utility”. Does that still make me more altruistically ideal, and worthy of destroying all humanity?
What about intelligence? If the universe is filled to the Planck scale with life, but not a single being is intelligent enough to do anything more than merely be, is that simply not an issue? What about consciousness?
And, as is so troubling in the Repugnant Conclusion, what if the number of lives is inversely proportional to the maximum quality of each?
The point of the reference to paperclip-maximisers was that these values are just as alien to me as those of the paperclip-maximiser. “Putting up a fight against nature’s descent from order to chaos” is a bizarre terminal value.
Consciousness certainly is something it is possible to care about, and caring itself may be important. Some theories of consciousness imply a kind of panpsychism or panexperientialism, though.
I am not exactly talking about maximizing the number of lives, but about maximizing the utilization of free energy, where the utilization of energy is itself the goal (not a means to anything else)… I think.
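To pin down what I mean by “free energy” (reading it in the standard thermodynamic sense, which may or may not be exactly what was intended): it is the portion of a system’s energy available to do work, e.g. the Helmholtz form

$$F = U - TS$$

where $U$ is the internal energy, $T$ the temperature, and $S$ the entropy. Since $F$ shrinks as entropy grows, maximizing its utilization is roughly what “putting up a fight against nature’s descent from order to chaos” cashes out to.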