First, with regards to the solution proposed by the Superhappies, my thought right at the start would have been this:
Accept IF they can ensure the following: for us, the change away from pain doesn't end up having indirect effects that screw up other aspects of our development. That risk, after all, is one of the primary reasons humanity might have been so cautious about such changes in the first place.
With regards to the business of changing us to more resemble the Babyeaters: can they simultaneously ensure that the eaten children will not, at any point, have been conscious? And can they ensure that the delayed consciousness (not merely self-awareness, but consciousness, period) doesn't negatively impact human development in other ways?
Further, can they ensure that making us Babyeater-like does NOT otherwise screw with our sympathy and compassion?
IF all of the above can truly be answered "yes", then (in my view) the price humanity would pay would not really be all that bad.
Of course, we then have to ask about the changes to the Babyeaters. Presumably, the ideal would be something like "delay the onset of consciousness until after the culling (and suppress it entirely, of course, for those that are eaten)", but in such a way that intelligence and learning are still present, and when a Babyeater becomes conscious, it can integrate the data and experience acquired while it was not conscious.
But a question arises, possibly a very important one: should the Superhappies firing on the Babyeater ship be considered evidence that the Superhappies are Prisoner's Dilemma defectors?
If yes, then how much can we trust the Superhappies to actually implement the solution they proposed, rather than do something entirely different? And THAT consideration would be perhaps the only one (that I can think of so far) in favor of seriously considering the "blow up a star to close down the paths leading to humanity's worlds" option (post-Babyeater-fix, perhaps).