Taking a thing as assumed and then dwelling on the question of “why believe it?” seems contradictory.
You are implicitly assuming that the reason we think humans are conscious has something to do with evolution. The steps are not explicit, so the argument is hard to vet for errors.
You can deduce that “great capabilities” will tend to be preserved going forward, but this cannot be used for backwards chaining. “Which came first, the chicken or the egg?” can have the answer “the egg came first, from a non-chicken,” because “eggs come from chickens” is not unconditionally true. From the child being conscious we cannot necessarily deduce that some of the parents had the property.
If you can trace why you believe that other humans are conscious, you can try to aim that process at animals and see whether there is a blip. If you can’t pinpoint why you believe so, assuming that others will share that belief is not that safe.
One could also approach the problem another way. Assume that bacteria are non-conscious. Then adding a “mere technical ability,” like a shell made of a different chemical, doesn’t grant the offspring consciousness. Features like “has an opposable thumb” also don’t feel like they would make the difference in consciousness. If all the added features are of this nature, you can go all the way up to humans. But humans seem to be conscious, and we have arrived at a contradiction. You can make a similar argument starting with weak AI and adding features until you end up at emulated brains: if none of the steps can grant consciousness, then surely there is no reason the end result has it. If these chains work differently, what is the relevant difference that makes one go through and the other not?
Biting the bullet and saying that bacterial suffering matters is not that terrible.
If there were a button that would kill all the bacteria and somebody pushed it, would that be a morally neutral act? One might fight this hypothetical by saying that killing all bacteria also means killing quite a lot of humans, so isolate it somehow. Say there is an alien planet whose native life is no more complex than bacteria. If somebody nuked that planet and said, “Do not worry, bacteria are not conscious, so this is not a shady act,” would that be a sufficient reason to stop pondering the ethics of it?
My gut feeling is that nuking a planet for no reason is quite shady on its own, even if we take for granted that no life forms whatsoever are present on that planet. Of course, now we could ask “what about a smaller planet?”, starting a new chain that will terminate with the conclusion that destroying a pebble for no reason is still shady.
Thinking about it, I surely don’t care about pebbles, but the idea of someone enthusiastically and violently smashing pebbles still leaves me with a vague sense of wrongness (probably it’s just the idea of smashing things for no reason that feels shady).