I agree. I think for me, the intuition mostly stems from neuron count. I also agree with the authors of the sequence that neuron counts are not an ideal metric. What confuses me is that these estimates instead seem to simply take biological "individuals" as the basic unit of moral weight and then adjust with uncertainty from there. That seems even more misguided to me than neuron count.
Bees and ants are hacking the "individual" metric just by having small brains spread over lots of individual bees/ants. Beehive > Human seems absurd.
I agree with the first sentence, but not the second. (Although I am not sure what the second is supposed to imply. I can imagine being convinced that shrimp are more important, but the bar for evidence is pretty high.)
I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always. If morality requires I sacrifice things I actually care about for carp, I would discard morality with no hesitation.
I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always.
What remains? I think this is basically what I usually think of as my own Morality_Morpheus (it just happens to contain a term for everyone else's). Are you sacrificing what other people think 'the right thing to do' is? What future-you will think the right thing to do would have been? What future uplifted Koi would think the right thing to do had been?
If your moral theory gives humanity less moral worth than carp, so much the worse for your moral theory.
If morality as a concept irrefutably proves it, then so much the worse for morality.
I will add that, even setting humans aside, the remaining comparisons still seem quite bonkers to me. 1 carp ~ 1 bee sounds really strange.