Rereading this, I’m sorry for dumping all of these objections on you at once (and especially if I made them sound obvious). I actually spent O(6 months) thinking about an ethical system along the lines of the one you propose, and tried a variety of ways to fix it, before regretfully abandoning it as unworkable.
On the non-equal-moral-weight version, see if you can find one that doesn’t give the AIs perverse incentives to mess with ecosystems. I couldn’t, but the closest I found combined species average adult mass (because biomass is roughly conserved), probability of reaching adulthood (r-strategy species are a nightmare otherwise), and average adult synapse count; there’s a sketch of the shape of this below. My advice: making anything logarithmic feels appealing but never seems to work.
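For concreteness, here’s roughly the shape of the thing. The multiplicative combination and all the numbers below are illustrative assumptions, not a formula I’m claiming works; it’s just the ingredient list made explicit:

```python
from dataclasses import dataclass

@dataclass
class Species:
    name: str
    avg_adult_mass_kg: float   # biomass is roughly conserved, so mass-weighting
                               # damps incentives to repartition an ecosystem
    p_reach_adulthood: float   # discounts r-strategists' enormous juvenile die-offs
    avg_adult_synapses: float  # crude proxy for per-individual cognitive capacity

def moral_weight(s: Species) -> float:
    # One hypothetical way to combine the three ingredients; multiplicative,
    # so total weight tracks total biomass when the other factors are held equal.
    return s.avg_adult_mass_kg * s.p_reach_adulthood * s.avg_adult_synapses

# Made-up illustrative numbers, not measured values:
mouse = Species("house mouse", 0.02, 0.3, 1e11)
human = Species("human", 62.0, 0.95, 1e14)
print(moral_weight(mouse), moral_weight(human))
```

Even with something like this, the incentive problems never fully went away; they just got harder to spot.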