“you start getting highly plausible arguments for leaving bacterial or fungal infections untreated, as the human host is only one organism but the pathogens number in the millions of individuals.” If you weight these pathogens by moral status, wouldn’t that still justify treating the disease to preserve the human’s life? (If the human has more than a million times as much moral status as a bacterium, which seems likely)
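To spell out the arithmetic behind that parenthetical (a sketch under simple aggregation, with illustrative numbers rather than claims about actual moral weights): write $w_h$ for the human’s moral weight, $w_b$ for a single bacterium’s, and $N$ for the number of pathogens the treatment kills. Treating each life as all-or-nothing, treatment is net positive whenever

$$w_h > N \cdot w_b,$$

so with $N$ in the millions, any moral-status ratio $w_h / w_b$ comfortably above a million still favors saving the host.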
I agree that it’s unlikely that no humans will care about animal welfare in the future. I just used that as a thought experiment to demonstrate a claim that I think has a lot going for it: that when we’re counting benefits, we should count them directly for all beings with moral status, not just indirectly via the humans who care about those beings.
“(If the human has more than a million times as much moral status as a bacterium, which seems likely)”
Apologies in advance if this sounds rude; I genuinely want to avoid guessing here: What qualifies the human for higher moral status, and how much of whatever-that-is does AI have? Are we in vibes territory when quantifying such things, or is there a specific definition of moral status that captures the “human life > bacterial life” intuition? Does it hold up through the middle of the range, where we privilege pets and cattle above what they eat, but below ourselves?
Maybe I’m just not thinking hard enough about it, but at the moment, every rationale I can come up with for why humans are special breaks in one of two ways:
1. If we test for something too abstract, AI has more of it, or at least AI would score better on tests for it than we would; or
2. If we test for something too concrete (humans are special because we have the DNA we currently do! humans are special because we have the culture we currently do! etc.), we exclude prospective distant descendants of ourselves (say, 100k years from now) whom we’d actually want to define as morally privileged in the same ways that we are.