For anyone who thinks both that 1) chickens have non-zero moral value, and 2) moral value is linearly additive: are you willing to bite the bullet that there exists some number of chickens such that it would be better to cause that many chickens to continue to exist at the expense of wiping out all other sentient life forever? This seems so obviously false, and so obviously the first thing to think of when considering 1 and 2, that I am confused that there are folks who accept both.
My position is “chickens have non-zero moral value, and moral value is not linearly additive.” That is, any additional chicken suffering is bad, any additional chicken having a pleasant life is good, and the total moral value of all chickens, as the number of chickens approaches infinity, converges to something like one third of a human.
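One way to make the “not linearly additive” claim concrete, purely as an illustration I am supplying rather than anything stated in the comment, is a convergent series of marginal weights:

$$V(n) = \sum_{k=1}^{n} \frac{1}{3}\,2^{-k} = \frac{1}{3}\left(1 - 2^{-n}\right), \qquad \lim_{n \to \infty} V(n) = \frac{1}{3}.$$

Each additional chicken contributes strictly positive value, yet the total over all chickens never exceeds one third of a human, so no number of chickens outweighs all other sentient life.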
I accept 1, and 2 only in a limited sense. I suspect that moral weight is very close to linear for small changes, the kind that 99% of historical humanity (and maybe 98% of future humanity) experiences, but diverges greatly at the extremes. So “shut up and multiply” works just fine for individual human-scale decisions, and linear calculations do very well in daily life. But I don’t accept any craziness from bizarre thought experiments.
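A toy valuation with this shape, again my illustration and not anything given in the comment, is a function that is approximately linear near zero and saturates at the extremes:

$$W(x) = \tanh(x), \qquad W(x) \approx x \ \text{for } |x| \ll 1, \qquad |W(x)| < 1 \ \text{for all } x.$$

Under a valuation like this, linear “shut up and multiply” reasoning is an excellent local approximation for human-scale stakes, while astronomically large inputs from bizarre thought experiments contribute almost nothing at the margin.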