Here is my attempt to convince you of 1 as well (in your numbering):
I disagree with your claim: “From a preference utilitarian perspective, only a self-conscious being can have preferences for the future, therefore you can only violate the preferences of a self-conscious being by killing it.”
To the contrary, every agent which follows an optimization goal exhibits some preference (even if it does not itself understand it), namely that its optimization goal be reached. The ability to understand one’s own optimization goal is not necessary for a preference to be morally relevant; otherwise babies and even unconscious people would not have moral weight. (And even people who are awake don’t understand all their optimization goals.)
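To make the “optimizing behavior reveals a preference” point concrete, here is a toy sketch (my own illustration, not anything from the quoted discussion): a hill-climbing process that has no representation of its goal, yet whose behavior consistently favors states in which the goal is better satisfied.

```python
# Toy illustration: an "agent" that optimizes a goal it has no model of.
# Its behavior nonetheless reveals a preference for higher goal values.
import random

def goal(x: float) -> float:
    # The optimization target. The agent never "understands" this function;
    # it only reacts to the feedback the function produces.
    return -(x - 3.0) ** 2

def hill_climb(steps: int = 1000) -> float:
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if goal(candidate) > goal(x):  # revealed preference: keep whatever scores higher
            x = candidate
    return x

if __name__ == "__main__":
    print(hill_climb())  # converges near 3.0 without any self-model
```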
This leaves the problem of how to weight the various agents. A solution which gives equal weight “per agent” has ugly consequences (we should all immediately take immunosuppressants to save the bacteria) and is ill-defined, because many systems allow multiple ways to count “agents” (does each cell get equal weight? each organ? each human? each family? each company? each species? each gene allele?).
A decent solution seems to be to take the computing power (alternatively: the ability to reach its optimization goals) of the system exhibiting optimizing behavior as a “weight” (if only for game-theoretic reasons; it certainly makes sense to strongly value the preferences of extremely powerful optimizers). Unfortunately, there is no clear scale of “computing power” one can calculate with. Extrapolating from intuition gives us a trivial weight for bacteria’s goals and a weight near our own for the goals of other humans. In the concrete context of killing animals to obtain meat, it should be observed that animals are generally rather capable of reaching their goals in the wild (e.g. getting food, producing offspring), better than human children, I’d say.
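For concreteness, here is a minimal sketch of what such a weighting could look like as a calculation, assuming we already had (a) some numeric proxy for each agent’s “computing power” and (b) per-agent utilities over the available actions. Every name and number below is invented purely for illustration; nothing hinges on the exact values.

```python
# Minimal sketch: aggregate preferences with "computing power" as the weight.
from typing import Dict

def weighted_choice(utilities: Dict[str, Dict[str, float]],
                    weights: Dict[str, float]) -> str:
    """Pick the action with the highest computing-power-weighted utility sum.
    Assumes every agent rates the same set of actions."""
    actions = next(iter(utilities.values())).keys()
    def score(action: str) -> float:
        return sum(weights[agent] * utilities[agent][action] for agent in utilities)
    return max(actions, key=score)

# Invented numbers: a human, a chicken, and a bacterium "vote" on whether
# the chicken gets eaten. With these values the chicken's strong preference
# outweighs the human's mild one; the point is only the mechanism.
utilities = {
    "human":     {"eat": 1.0,  "spare": 0.0},
    "chicken":   {"eat": -1.0, "spare": 1.0},
    "bacterium": {"eat": 0.1,  "spare": 0.0},
}
weights = {"human": 1.0, "chicken": 0.8, "bacterium": 1e-9}

print(weighted_choice(utilities, weights))  # -> "spare"
```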