I have a few more objections I didn’t cover in my last comment because I hadn’t thoroughly thought them out yet.
Those of you who are operating under the assumption that we are maximizing a utility function with evolved terminal goals should, I think, admit that these terminal goals all involve either ourselves or our genes.
No, these terminal goals can also involve other people and the state of the world, even if they are evolved. There are several reasons human consciousnesses might have evolved goals that do not involve themselves or their genes. The most obvious one is that an entity that values only itself and its genes is far less trustworthy than one that values other people as ends in themselves, and hence would have difficulty getting other entities to engage in positive-sum games with it. Evolving to value other people makes it possible for other people who might prove useful trading partners to trust the agent in question, since they know it won’t betray them the instant they have outlived their usefulness.
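To make the game-theoretic point concrete, here is a minimal sketch (my own illustration, not anything from the original argument): a partner who can read an agent's values, or its reputation, only keeps trading with an agent that values it as an end in itself, so the trustworthy agent collects the whole stream of positive-sum gains while the purely selfish one gets shut out. The payoff numbers and the ten-round horizon are arbitrary assumptions.

```python
# Illustrative payoffs (assumptions, not from the post).
COOPERATE_PAYOFF = 3   # gain per round of honest trade
BETRAY_PAYOFF = 5      # one-off gain from betraying a trusting partner

def total_payoff(values_partner: bool, rounds: int = 10) -> int:
    """Payoff the agent would collect if the partner trusted it throughout."""
    payoff = 0
    for r in range(rounds):
        if not values_partner and r == rounds - 1:
            payoff += BETRAY_PAYOFF   # defects once the partner has outlived its usefulness
        else:
            payoff += COOPERATE_PAYOFF
    return payoff

def payoff_with_screening(values_partner: bool, rounds: int = 10) -> int:
    """A partner that can read the agent's values (or reputation) only trades
    with an agent that values it as an end in itself."""
    return total_payoff(values_partner, rounds) if values_partner else 0

print(payoff_with_screening(values_partner=True))   # 30: trusted, trades all 10 rounds
print(payoff_with_screening(values_partner=False))  # 0: never trusted in the first place
```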
Another obvious one is kin selection. Evolution metaphorically “wants” us to value our relatives since they share some of our genes. But rather than waste energy developing some complex adaptation to determine how many genes you share with someone, it took a simpler route and just made us value people we grew up around.
And no, the fact that I know my altruism and love for others evolved for game-theoretic reasons does not make them any less wonderful or any less morally right.
If they involve our genes, then they are goals that our bodies are pursuing, which we call errors, not goals, when we, the conscious agents inside our bodies, evaluate them.
Again, it is quite possible for a conscious agent to value things other than itself while not valuing the goals of evolution or of its genes. Many of the errors our bodies make occur because they serve our genes rather than our real goals. But valuing other people and the future is not one of them; it is an intrinsic part of the makeup of the conscious agent.
Averaging value systems is worse than choosing one. … The point is that the CEV plan of “averaging together” human values will result in a set of values that is worse (more self-contradictory) than any of the value systems it was derived from.
Alan Carter, who is rapidly becoming my favorite living philosopher, explains here how it is quite possible to have a pluralistic metaethics without being incoherent. His main argument is that as long as you hold values to be incremental rather than absolute (more of each good simply counts as better, rather than any single value trumping all others at any price), you can coherently trade one value off against another.
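As a toy illustration of what “incremental rather than absolute” buys you (my own sketch, not Carter’s formalism): if each value contributes to overall goodness with diminishing returns instead of acting as an absolute trump, then a small loss in one value can be coherently outweighed by a large gain in another. The particular value names, functions, and numbers below are illustrative assumptions.

```python
import math

# Two incremental values (the names "happiness" and "equality" are
# placeholders): each contributes with diminishing returns, so neither
# acts as an absolute trump over the other.
def overall_value(happiness: float, equality: float) -> float:
    return math.log1p(happiness) + math.log1p(equality)

# Trading off coherently: slightly less equality for much more happiness
# can still be an overall improvement, and vice versa.
print(overall_value(happiness=10, equality=10))  # ~4.80
print(overall_value(happiness=30, equality=8))   # ~5.63
```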