The point of the fox and rabbit comment was to illustrate how agents with different utility functions might be usefully said to disagree—i.e. they can exhibit disagreement behaviour, such as arguing.
If you don't think foxes and rabbits are moral agents, and so regard them as more like rocks than people, then I think you may be underestimating their social lives. More importantly, though, to make sense of the example you need to substitute agents you do regard as moral agents: two separate alien races, say, or gorillas and chimps.
Agents with different utility functions need not disagree about facts. But they may well disagree over issues such as how resources ought to be allocated, and resource allocation can be a moral issue, e.g. when it results in deaths.
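To make that concrete, here is a minimal sketch in Python (the utility functions and the candidate allocations are entirely hypothetical): both agents evaluate exactly the same set of facts, yet each ranks the outcomes differently, so their preferred allocations conflict without any factual disagreement.

```python
# Two agents with identical beliefs but different utility functions
# over how one unit of a resource is divided (hypothetical example).

def fox_utility(fox_share: float) -> float:
    # The fox prefers a larger share of the resource for itself.
    return fox_share

def rabbit_utility(fox_share: float) -> float:
    # The rabbit prefers a smaller share for the fox,
    # i.e. a larger share for itself.
    return 1.0 - fox_share

# Candidate allocations: the fox's share of one unit of resource.
allocations = [0.0, 0.25, 0.5, 0.75, 1.0]

# Both agents rank the same candidate allocations; no disagreement
# about facts is involved, yet their preferred outcomes differ.
fox_best = max(allocations, key=fox_utility)
rabbit_best = max(allocations, key=rabbit_utility)

print(f"fox prefers fox-share = {fox_best}")        # 1.0
print(f"rabbit prefers fox-share = {rabbit_best}")  # 0.0
```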
I suppose it could be objected that such agents would not actually argue: they could recognise that they had a fundamental difference in goals, making argument pointless for lack of common premises. However, my expectation is that real arguments and disagreements would result, similar to those that occur today over borders and oil.
I note that I have a different perspective on the pebble sorters as well: Eliezer argues that the pebble sorters are not moral agents, presumably because their terminal goals have nothing to do with morality.
However, the pebble sorters are an optimisation process, at least to the extent that they prefer larger piles. I see no reason why they should not establish a complex society, and eventually develop space travel and interstellar flight, in their quest to get hold of more rocks. In other words, though the pebble sorters have no moral terminal values, they may well develop moral instrumental values in the process of building the cooperative society needed to support their mining operations.
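Here is a toy sketch of that last point, with made-up payoffs: the pebble sorters' terminal value counts only pebbles, yet cooperation is selected purely because it delivers more of them, which is all an instrumental value is.

```python
# Hypothetical sketch: the pebble sorters' terminal value is pile size
# and nothing else, yet cooperation acquires instrumental value
# whenever it yields more pebbles than working alone.

def terminal_value(pile_size: int) -> int:
    # Bigger piles are strictly better; morality plays no role here.
    return pile_size

# Expected pebbles gathered under each strategy (made-up numbers):
strategies = {
    "work alone": 10,
    "cooperate": 25,  # division of labour, trade, mining at scale
}

# Cooperation wins on pebble payoff alone, so a pure pebble-maximiser
# instrumentally values the cooperative society that supports it.
best = max(strategies, key=lambda s: terminal_value(strategies[s]))
print(best)  # "cooperate"
```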