The copy problem is also irrelevant for utilitarians, since all persons should be weighted equally under most utilitarian moral theories.
It’s only an issue for self-interested actors. So even if spurs A and B both agree that A is C and that B is C, that still doesn’t help: are the converse statements true? A selfish C will base their decisions on whether C is A (and whether C is B).
I tend to view this as another nail in the coffin of ethical egoism. I lean toward simply assigning a value to each point in mind-space: high value for human-like minds, and smaller or zero value for possible agents that don’t engage our moral intuitions.
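To make that concrete, here is a minimal sketch in my own notation (the symbols $M$, $v$, and $u$ are assumptions, not anything standard): let $M$ be the set of instantiated minds, $v(m)$ a moral-weight function over mind-space, and $u(m)$ the welfare of mind $m$. The quantity to maximize would be

$$U = \sum_{m \in M} v(m)\, u(m),$$

with $v(m) \approx 1$ for human-like minds and $v(m)$ near zero for agents that don’t engage our moral intuitions. On this view a copy is just another point in mind-space with its own weight, and the question of which copy is “really” the original does no work.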