when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires
The two goals don’t conflict, or, more precisely, (2) isn’t a goal, it’s a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires. It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires.
I think it depends on what’s meant by ‘correct decision rule’. Suppose I came up to you and said that intuitionistic mathematics is ‘correct’, and conventional mathematics is ‘incorrect’; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else’s goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional mathematics is wrong?
It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
I don’t think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don’t know what it means to add to that the further claim that the other player ‘shouldn’t cooperate’. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there’s no normative demand that it do so. I don’t think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn’t conclude that this was a bad or wrong or ‘incorrect’ thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
Sorry, I don’t know much about the philosophy of mathematics, so your analogy goes over my head.
I don’t know what it means to add to that the further claim that the other player ‘shouldn’t cooperate’.
It means that it is optimal for the other player to defect, from the other player’s point of view, if they’re following the same decision rule that you’re following. Given that you’ve endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn’t use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn’t have.
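To make the sense of ‘optimal’ here concrete, here is a minimal sketch of the dominance argument in a one-shot prisoner’s dilemma. The payoff numbers are illustrative assumptions of mine, not anything from the discussion; the point is only that a player maximizing their own payoff does better by defecting whatever the other player does.

```python
# A toy one-shot prisoner's dilemma with assumed (illustrative) payoffs:
# temptation > reward > punishment > sucker's payoff.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("defect", "cooperate"): 5,     # I defect, they cooperate
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("defect", "defect"): 1,        # mutual defection
    ("cooperate", "defect"): 0,     # I cooperate, they defect
}

def best_reply(their_move):
    """The move that maximizes my own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Whatever the other player does, defection yields the higher payoff for me;
# that is the sense in which a purely self-interested player 'should' defect.
for their_move in ("cooperate", "defect"):
    print(f"If they {their_move}, my best reply is to {best_reply(their_move)}")
# If they cooperate, my best reply is to defect
# If they defect, my best reply is to defect
```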