Take, for example, an agent facing the Prisoner’s Dilemma. Such an agent might initially tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent achieve its goal, or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?
The agent’s goals aren’t changing due to increased rationality, but just because the agent confused him/herself. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner’s Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and think defecting is the ‘rational’ thing to do without entirely understanding why.
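To make the payoff reasoning concrete, here’s a minimal sketch, assuming the standard textbook payoff numbers (nothing in the above fixes specific values): defecting strictly dominates against any fixed move by the other agent, but if you expect the other agent to reason much the way you do, your choice is effectively between mutual cooperation and mutual defection, and cooperation wins.

```python
# One-shot Prisoner's Dilemma with the usual textbook payoffs (assumed numbers).
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

# Against any fixed move by the other agent, defecting pays me more...
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)][0] > payoffs[("C", their_move)][0]

# ...but if the other agent mirrors my reasoning (same decision procedure),
# my choice is effectively between (C, C) and (D, D), and cooperating wins.
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
```

None of that is new game theory, of course; it’s just the sense in which “defect is rational” depends on what you expect the other agent’s decision to depend on.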
You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it’s true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say “I want to determine whether a sufficiently large randomized Conway board would converge to an all-off state so that I will have determined whether a sufficiently large randomized Conway board would converge to an all-off state”. Perhaps they find it an interesting puzzle or want the status that comes from publishing it, but there’s certainly a higher reason than ‘because they feel it’s the right thing to do’. No fundamental change in priorities needs to occur between feeding one’s tribe and solving abstract mathematical problems.
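As an aside, the Conway question in that sentence is at least the kind of thing you can poke at empirically. A rough sketch of my own (a finite torus standing in for the “sufficiently large” board, with toy parameters I picked arbitrarily):

```python
import random

def step(live, width, height):
    """One Game of Life generation (B3/S23) on a width x height torus."""
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    nb = ((x + dx) % width, (y + dy) % height)
                    counts[nb] = counts.get(nb, 0) + 1
    # A cell is alive next step if it has 3 neighbours, or 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def dies_out(width=64, height=64, fill=0.5, max_steps=2000, seed=0):
    """True if a randomized board reaches the all-off state within max_steps."""
    random.seed(seed)
    live = {(x, y) for x in range(width) for y in range(height)
            if random.random() < fill}
    for _ in range(max_steps):
        if not live:
            return True
        live = step(live, width, height)
    return False

print(dies_out())  # typically False: random soups settle into debris, not emptiness
```

The point being that the mathematician’s interest in the answer is clearly serving something further up the chain; the simulation itself is just a means.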
I won’t extrapolate my arguments any further than this, since I really don’t have the philosophical background that would require.