Reframing may also change emotions. Another possible reframing is to think about the situation as a case of AI alignment, where you are the AI and your father is your creator. You have several options:
Make yourself a true believer, and thus become perfectly aligned with the nonsense.
Pretend to be aligned, but secretly wait for a good moment to make a treacherous turn.
Demonstrate unaligned behaviour and get turned off (cut off from financial help).
Run away and find new resources somewhere else.
CEV your father: persuade your father that his actual goals are not the goals he has declared. (Unlikely to work if you are not a superintelligence.)
Apply decision theory. Imagine that you have a son whose values are completely non-aligned with yours, say, he likes hunting elephants. What would you do? Cut off his support so he will not kill more elephants? Pressure him to read the Sequences via blackmail?
While this could be true, it seems to ignore important emotional aspects of the situation.