I know there’s a lot of diversity in brain-space, but there’s not so much that you couldn’t find 100,000+ people with a nearly identical motivational system. [Rest of argument omitted]
Unless I misunderstand, your argument works only for those goals held by pjeby that do not refer to pjeby. For example, would you really pay pjeby $500 / mo to make pjeby’s wife happier (as opposed to making your own wife happier)?
Or is making one’s wife happier “simply a desire” in your terminology?
Exactly. It’s not really a goal when you don’t care about the results themselves. If the dominating term in your decision to do something is that the result be YOURS (e.g., profit being created in YOUR bank account, YOU making your wife happy, credit for YOU achieving something, etc.), you might as well just call it “shit you want to be yours”.
Most people are referenced in all their “goals”. But that’s because most people don’t actually have goals in any meaningful sense beyond wanting a ton of shit to be theirs. If you notice that almost all of your goals wouldn’t be desirable if they didn’t include you, maybe you should look into actually finding something you care about besides yourself. I know you can do it; heck, even most PUAs end up caring about things outside of themselves (after they try everything else first and it doesn’t work).
Just remember, if it’s actually a goal, you wouldn’t care who achieved it and you would gladly welcome more effective or efficient ways to achieve it… including other people doing it in place of you.
This has even more weight if you accept that the algorithm embodied by ‘you’ is probabilistically extremely similar to other algorithms out there in the multiverse, with no easy way to distinguish between them in any meaningful sense. So even when you have preferences over ‘your’ brain states corresponding to ‘you’ being satisfied, apart from any external accomplishments getting achieved, there’s still a philosophical arbitrariness in fulfilling ‘your’ preferences instead of anyone else’s, one that I’d bet leads to decision-theoretic spatiotemporal inconsistency in a way that would be difficult for me to cash out right now.
(In practice humans can’t even come close to avoiding such conundrums, but it seems best to be aware that such a higher standard of decision-theoretic and philosophical optimality exists.)