Just remember: if it’s actually a goal, you wouldn’t care who achieved it, and you would gladly welcome more effective or efficient ways to achieve it… including other people doing it in place of you.
This has even more weight if you accept that the algorithm embodied by ‘you’ is probabilistically extremely similar to other algorithms out there in the multiverse, with no easy way to distinguish between them in any meaningful sense. So even when you have preferences over ‘your’ brain states corresponding to ‘you’ being satisfied, apart from any external accomplishments getting achieved, there’s still a philosophical arbitrariness in fulfilling ‘your’ preferences instead of anyone else’s; I’d bet this leads to decision-theoretic spatiotemporal inconsistency in a way that would be difficult for me to cash out right now.
(In practice, humans can’t even come close to avoiding such conundrums, but it seems best to be aware that such a higher standard of decision-theoretic and philosophical optimality exists.)