I struggled with this idea of a “root goal”, the primary function of my life that would give order to all other subgoals, and I eventually settled on “to be a good human being”, as unsatisfactory as that is, because I found no meaningful or fulfilling progress on existential questions of this nature.
Welcome to LW!
I have dealt with something similar, and it may or may not be what you are experiencing. When I was first getting into rationality, I became better able to see what was motivating a lot of the decisions I made (approval of others, striving for a hero archetype because of the books I read as a kid, other stuff), and I didn’t like it. For a long while I was super suspicious of every single want and impulse, wondering if it was “truly valid”. I spent a while trying to find a “root goal” that was such an Awesome and Virtuous Goal that I could be fully justified in pursuing it.
Now, I see that I was trying to “set my values to universal values”, which upon reflection doesn’t seem to be a coherent notion. If only Eliezer had included a sequence on how this was a bad idea, then maybe I could have saved time ;) (jokes aside, the linked sequence has a lot of careful reasoning about what exactly goes wrong in the process of trying to find “the best and perfect values”)