I think it’s valuable to study rationality and AI alignment (with a touch of programming) to prepare to take advantage of post-AGI personal growth opportunities without destroying your own extrapolated volition. This is relevant in case we survive, which I think is not unlikely (the unlikely good outcome is that we keep the cosmic endowment; the more likely alternative is being allowed to live on a relatively tiny welfare allotment while the rest is taken away).
I think about my young daughters’ lives a lot. One says she wants to be an artist. Another a teacher.
Do those careers make any sense on a timeframe of the next 20 years?
What interests and careers do I encourage in them that will become useless at the slowest rate?
I think about this a lot—and then I mainly do nothing about it, and just encourage them to pursue whatever they like anyway.