Okay, let’s do that backwards planning exercise.
In the long run, I want to do my research while living a low-stress, financially comfortable lifestyle. The traditional academic path won't achieve that: I would end up doing my research while leading a high-stress, financially precarious lifestyle. There are three possible solutions to the problem, in rough order of preference:
A. Pick a lucrative research agenda, so that I can supplement my income with consulting gigs and have a strong exit option.
B. Learn to code and get a data science job, then do my research as a hobby.
C. Get a government job related to my field (intelligence or aid).
Path A seems like the best one for both personal and EA reasons. Right now I split my time between writing on foreign investment and on cabinet formation, but only the foreign investment work might pay the bills; the cabinet work ends with me in the brutal academic rat race. However, the foreign investment research might or might not succeed, depending on contextual factors like competition, my ability to build a brand, and the value of academic prestige in the field. So I should first figure out whether the investment-academia path is satisfying.
I want to find out whether that works over the next six months or so, while I'm still in my academic program.
If the returns are too small and the competition too stressful, I should pivot toward a programming career. It's a well-paid industry with 40-hour weeks, and I could do my research as a hobby for 8 hours a week. That sounds like a lovely life too. If I pick that path, I would deemphasize my research and focus on coding skills for interviews and on building career capital there.
I’m satisfied with that plan. The next question is, how do I stick to it? More on this later.
Apologies if this has been said already, but the reading level of this essay is stunningly high. I've read Rationality: A-Z and I can barely follow some passages. For example:
What I think Yudkowsky means here is that our genes had the base objective of reproducing themselves: the genes wanted their humans to make babies who were also reproductively fit. But the "real-world bounded optimization process" produced humans who sought different things, like sexual pleasure, food, and alliances with powerful peers. In the early environment that worked, because sex led to babies, food led to healthy babies, and alliances led to protection for the babies. But once we built civilization, we started having sex with birth control as an end in itself, even letting it distract us from the baby-making objective. So the genes had this goal, but the mesa-optimizer (humans) was only aligned in one environment; when the environment changed, it lost alignment. We can expect the same to happen to our AI.
Okay, I think I get it. But there are so few people on the planet who can parse this passage.
Has someone written a more accessible version of this yet?