It’s important to be clear about what the goal is: if it’s the instrumental careerist goal “increase status to maximize the probability of joining a prestigious organization”, then that strategy may look very different from the terminal scientist goal of “reduce x-risk by doing technical AGI alignment work”. The former seems much more competitive than the latter.
I have multiple goals. My major abstract, long-term root wants are probably something like (in no particular order):
help the world, reduce existential risk, do the altruism thing
be liked by my ingroup (rationalists, EAs)
have outgroup prestige (for my parents, strangers, etc.)
have some close friends/a nice bf/gf
keep most of my moral integrity/maintain my identity as a slightly edgy person
Finishing my startup before trying to work somewhere like Lightcone or RR or (portions of) Anthropic feels like a point on the Pareto frontier across those wants, though I’m open to arguments that it’s not, and I appreciated your comment.