There’s an effect that works in the opposite direction, where the hiring bar gets lowered as headcount scales: key early hires may have a more stringent filter applied to them than later additions. But the bar can still be arbitrarily high; look at the profiles of people who have joined recently, e.g. Leaving Wave, joining Anthropic (benkuhn.net).
It’s important to be clear about what the goal is: if it’s the instrumental careerist goal “increase status to maximize the probability of joining a prestigious organization”, then that strategy may look very different from the terminal scientist goal of “reduce x-risk by doing technical AGI alignment work”. The former seems much more competitive than the latter.
The following part will sound a little self-helpy, but hopefully it’ll be useful:
Concrete suggestion: this weekend, execute on some small tasks, each of which satisfies the following constraints:
can’t be sold as being important or high impact.
won’t make it into the top 10 list of most impressive things you’ve ever done.
isn’t necessarily aligned with your personal brand.
has relatively low value from an optics perspective.
has trivially low implementation complexity, with high confidence.
can be abandoned at zero reputational/relationship cost.
isn’t connected to a broader roadmap and high-level strategy.
requires minimal learning and only insignificant levels of friction to overcome.
doesn’t feel intimidating or serious or psychologically uncomfortable.
Find the tasks in your notes after a period of physical exertion. Avoid searching the internet or digging deeply into your mind (anything you could characterize as paying constant attention to filtered noise to mitigate the risk that some decision-relevant information managed to slip past your cognitive systems). Decline anything that spurs an instinct of anxious perfectionism. Understand where you are first, then marginally shift towards your desired position.
You sound like someone who has a far larger max step size than ordinary people. You have the ability to get to places by making one big leap. But open the interactive simulation in Why Momentum Really Works (distill.pub) and fix momentum at 0.99. What happens to the solution as you gradually move the step-size slider to the right?
Chaotic divergence and oscillation.
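To make that concrete, here’s a minimal sketch (mine, not code from the distill.pub page) of heavy-ball gradient descent on a 1-D quadratic; the function, constants, and step sizes are illustrative assumptions, not anything from the article:

```python
# Heavy-ball momentum on f(x) = 0.5 * lam * x^2 (all constants here are illustrative).

def momentum_descent(step_size, momentum=0.99, lam=1.0, x0=1.0, iters=500):
    """Return the final |x| after running heavy-ball momentum on the quadratic."""
    x, v = x0, 0.0
    for _ in range(iters):
        grad = lam * x                       # gradient of 0.5 * lam * x^2
        v = momentum * v - step_size * grad  # velocity update
        x = x + v
        if abs(x) > 1e12:                    # treat blow-up as divergence
            return float("inf")
    return abs(x)

if __name__ == "__main__":
    # For this quadratic, heavy ball is stable only when step_size < 2 * (1 + momentum) / lam,
    # i.e. just under 3.98 here. Below that the iterates spiral slowly toward the minimum;
    # above it they oscillate with growing amplitude and diverge.
    for step in [0.01, 0.5, 2.0, 3.9, 4.1]:
        print(f"step size {step:>4}: final |x| = {momentum_descent(step):.3e}")
```

Under these assumptions the picture matches the slider experiment: with momentum that high, only a narrow range of step sizes makes steady progress, and pushing the step size further tips the iterates into oscillation and divergence.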
Selling your startup to get into Anthropic seems, with all due respect, to be a plan with step count = 1. Recall Expecting Short Inferential Distances. Practicing adaptive dampening would let you more reliably plan and follow routes requiring step count > 1. To be fair, I can kinda see where you’re coming from, and logically it can be broken down into independent subcomponents that you work on in parallel, but the best advice I can concisely offer without more context on the details of your situation is this:
“Learn to walk”.
It’s important to be clear about what the goal is: if it’s the instrumental careerist goal “increase status to maximize the probability of joining a prestigious organization”, then that strategy may look very different from the terminal scientist goal of “reduce x-risk by doing technical AGI alignment work”. The former seems much more competitive than the latter.
I have multiple goals. My major abstract long-term root wants are probably something like (in no particular order):
help the world, reduce existential risk, do the altruism thing
be liked by my ingroup (rationalists, EAs)
have outgroup prestige (for my parents, strangers, etc.)
have some close friends/a nice bf/gf
keep most of my moral integrity/maintain my identity as a slightly edgy person
Finishing my startup before trying to work somewhere like Lightcone or RR or (portions of) Anthropic feels like it sits on the Pareto frontier of those things, though I’m open to arguments that it doesn’t, and I appreciated your comment.