However, I think there is a group of people who over-optimize for Direction and neglect Magnitude. Increasing Magnitude often comes with the risk of corrupting the Direction. For example, scaling fast often makes it difficult to hire only mission-aligned people, and it requires you to give voting power to investors who prioritize profit. Increasing Magnitude can therefore feel risky: what if I end up working on something that is net-negative for the world? It might thus be easier for one’s personal sanity to optimize for Direction, to do something that is unquestionably net-positive. But this is the easy way out, and if you want to maximize the expected value of your Impact, you cannot disregard Magnitude.
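To make this expected-value point concrete, here is a toy calculation. The numbers are invented purely for illustration, and Impact is treated as the product Direction × Magnitude, which is just one simple way to combine the two. Compare a “safe” option whose Direction is certainly positive but whose Magnitude is small with a “risky” scaled-up option whose Direction might turn negative:

$$
\mathbb{E}[\text{Impact}_{\text{safe}}] = 1.0 \times (+1.0) \times 10 = 10
$$

$$
\mathbb{E}[\text{Impact}_{\text{risky}}] = 0.9 \times (+0.8) \times 1000 \;+\; 0.1 \times (-0.5) \times 1000 = 720 - 50 = 670
$$

Even with a 10% chance of the Direction going net-negative, the larger Magnitude dominates the expectation, which is why you cannot disregard Magnitude.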
You talk here about an impact/direction v ambition/profit tradeoff. I’ve heard many other people talking about this tradeoff too. I think it’s overrated; in particular, if you’re constantly having to think about it, that’s a bad sign.
It’s rare that you have a continuous space of options running from high impact with low profit at one end to low or negative impact with high profit at the other.
If you do have such a continuous space of options then I think you are often just screwed and profit incentives will win.
The really important decision you make is probably a discrete choice: do you start an org trying to do X, or an org trying to do Y? Usually you can’t (and even if you can, shouldn’t) try to interpolate between these things, and making this high-level strategy call will probably shape your impact more than any later fine-tuning of parameters within that strategy.
Often, the profit incentives point towards the more-obvious, gradient-descent-like path, which is usually very crowded and leads to many “mediocre” outcomes (e.g. starting a $10M company), but the biggest things come from doing “Something Else Which Is Not That” (as is said in dath ilan). For example, SpaceX (a ridiculously hard and untested business proposition) and Facebook (which started out seeming very small and niche, with no clue of where the profit was).
Instead, I think the real value of doing things that are startup-like comes from:
The zero-to-one part of Peter Thiel’s zero-to-one v one-to-n framework: the hardest, progress-bottlenecking things usually look like creating new things, rather than scaling existing things. For example, there is very little you can do today in American politics that is as impactful or reaches as deep into the future as founding America in the first place.
In the case of AI safety: neglectedness. Everyone wants to work at a lab instead, humans are too risk-averse in general, etc. (I’ve heard many people in AI safety say that neglectedness is overrated. There are arguments like this one that replaceability/neglectedness considerations aren’t that major: job performance is heavy-tailed, hiring is hard for orgs, etc. But such arguments seem like weirdly myopic parameter-fiddling, at least when the alternative is zero-to-one things like those discussed above. Starting big things is in fact big. Paradigm shifts matter because they’re the frame that everything else takes place in. You either see this or you don’t.)
To the extent you think the problem is about economic incentives or differential progress, have you considered getting your hands dirty and trying to change the actual economy or the direction of the tech tree? There are many ways to do this, including some types of policy and research. But I think the AI safety scene has a cultural bias towards things that look like research or information-gathering, and away from being “builders” in the Silicon Valley sense. One of the things that Silicon Valley does get right is that being a builder is very powerful. If the AI debate comes down to a culture/influence struggle between anti-steering, e/acc-influenced builder types and pro-steering, EA-influenced academic types, it doesn’t look good for the world.