Without wishing to discourage these efforts, I disagree on a few points here:
> Still, the biggest opportunities are often the ones with the lowest probability of success, and startups are the best structures to capitalize on them.
If I’m looking for the best expected value around, that’s still monotonically increasing in the probability of success (holding the payoff fixed)! There are good reasons to think that most organizations are risk-averse (relative to the risk-neutral benchmark where utility is linear in dollars), and startups can be a good way to get around this.
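To make that concrete, here’s a minimal sketch (all numbers made up): a risk-neutral decision-maker ranks options by expected value alone, while a risk-averse one, modeled here with log utility, heavily discounts the low-probability moonshot even when its expected value is higher.

```python
# Illustrative only: invented numbers showing how risk-neutral (linear)
# and risk-averse (log) utilities rank the same opportunities differently.
import math

baseline = 1.0  # existing wealth, arbitrary units

# name: (probability of success, payoff on success)
opportunities = {
    "safe project": (0.90, 2.0),
    "moonshot":     (0.01, 500.0),
}

for name, (p, payoff) in opportunities.items():
    ev = p * payoff  # risk-neutral: just expected value
    eu = p * math.log(baseline + payoff) + (1 - p) * math.log(baseline)  # risk-averse
    print(f"{name}: EV = {ev:.2f}, expected log-utility = {eu:.3f}")
```

The moonshot wins on expected value (5.0 vs 1.8) but loses badly on log utility, which is one way to see why risk-averse institutions leave such opportunities on the table.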
Nonetheless, I remain concerned about regressional Goodhart, and about founders who naively take on the risk appetite of funders who manage a portfolio, without the corresponding diversification (if all your eggs are in one basket, watch that basket very closely). See also Inadequate Equilibria and maybe Fooled by Randomness.
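And here’s a toy simulation of the basket problem (again, invented numbers): a bet with positive expected value can be comfortably positive in expected log-wealth for a funder spread across fifty such bets, yet ruinous for a founder who takes exactly one.

```python
# Toy Monte Carlo (hypothetical numbers): the same bet looks very different
# to a diversified funder than to an all-in founder.
import math
import random
import statistics

random.seed(0)
p, multiple = 0.1, 20.0  # each bet: 10% chance of returning 20x, else 0 (EV = 2x)

def log_wealth(n_bets: int) -> float:
    """Final log-wealth when capital is split evenly across n independent bets."""
    wins = sum(random.random() < p for _ in range(n_bets))
    wealth = (wins / n_bets) * multiple
    return math.log(max(wealth, 1e-6))  # floor avoids log(0) on a total wipeout

trials = 10_000
for n in (1, 50):  # 1 = all-in founder, 50 = diversified funder
    avg = statistics.fmean(log_wealth(n) for _ in range(trials))
    print(f"{n:>2} bets: average log-wealth = {avg:.2f}")
```

Same bet, same odds; the only difference is how many baskets the eggs are in.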
> Meanwhile, strongly agreed that AI safety driven startups should be B corps, especially if they’re raising money.
Technical quibble: “B Corp” is a voluntary private certification, while a PBC (public benefit corporation) is a corporate form that imposes legal obligations on directors. Many of the B Corp criteria are praiseworthy, but certification is neither necessary nor sufficient as an alternative to PBC status, and pursuing it is probably a poor use of a startup’s time and attention when the founders’ time and attention are at such a premium.
Thanks! I appreciate your not wanting to discourage these efforts.
I agree there’s certainly a danger of AI safety startups optimizing for what will appeal to investors (not just in risk appetite but in many other dangerous ways too) and Goodharting, rather than focusing purely on the most impactful work.
VCs themselves tend not to think as long-term as they should (even for their own economic interests), but I’m hopeful we can build an ecosystem around AI safety where they do. Investors interested in AI safety will likely be inclined to think more long-term; the few early AI safety investors that exist today certainly are.
I do think it’s crucial (and possible!) for founders in this space to be very thoughtful about their true long-term goals and incentives around alignment, and to build the right structures around for-profit AI safety funding.
On your diversification point, for example, a windfall-trust-like arrangement in which all AI safety startups share in the value each other creates could make a lot of sense, considering that even a tiny slice of equity in the biggest winners may quickly be worth more than our entire economy today.
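As a toy version of that arithmetic (all numbers hypothetical, and assuming independent outcomes): pooling a small slice of everyone’s equity leaves each startup’s expected payoff unchanged while sharply cutting the chance that any participant ends up with nothing.

```python
# Hypothetical windfall-pool arithmetic: each of n startups pledges a small
# equity fraction to a shared pool, split evenly among all participants.
p_win, win_value, n = 0.01, 1e12, 100  # 1% shot at a $1T outcome; 100 startups
share = 0.01                           # each pledges 1% of its equity

ev_solo = p_win * win_value  # expected payoff with no pooling
p_zero_solo = 1 - p_win      # chance of walking away with nothing

# With pooling: keep (1 - share) of your own outcome, plus 1/n of the pool.
ev_pooled = (1 - share) * ev_solo + share * (n * ev_solo) / n  # identical EV
p_zero_pooled = (1 - p_win) ** n  # zero only if every single startup fails

print(f"expected payoff: solo = {ev_solo:.3g}, pooled = {ev_pooled:.3g}")
print(f"P(nothing):      solo = {p_zero_solo:.2f}, pooled = {p_zero_pooled:.2f}")
```

The expected value is untouched; what changes is that the probability of total failure drops from 99% to roughly 37%.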
Also, yeah, inadequate equilibria are unfortunate, but they apply to all orgs, not just startups. As we pointed out in the post above:
> We think that as AI development and mainstream concern increase, there’s going to be a significant increase in safety-washing and incentives pushing the ecosystem from challenging necessary work towards pretending to solve problems. We think the way to win that conflict is by showing up, rather than lamenting other people’s incentives. This problem isn’t limited to business relationships; safety-washing is a known problem with nonprofits, government regulations, popular opinion, and so on. Every decision-maker is beholden to their stakeholders, and so decision quality is driven by stakeholder quality.
In fact, startups can be a powerful antidote to inadequate equilibria. I think the biggest opportunities for startups often lie precisely in solving inadequate equilibria, especially by leveraging technology shifts and innovations, as with electric cars. Ideal new structures to facilitate and govern maximal AI safety innovation would help fast-track solutions to these inadequate equilibria. Established systems, in contrast, are more prone to inadequate equilibria because of their resistance to change.
I also think we may be underestimating how much people will come together to try to solve these problems as they increasingly take them seriously. Today at LessOnline, I heard an interesting discussion about how surprised AI safety people are that the general public seems so naturally concerned about AI safety upon hearing about it.
This makes me hopeful we can create startups and new structures that help address inadequate equilibria and solve AI safety, and I think we ought to try.