All the leading AI labs so far seem to have come from attempts to found AI safety orgs. Do you have a plan against that failure case?
I don’t think that’s actually true at all; Anthropic was explicitly a scaling lab when it was founded, for example, and DeepMind does not seem like it was “an attempt to found an AI safety org”.
It is true that Anthropic/OAI/DeepMind had AI safety people supporting them, and that safety was part of the motivation behind the orgs, but the people involved knew they were also going to build SOTA AI models.
Hi there, thanks for bringing this up. There are a few ways we’re planning to reduce the risk of us incubating orgs that end up fast-tracking capabilities research over safety research.
Firstly, we want to select for a strong impact focus and value alignment in participants.
Secondly, we want to assist the founders in setting up their organization in a way that limits the potential for value drift (e.g. a charter for the forming organization that would legally make this more difficult, choosing the right legal structure, and helping them with vetting or suggestions for whom they can best take on as an investor or board member).
If you have additional ideas around this we’d be happy to hear them.
Retain an option to buy the org later for a billion dollars, reducing its incentive to become worth more than a billion dollars.