Why don’t we just, like, try and build safe AGI?
Why not? Currently, the AGI efforts in the world seem to be, like, OpenAI and DeepMind. Neither of them has building safe AGI as its primary aim.
Why don’t we just get a few billion dollars together as a community, put everyone really smart somewhere, and just go for it? Before OpenAI or DeepMind does. Sure, lots of research is happening in AI safety, but it doesn’t seem like any of this research will find its way to OpenAI or DeepMind anyway. Seems unlikely with incentives as misaligned as they are.
So, creating an organization whose primary aim is to build safe AGI, with no other aims at all, funded fully and only by EA money, seems at least worth attempting given how much money this community should have soon.
I have a narrower question. Why don’t you, personally, make this happen?
The first two sentences of the OpenAI charter are as follows:

"OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."
Lots of the research is already happening there. And even if you don’t like that research, both orgs are super socially connected to the LW and EA communities, so there’s a pretty plausible path for work done elsewhere to find its way to them.
There are no billion-dollar organizations that have one primary aim and no other de facto aims. Once you get that much money, and different people with their own interests involved, organizational alignment is usually not perfect.