Questions for an AGI project
I’ve been thinking a bit about what would cause me to support an AGI project. I thought it might be interesting to others, and I’d be interested to hear about other risks or questions.
The questions would be about discovering the project’s stance on various risks. By stance I mean the following (a rough sketch in code follows the list):
How do they plan to find out information about the risk?
What is their threshold for acting on the risk?
What will they do when they reach that threshold?
Who owns this risk and the process around it?
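To make that structure concrete, here is a minimal sketch of what recording a stance per risk could look like. It is purely illustrative; the RiskStance type, its field names, and the example values are my own inventions, not anything a real project would necessarily use.

```python
from dataclasses import dataclass


@dataclass
class RiskStance:
    """Hypothetical record of a project's stance on one risk."""
    risk: str              # short name of the risk
    monitoring_plan: str   # how they plan to find out information about the risk
    action_threshold: str  # what evidence level triggers action
    planned_response: str  # what they will do when the threshold is reached
    owner: str             # who owns this risk and the process around it


# Example entry (entirely made up):
foom_stance = RiskStance(
    risk="unfriendly foom",
    monitoring_plan="regular capability estimation on internal components",
    action_threshold="measured capability far exceeds prior estimates",
    planned_response="halt development and revisit goal specification",
    owner="safety lead",
)
```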
The types of risks I am interested in are:
Typical unfriendly foom situation
Asymmetric deployment of AI causing potential war or political problems, as discussed here
Uneven deployment of AI causing massive inequality and depression, as people can no longer be actors in the world or in their own lives.
Deployment of AI causing humanity to speed up and magnify its conflict and competition, burning through its resources. We have had massively more brainpower/compute since the industrial revolution, yet it can still seem touch and go whether we will get off the planet permanently even with that; will AI be any better?
So for foom, they might do things like AGI capability estimation, where you try to estimate how capable your part of an AGI is at a given task. If it turns out to be vastly better than you expect, or your estimate is that it will do science vastly better than humans straight out of the box, you halt and catch fire and try to do some ethics and philosophy to get a good goal straight away.
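As a rough illustration of that kind of tripwire, here is a minimal sketch, assuming you can score the system and a human-expert baseline on the same task. The function name, the surprise margin, and the numbers are hypothetical, not anyone’s actual procedure.

```python
def capability_tripwire(measured_score: float,
                        expected_score: float,
                        human_expert_score: float,
                        surprise_margin: float = 2.0) -> bool:
    """Return True if development should halt for review.

    Halts when the system is far more capable than we predicted,
    or is already beyond human experts at the task.
    """
    vastly_better_than_expected = measured_score > surprise_margin * expected_score
    beyond_human_experts = measured_score > human_expert_score
    return vastly_better_than_expected or beyond_human_experts


# Example with made-up numbers: we expected a score of 10, human experts
# get 40, and the system scores 35 -- surprising enough to stop and review.
if capability_tripwire(measured_score=35, expected_score=10, human_expert_score=40):
    print("Halt and catch fire: revisit goals before continuing.")
```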
I suppose there is also the risk that the AGI or IA is suffering while helping out humanity.