Another fairly specific route to impact: several major AI research labs would likely act on suggestions for coordinating to make AI safer, if we had any. Right now I don't think we do, so research into this could have a large multiplier.
Strongly agreed. I think that how major AI actors (primarily firms) govern their AI projects and interact with each other is a difficult problem, and providing advice to such actors is the sort of thing that I’d expect to be a positive black swan.