I am currently writing an article in which I explore this type of solution.
One idea, similar to the ones you listed, is to sell AI safety as a service, so that any other team could hire AI safety engineers to help align their AI (essentially, a way to combine the tool with a means of delivering it).
Another idea (I don't claim it is the best, only that it is possible) is to create as many AI teams in the world as possible, so that a hard takeoff would happen in several teams at once and the world would be divided into several domains. A simple calculation suggests that we would need around 1000 AI teams running simultaneously to get many fooms. In fact, the actual number of AI startups, research groups, and powerful individuals is around 1000 now, and growing.
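The "simple calculation" above can be sketched with a toy model. This is only an illustration under invented assumptions (the function name, the 20-year research period, and the 6-month "simultaneity" window are all hypothetical choices, not figures from the argument): each team's takeoff time is drawn uniformly over a research period, and any team whose takeoff falls within a short window of the earliest one counts as a simultaneous foom.

```python
import random

def simultaneous_fooms(n_teams, period_years=20.0, window_years=0.5, seed=0):
    """Toy model: each team's takeoff time is uniform over the research
    period; teams whose takeoff falls within `window_years` of the
    earliest takeoff count as simultaneous fooms.  All parameters are
    illustrative assumptions, not empirical estimates."""
    rng = random.Random(seed)
    times = sorted(rng.uniform(0.0, period_years) for _ in range(n_teams))
    first = times[0]
    return sum(1 for t in times if t - first <= window_years)

# With these assumptions the expected count scales as
# n_teams * window_years / period_years, so roughly 1000 teams are
# needed before "many" fooms land inside the same window.
for n in (10, 100, 1000):
    print(n, "teams ->", simultaneous_fooms(n), "simultaneous fooms")
```

With a 0.5-year window over a 20-year period, the expected number of simultaneous fooms is about n/40, so tens of fooms only appear once the number of teams reaches the order of 1000, which is the rough shape of the claim.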
There are also some other ideas; I hope to publish a draft here on LW next month.