I’m not sure whether the thing you want solutions for is how to build AI, or how to reduce existential risk in general.
If the second: yes. Getting people into space comes to mind.
If the first: the cost of a failed startup or policy and the cost of a failed AGI are very different.
A failed startup or policy loses or wastes money. An unfriendly AI probably murders us all while it takes over this part of the universe (or it could simply destroy everything we value). If even one does that, there’s not much we can do to recover, so starting 30 AGI projects and hoping one turns out friendly is a bad idea.
This could make the whole 30-projects-hope-one-comes-out-right thing work, although there are some problems.
An unfriendly AI probably gets turned off. The problem is that it might take over the universe first.