Does it make sense to plan for only one possible world, or do you think the other possible worlds are being adequately planned for, and it is only the fast unilateral takeoff that is currently neglected?
Limiting AI to operating in space makes sense. You might want to buy out or compensate existing space launch capability in some way, as there would likely be less need for it.
Some recompense for the people who paused their work on AI, or were otherwise harmed in the build-up to AI, also makes sense.
Also, committing to communicate ahead of time what a utopian vision of AI and humans might look like, so the cognitive stress isn't too severe, is probably a good idea.
Committing to support multilateral action if unilateral action fails is probably a good idea too. Perhaps even partnering with a multilateral effort, so that work on shared goals can be spread around?