Random commentary on bits of the paper I found interesting:

Under Windows of opportunity that close early:

Lastly, some important opportunities are only available while we don’t yet know for sure who will hold power after the intelligence explosion. In principle at least, the US and China could make a binding agreement that if they “win the race” to superintelligence, they will respect the national sovereignty of the other and share the benefits. Both parties could agree to bind themselves to such a deal in advance, because a guarantee of controlling 20% of power and resources post-superintelligence is worth more than a 20% chance of controlling 100%. Once superintelligence has been developed, however, the ‘winner’ will no longer have any incentive to share power.
Similarly for power within a country. At the moment, virtually everyone in the US might agree that no tiny group or single person should be able to seize complete control of the government. Early on, society could act unanimously to prevent that from happening. But as it becomes clearer which people stand to gain massive power from AI, those people will do more to maintain and grow that power, and it will be too late to impose such restrictions.
Strong agree here; this is something governments should move quickly on: “No duh” agreements that put up some legal or societal barriers to malfeasance later.
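The 20%-versus-100% point is just risk aversion: under any strictly concave utility over one’s share of power, a guaranteed fifth beats a one-in-five shot at everything. A minimal sketch in Python (the square-root utility is my illustrative assumption, not the paper’s):

```python
import math

# Illustrative concave (risk-averse) utility over a share of power/resources.
# sqrt is an assumption chosen for this sketch, not taken from the paper.
def u(share: float) -> float:
    return math.sqrt(share)

guaranteed = u(0.20)                     # binding deal: 20% for certain
gamble = 0.20 * u(1.00) + 0.80 * u(0.0)  # race: 20% chance of controlling 100%

print(f"u(guaranteed 20%) = {guaranteed:.3f}")  # ~0.447
print(f"E[u(race)]        = {gamble:.3f}")      # 0.200
assert guaranteed > gamble  # both sides prefer pre-committing to the deal
```

With risk-neutral (linear) utility the two options tie at 0.2, so any risk aversion at all tips the scales toward the agreement; the catch the paper flags is that this only holds before the race is decided.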
Next, under Space Governance:

Missions beyond the Solar System. International agreements could require that extrasolar missions be permitted only with a high degree of international consensus. This issue isn’t a major focus of attention within space law at the moment but, perhaps for that reason, a stipulation to this effect in any new treaty might be regarded as unobjectionable.
Also a good idea. I don’t want to spend hundreds of years having to worry about the robot colony five solar systems over...
Finally, under Value lock-in mechanisms:

Human preference-shaping technology. Technological advances could enable us to choose and shape our own or others’ preferences, as well as those of future generations. For example, with advances in neuroscience, psychology, or even brain-computer interfaces, a religious adherent could self-modify to make it much harder to change their mind about their religious beliefs (and never self-modify to undo the change). They could modify their children’s beliefs, too.
Gotta ask, was this inspired by To the Stars at all? There’s no citation, but that story is currently exploring the implications of having the technology to choose/shape “preference-specifications” for yourself and for society.