Apparently, the idea is that this sort of game tells us something useful about AI safety.
The game was designed under the assumption that when humans create a superhuman intelligence capable of destroying humanity if it so chooses, they will hesitate to unleash it on the world. Metaphorically speaking, they will keep the prototype locked in a box (i.e. isolated from the internet, other humans, robotic bodies, factories that could produce robotic bodies, etc.) until they are somehow convinced that the intelligence is not going to kill us (presumably by figuring out a way to prove that mathematically from its source code).
This assumption seems a little silly in retrospect. Of course, when a company creates a potentially omnicidal artificial intelligence, the first thing it will do is connect it to the internet, and the second thing it will do is integrate it with all kinds of stuff that already exists (e-mails, calendars, household devices, self-driving cars, drones, nuclear weapons). How else is it going to pay for the costs of research?