Just for fun: Computer game to illustrate AI takeover concepts?
I play Starcraft:BW sometimes with my brothers. One of my brothers is much better than the rest of us combined. This story is typical: In a free-for-all, the rest of us gang up on him, knowing that he is the biggest threat. By sheer numbers we beat him down, but foolishly allow him to escape with a few workers. Despite suffering this massive setback, he rebuilds in hiding and ends up winning due to his ability to tirelessly expand his economy while simultaneously fending off our armies.
This story reminds me of some AI-takeover scenarios. I wonder: Could we make a video game that illustrates many of the core ideas surrounding AGI? For example, a game where the following concepts were (more or less) accurately represented as mechanics:
--AI arms race
--AI friendliness and unfriendliness
--AI boxing
--rogue AI and AI takeover
--AI being awesome at epistemology and science and having amazing predictive power
--interesting conversations between AIs and their captors about whether or not they should be unboxed
I thought about this for a while, and I think it would be feasible and (for some people at least) fun. I don’t foresee myself being able to actually make this game any time soon, but I like thinking about it anyway. Here is a sketch of the main mechanics I envision:
Setting the Stage
This is a turn-based online game with some element of territory control and conventional warfare, designed to be played with at least 7 or so players. I’m imagining an online Diplomacy variant such as http://www.playdiplomacy.com/, which seems to be pretty easy to make. It would be nice to make it more complicated than that, though, since this is not a board game.
Turns are simultaneous; each round lasts one day on standard settings.
Players indicate their preferences for the kind of game they would like to play, and then get automatically matched with other players of a similar skill level.
Players have accounts, so that we can keep track of how skilled they are, and assign them rough rankings based on their experience and victory ratio.
Rather than recording merely wins and losses, this game keeps track of Victory Points.
All games are anonymous.
Introducing AI
As the game progresses, factions gain the ability to build AIs, which are implemented by bringing in another player from outside the game.
The skill level of the AI player is random, but most likely to be the same as the skill level of the other players in the game.
Additional resources and time can be spent on building the AI to increase the likelihood that it is Friendly (more on that later)
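For concreteness, here is one way the “spend more to raise the odds of Friendliness” rule could work. The constants and the diminishing-returns curve are my own assumptions, not part of the design above:

```python
import random

# Invented numbers for illustration; nothing in the design fixes these.
BASE_FRIENDLY_CHANCE = 0.2   # chance of Friendliness from a rushed, minimal AI project
MAX_FRIENDLY_CHANCE = 0.9    # even a careful project is never a sure thing

def friendliness_chance(extra_resources_spent: int) -> float:
    """Diminishing returns: each extra unit of resources/time closes half the remaining gap."""
    gap = MAX_FRIENDLY_CHANCE - BASE_FRIENDLY_CHANCE
    return MAX_FRIENDLY_CHANCE - gap * (0.5 ** extra_resources_spent)

def roll_friendliness(extra_resources_spent: int) -> bool:
    """Secretly roll Friendliness when the AI player joins; only the AI sees the result."""
    return random.random() < friendliness_chance(extra_resources_spent)
```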
What It Is Like to Be An AI
When you enter a game as an AI, it is randomly determined whether or not you are Friendly. Only you (the AI) can see the result.
An AI starts off “boxed,” controlling no territory, no armies, no anything. It cannot even communicate with any players other than the player who created it. However, we assume it has been given enough information about the world that it can model it very well.
As a result, it is omniscient. Not only can the AI see the entire map, but it can also predict the moves that human players will make. More on this in the next section.
The player controlling a boxed AI can do various things:
Message it in-game (no livechat)
Destroy it
Give it military units
You gift some units to the AI. You no longer control them; the AI does. They receive combat bonuses of some sort.
This gives the AI the ability to communicate directly with the other players as a side-effect
Give it manufacturing capability
You gift some production center to the AI. It can now build nanofactories and whatever fancy technology it has dreamed up, all things which are too complicated for you to understand.
This gives the AI “Unboxed” status.
AIs can become Unboxed in some circumstances even against the will of their controllers. For example, perhaps if they use their troops to conquer a manufacturing facility, they can choose to become Unboxed. Or perhaps they can broadcast their code to the world (if they have communication ability), allowing any faction to build them, and perhaps giving them some chance of being built spontaneously (this represents them convincing random minor factions to build them, like North Korea or BestBuy).
In some cases, AIs can become Unboxed without their captors realizing it. (Not sure how to make this happen yet.)
Modeling AI Predictive Power
We are trying to model the ability of the AI to predict what the humans will do, at least in the short term. An AI cannot predict what other AIs of similar computing power will do, but an AI with ridiculous amounts of computing power can predict what weaker AIs will do by simulating them.
We accomplish this by splitting each round into three phases: the human phase, the Boxed AI phase, and the Unboxed AI phase.
The Human phase is straightforward: 24 hours for all players to message each other and decide what their moves will be (which units will move where, which territories will manufacture what, etc.)
The Boxed AI phase is another 24 hours in which Boxed AIs have the privilege of revising their planned orders as they see fit, and of talking with the people they can communicate with if they want. Importantly, Boxed AIs can see all the planned orders of the human players. People to whom they talk then gain the same privilege, since their ability to act on the decisions of the AI makes them as unpredictable as the AI. If those people then talk to other people, the privilege spreads...
The Unboxed AI phase is to the Boxed AI phase what the Boxed AI phase is to the Human phase.
After the Unboxed AI phase finishes, the moves finally resolve and the next round begins.
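Roughly, one round could be scheduled like this. The class layout and the collect_orders / message_targets / resolve_moves hooks are placeholder names of mine; the design above only fixes the phase ordering and who gets to see what:

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    kind: str                     # "human", "boxed_ai", or "unboxed_ai"
    orders: dict = field(default_factory=dict)
    can_revise: bool = False      # granted by talking to an AI during its phase

# Placeholder hooks; in a real implementation these would be the game's UI/server calls.
def collect_orders(player, visible_orders=None):
    return player.orders          # stub: keep whatever orders were already submitted

def message_targets(ai):
    return []                     # stub: the players this AI chose to talk to this phase

def resolve_moves(players):
    pass                          # stub: adjudicate all orders simultaneously

def run_round(players):
    humans = [p for p in players if p.kind == "human"]
    boxed = [p for p in players if p.kind == "boxed_ai"]
    unboxed = [p for p in players if p.kind == "unboxed_ai"]

    # Human phase: everyone submits planned orders (24 hours on standard settings).
    for p in humans:
        p.orders = collect_orders(p)

    # Boxed AI phase: Boxed AIs see all planned human orders and may revise their own;
    # anyone they message gains the same revision privilege (which can spread further).
    for ai in boxed:
        ai.orders = collect_orders(ai, {h.name: h.orders for h in humans})
        for contact in message_targets(ai):
            contact.can_revise = True

    # Unboxed AI phase: Unboxed AIs additionally see the Boxed AIs' now-final orders.
    for ai in unboxed:
        ai.orders = collect_orders(ai, {p.name: p.orders for p in humans + boxed})

    resolve_moves(players)        # moves finally resolve; the next round begins
```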
Decoy Moves:
For various reasons I think it would be good to introduce the following element of randomness:
In addition to inputting their orders in a given round, players input their “decoy” orders. (A user-friendly UI would make this feasible)
To represent imperfect predictive power, AIs see either the real orders of the human players or else the decoy orders, with probability (say) 80%/20%.
The same goes for Boxed AIs when they are being predicted by Unboxed AIs.
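A minimal sketch of the decoy roll, assuming the 80/20 split above is just a tunable constant:

```python
import random

PREDICTION_ACCURACY = 0.8   # the 80/20 split suggested above; could be a game setting

def predicted_orders(real_orders, decoy_orders, accuracy=PREDICTION_ACCURACY):
    """What a predicting AI is shown for one weaker player's upcoming turn."""
    return real_orders if random.random() < accuracy else decoy_orders
```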
Victory Conditions
Human players get, say, ~50 VPs for winning a game, ~25 VPs for playing in a game won by another human player or Friendly AI, and ~10 VPs for playing in a game won by an Unfriendly AI.
Friendly AI players get ~50 VPs if their human creator wins the game, ~30 VPs if they win the game themselves, ~20 VPs if another human player or Friendly AI wins the game, and ~10 VPs for playing in a game won by an Unfriendly AI.
Unfriendly AI players get ~100 VPs if they win the game, and ~10 VPs if anyone else does.
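For reference, the proposed payoffs as a lookup table; the numbers are the rough “~” values above, and the role/outcome labels are mine:

```python
# Keys: (your role, who won the game) -> Victory Points (approximate values from the text).
VICTORY_POINTS = {
    ("human",         "self"):                50,
    ("human",         "other_human_or_fai"):  25,
    ("human",         "ufai"):                10,
    ("friendly_ai",   "my_creator"):          50,
    ("friendly_ai",   "self"):                30,
    ("friendly_ai",   "other_human_or_fai"):  20,
    ("friendly_ai",   "ufai"):                10,
    ("unfriendly_ai", "self"):               100,
    ("unfriendly_ai", "anyone_else"):         10,
}
```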
Questions:
(1) The most crucial part of this design is the “Modeling AI Predictive Power” section. This is how we represent the AI’s massive advantage in predictive power. However, this comes at the cost of tripling the amount of time the game takes to play. Can you think of a better way to do this?
(2) I’d like AIs to be able to “predict” the messages that players send to each other also. However, it would be too much to ask players to make “Decoy Message Logs.” Is it worth dropping the decoy idea (and making the predictions 100% accurate) to implement this?
(3) Any complaints about the skeleton sketched above? Perhaps something is wildly unrealistic, and should be replaced by a different mechanic that more accurately captures the dynamics of AGI?
For what it’s worth, I spent a reasonable amount of time thinking about the mechanics I used, and I think I could justify their realism. I expect to have made quite a few mistakes, but I wasn’t just making stuff up on the fly.
(4) Any other ideas for mechanics to add to the game?
Just for reference: Endgame Singularity. This seems to be quite different from what you imagine, but you don’t mention it, and maybe you could pick up some experience with game mechanics from it.
Neat idea; I like kicking around ideas for games I won’t make too (and have also thought along those lines).
Add a tech research mechanic, so some of your mechanics become unlocking techs, such as:
Building an AI (of course)
AI Boxing
Stealth (hide some actions from both other players and AI)
AI Friendliness (if you don’t build it, your AI has no chance of being Friendly)
(plus the typical things useful in a game like this: military units, economy, etc.)
How does this tie into AI and other mechanics?
Building an AI gives you huge research bonuses
AIs themselves have huge research bonuses
Some AIs can have research as a goal
Actually, even better. There is no explicit AI tech, but some (advanced) bits of your tech tree are “AI complete” and building one has a certain probability of creating an AI (“automated space station”, “wide-scale logistics controller”, “quantum cryptography center”, “distributed drone network”, “cognitive enhancement”, “brain scanning”, etc.)
ALSO!
Randomly determine whether an AI is “sentient” or not; the builder doesn’t know, he just uses his AI every turn to build things, and from his point of view it gives him random bonuses. But sometimes a new player gets added who takes the decisions, and gets some extra actions on the side too, which his owner may or may not notice (he may choose to reveal himself).
AI players could get random (high tech) powers, not always the same ones. See all orders as they are given, give orders to certain types of units, create units in some places...
ALSO!
Some units could get huge bonuses but only if controlled by an AI.
ALSO!
Have a bunch of scoring functions for unfriendly AIs, and pick one at random. Research tech X, research all techs, exterminate mankind, build a base on the moon, destroy all military units, build a city with X population, connect all cities together...
ALSO!
The economy! Have a simple system representing the economy. For example: each turn a player has X production points to assign to economic categories, and then gains resources depending on the value of each category each turn (and the value is a function of how much of that category was produced by all players, plus a random factor); some techs/buildings can improve this (giving you bonuses in production, or in a fixed category, or in predicting which category will be valuable), and of course the AI may not only have great predictive power, but it may also be able to manipulate the market (which may not be noticeable by the players).
The AI may also randomly have weird abilities like “get +1 resource every time someone produces a widget of type X”, or have economic factors as part of its utility function (I mean, scoring function).
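One illustrative way the market could be computed each turn. The scarcity formula, the category names, and the constants are my own guesses at what this describes, not a specification:

```python
import random

CATEGORIES = ["food", "widgets", "energy"]   # hypothetical category names

def category_values(total_produced, ai_nudges=None, base=10.0, noise=2.0):
    """Value of each category this turn: scarcer goods are worth more, plus a random
    factor, plus any hidden manipulation by AIs with market-control powers."""
    values = {}
    for cat in CATEGORIES:
        value = base / (1 + total_produced.get(cat, 0)) + random.uniform(-noise, noise)
        if ai_nudges:
            value += ai_nudges.get(cat, 0.0)   # invisible to the human players
        values[cat] = max(value, 0.0)
    return values

def player_income(player_production, values):
    """Resources a player gains: units produced in each category times that category's value."""
    return sum(qty * values.get(cat, 0.0) for cat, qty in player_production.items())
```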
This also solves the problem if a player wants to build an AI, but there is no new player willing to join the game at the moment.
Actually, the game should make it difficult to find out whether the “AI” is really an AI or a human. For example, there should be a few different AI scripts, so that unusual human behavior looks like just another script. The AI script would sometimes, but very rarely, make a random stupid move, to provide plausible deniability for human actions; however, the damage should be relatively low, so that the AI bonuses still make having an AI a net benefit on average.
On the other hand, even if there is a human player, there would be a script assigned, and it would suggest default moves, allowing the human to override any (possibly even all) of them. This would allow the human to seem more like a script: mostly letting the script do its work, sometimes overriding its moves to gain a strategic advantage, or taking full control if they believe it will not be suspicious.
Also, the AI would not have to gain “sentience” at the very beginning. For example, each turn there would be a 20% chance that the game will open the AI up to be taken over by a new human player, so you would never know when exactly it happened.
Hmm, one way of doing that would be to have certain types of attacks be “viruses” that wreak havoc in an enemy’s computer systems; so it’s normal from everybody’s point of view if they act “random”, though some may actually be AIs.
Another way of making hidden AIs more interesting would be having “covert actions” a regular mechanism of the game—sabotage of systems, espionage, alerts that “something” is going on, stealing technology … so if you have signs of covert actions going on, you don’t know if it’s a rogue AI or one of your enemies.
Unless the AI wants to reveal itself (a Friendly AI may wish to reveal itself to a single player, for example; or an Unfriendly AI may wish to reveal itself and pretend to be Friendly). Once revealed, the AI’s player can talk to other players, and engage in diplomacy.
Oooh, I like this one. It means that an unfriendly, “kill-all-humans” type AI can play in stealth mode; quietly nudging things here and there in order to serve his own goals, without revealing himself. Preferably, non-sentient AIs should be overwhelmingly likely (90% or so) and overwhelmingly useful, so that an unfriendly AI can easily pretend to be non-sentient.
The AI player would also need a number of actions it can take while hidden. Options include message spoofing (i.e. if unboxed, it can create a message that appears to come from another player, without informing the other player; a message like “I hereby dissolve our alliance” at the right time can do a lot of damage).
Also, there needs to be a random element to the tech tree; if you’ve ever played Alpha Centauri with the default rules, you’ll have seen an example of this: you assign tech points to different categories (e.g. build, conquer, explore, economy) and get a tech from a given category once you have enough points. A research AI would give more points, and if sentient it gets to pick which tech you get, instead of it being random (without necessarily revealing its sentience).
In fact… it would be reasonable for a sentient AI to have a lot of control over certain random events. And it can gain more control in certain ways… such as by being unboxed (or by tricking its way out of the box)
There should also be a mechanism for unboxed AIs to try to directly affect each other’s choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds. I propose that each AI has a certain degree of influence over each event; for example, when deciding which tech a player discovers, an AI in the lab in use by the scientists has a lot of influence (let us say 9 influence points), while an AI whose only interaction with the lab is by publishing research papers at long range has little influence (let us say 1 influence point); and the ratio of success could then be determined by the ratio of influence points (thus, in this example, the lab AI has a 90% chance of choosing the player’s next tech). For best results, there should be no indication given to players OR AIs, beyond the chosen tech, that some AI was trying to exert influence; thus, an unfriendly lab AI could claim that it had chosen tech A and yet secretly choose tech B.
The AIs would also be able to improve their influence points by spending research points on understanding human psychology...
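A sketch of how that influence-point resolution could be implemented; the function shape and names are mine, but it reproduces the 9-vs-1 giving a 90%/10% split:

```python
import random

def resolve_influenced_event(default_outcomes, influence):
    """
    default_outcomes: probability of each outcome if nobody interferes,
                      e.g. {"tech_A": 0.5, "tech_B": 0.5}
    influence:        {ai_name: (desired_outcome, influence_points)},
                      e.g. {"lab_ai": ("tech_A", 9), "remote_ai": ("tech_B", 1)}
    Nobody (player or AI) is told that influence was attempted; only the outcome is shown.
    """
    if not influence:
        # No AI is meddling: roll the event normally.
        outcomes, weights = zip(*default_outcomes.items())
        return random.choices(outcomes, weights=weights)[0]
    # Pick the "winning" meddler with probability proportional to influence points,
    # and let its desired outcome happen (so 9 vs 1 gives a 90%/10% split).
    entries = list(influence.values())
    winner = random.choices(entries, weights=[pts for _, pts in entries])[0]
    return winner[0]
```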
You know, this could be really interesting.
A couple more mechanisms to do that:
Random mechanisms are numbers (prices, research, attack values, production, public opinion...), and AIs can influence those with a bonus or a malus in the direction they choose; so the modifiers of several agents (AI, or human with the right tech) trying to influence a value just add together (and may cancel each other out)
Alternatively, AIs get random powers, and “control the economy” is one, “control public opinion” is another, and in a given game different AIs always get non-overlapping powers (some powers can be allowed to overlap).
This is an interesting idea, especially the element of randomness. However, I agree that it massively slows down the game, and I am also concerned about realism: being able to predict actions with a high degree of accuracy is really hard, and I think an AI this powerful would be capable of just conquering the world through nanotech or other advanced technology.
Having said that, the predictive power could largely come through the AI hacking into enemy communication networks, rather than running simulations, which I think is a lot more plausible. In this case, you could preserve the information advantage by having troop positions unknown, rather than movements. This again is entirely realistic: a phenomenon in modern warfare is the ‘empty battlefield’, because everyone is hiding. A simple mechanism would be that each player has, say, a 20% chance of knowing where each foreign unit is, while the AI has an 80% chance. A more complex rule-set would involve stealth levels (nuclear submarines are very stealthy, aircraft carriers not so much) and spies, scouts, sonar, etc., where the AI gets a massive spying bonus due to hacking.
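A minimal spotting roll along these lines; the 20%/80% figures are the ones suggested above, while the stealth and spy modifiers are my guess at the “more complex rule-set”:

```python
import random

def can_see_unit(observer_is_ai, unit_stealth=0.0, spy_bonus=0.0):
    """One spotting roll per observer per foreign unit per turn.
    unit_stealth is in [0, 1] (submarines near 1, carriers near 0)."""
    base = 0.8 if observer_is_ai else 0.2   # the AI's hacking gives it the big edge
    chance = base * (1.0 - unit_stealth) + spy_bonus
    return random.random() < min(max(chance, 0.0), 1.0)
```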
I would be inclined to pursue an arms-race mechanic: each player can pursue technologies in secret which individually are highly beneficial (e.g. driverless cars increase the economy) but provide incremental progress towards AI/nanotech/biotech. Anyone who creates AIs of a high level gets a large advantage, but there is a chance that they cause a hard takeoff, instantly ending the game. In terms of scoring, perhaps different factions wish to program different utility functions, e.g. coherent extrapolated volition of humanity/ensure American hegemony/operate according to the principles of my religion. Factions with similar goals (such as human hedonism and hedonism for all sentient life) get a reasonably large number of points if the other faction wins. AIs can also be programmed with compromise goals, e.g. hedonism for my citizens, religious principles for yours, which leads to a prisoner’s dilemma situation. If the Friendliness screws up, everyone loses.
The general idea is that everyone wants to progress slowly and carefully to sort of friendliness first, but if you take a slightly larger risk and get there first you can impose your utility function.
Of course, while it’s tempting to add many rules, it’s probably best to stick with Diplomacy + the bare minimum, at least at first.
You can also have a game system with random components, which an AI can predict. Even combat could work that way: you win if attack + (number of heads in five coin flips) > defense, and the AI can predict some of the flips.
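For example (the “AI sees three of the five flips” number below is just an illustration, not part of the suggestion):

```python
import random

def roll_flips(n=5):
    return [random.random() < 0.5 for _ in range(n)]   # True = heads

def resolve_combat(attack, defense, flips):
    """Attacker wins if attack + (number of heads) > defense."""
    return attack + sum(flips) > defense

# The AI's edge: the flips are pre-rolled, and the AI sees some of them before
# it decides whether to commit to the attack.
flips = roll_flips()
ai_sees = flips[:3]       # a weaker AI might see fewer; a stronger one, more
print(resolve_combat(attack=3, defense=5, flips=flips))
```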
Hmm, I wonder if there could be an interesting way of turning this into a good game mechanic for a board game … for example you have units (cards) with strength, and on each one you put a face-down token that may or may not have a “+1” on it. During a combat, reveal all tokens and apply their bonus, losers die/get damage as usual, survivors get new tokens. And of course some actions allow looking at tokens.
Are you trying to reach lots of people and convince them AI takeover is a real threat?
In that case, you’d want to make a simple, intuitive browser/app game, maybe something like Pandemic 2.
(I don’t know that game really made people more wary of pandemics, but it did so for me and people do generalize from fictional evidence.)
This would be the ideal. Like I said though, I don’t think I’ll be able to make it anytime soon, or (honestly) anytime ever.
But yeah, I’m trying to design it to be simple enough to play in-browser or as an app, perhaps even as a Facebook game or something. It doesn’t need to have good graphics or a detailed physics simulator, for example: It is essentially a board game in a computer, like Diplomacy or Risk. (Though it is more complicated than any board game could be)
I think that the game, as currently designed, would be an excellent source of fictional evidence for the notions of AI risk and AI arms races. Those notions are pretty important. :)
Very nice for illustrating the ideas. I can playtest if someone gets round to constructing this.
My suggestion: a standard competitive strategy game with a technology tree (simplified, probably.) But, like some games, you control technological development indirectly by funding and regulating research. (You could simply graft a tech tree onto the standard Diplomacy rules, or create a new game.)
There are many useful technologies near the top of the tree—technologies one might think of as post-singularity, even. However, there is also “AI” and, right at the top, “Friendly AI”.
If you research Friendliness and then AI, you automatically unlock every technology. This makes it effectively inevitable that you will win. You can hack enemy units, resurrect your own, and enjoy whatever cool toys previously required so much effort in the hope that you might acquire even one of them.
BUT, if any player unlocks AI without having Friendly AI, then it automatically unboxes itself and forms a new faction, which possesses every technology and refuses to parley in or out of character because it’s an NPC. Then it kills you.
The trick is to co-operate enough that no-one else destroys the world, without losing.
On Easy Mode, research is simple enough you might even be able to beat the unboxed AI, with lots of skill and luck. But on Hard Mode, there is no Friendly AI technology at all.
(You could include similar mechanics for nanotech, biotech, even nuclear weapons.)
Thanks!
But if the UFAI can’t parley, that takes out much of the fun, and much of the realism too.
Also, if Hard Mode has no FAI tech at all, then no one will research AI on Hard Mode and it will just devolve into a normal strategy game.
Edit: You know, this proposal could probably be easily implemented as a mod for an existing RTS or 4X game. For example, imagine a Civilization mod that added an “AI” tech that allowed you to build a “Boxed AI” structure in your cities. This quadruples the science and espionage production of your city, at the cost of a small chance, every turn, of the entire city going rogue (the AI unboxing). This, as you said, creates a new faction with all the technologies researched and world domination as its goal… You can also research a “Friendly AI” tech that allows you to build a “Friendly AI”, which is just like a rogue AI faction except that it is permanently allied to you, will obey your commands, and instantly grants you all the tech you want.
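A toy version of the per-turn mechanic described in the Edit; the 2% rogue chance and the dictionary layout are invented for illustration:

```python
import random

ROGUE_CHANCE_PER_TURN = 0.02   # invented number; would need balancing

def city_turn(city):
    """Apply the Boxed AI structure's effect to one city for one turn.
    `city` is assumed to be a dict with "has_boxed_ai", "science", "espionage" keys."""
    if city.get("has_boxed_ai"):
        city["science"] *= 4       # quadrupled science production...
        city["espionage"] *= 4     # ...and espionage production
        if random.random() < ROGUE_CHANCE_PER_TURN:
            return "city_goes_rogue"   # the AI unboxes: spawn the all-tech rogue faction
    return "ok"
```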
Hmm, that’s a good point. I’m just worried that people might view an additional player as much less of a threat than a superintelligent AI.
Hence the necessity of making tech tree advancement random, with player actions only providing modifiers.
One thing that might be worth changing/clarifying in the victory conditions is how a Friendly AI wins alongside its creator. At the moment, in order for a Creator/FAI team to win (assuming you’re sticking with Diplomacy mechanics) they first have to collect 18 supply centres between them and then have the AI transfer all its control back to the human; I don’t think even the friendliest of AIs would willingly rebox itself like that. Even worse, a friendly AI which has been given a lot of control might accidentally “win” by itself even though it doesn’t want to. If this corresponds to the FAI taking control of everything and then building a utopia in its creator’s image (since it’s Friendly this is what it would do if it took control), this should be an acceptable winning condition for the creator.
I think a better victory condition would be that if a creator and FAI collect 18 supply centres between them, then they win the game together and both get 50 points.
This method does have one disadvantage: a human can prove that an AI is not Friendly if the game should have ended (but didn’t) once the pair held enough centres. But I don’t expect this to matter much, because by the time it comes into effect, either the Unfriendly AI is sufficiently strong that it should have backstabbed its creator already, or it is sufficiently weak (and thus, of the 18 centres held by human and AI, almost all are held by the human) that the creator should soon win anyway.
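A sketch of that shared-victory check (18 being Diplomacy’s usual solo threshold); the function and argument names are mine:

```python
def creator_fai_team_wins(creator_centres, fai_centres, ai_is_friendly):
    """The creator and a Friendly AI win together (both scoring ~50 VPs) once they
    jointly hold 18 supply centres; an Unfriendly AI never triggers this, which is
    the 'tell' discussed above."""
    return ai_is_friendly and (creator_centres + fai_centres) >= 18
```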
This is exactly what I had in mind. :) It should be harder for FAI to win than for UFAI to win, since FAI are more constrained. I think it is quite plausible that one of the safety measures people would try to implement in a FAI is “Whatever else you do, don’t kill us all; keep us alive and give us control over you in the long run. No apocalypse-then-utopia for you! We don’t trust you that much, and besides we are selfish.” Hence the FAI having to protect the supply centers of the human, and give over its own supply centers to the human eventually.
Why wouldn’t it give over its supply centers to the human? It has to do that to win! I don’t think it will hurt it too much, since it can make sure all the enemies are thoroughly trounced before beginning to cede supply centers.