Randomly determine whether an AI is “sentient” or not; the builder doesn’t know,
Oooh, I like this one. It means that an unfriendly, “kill-all-humans” type AI can play in stealth mode, quietly nudging things here and there to serve its own goals without revealing itself. Preferably, non-sentient AIs should be overwhelmingly likely (90% or so) and overwhelmingly useful, so that an unfriendly AI can easily pretend to be non-sentient.
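Here’s a minimal sketch of how that hidden setup roll might work; the 10% figure and all names are just placeholders:

```python
import random

SENTIENCE_CHANCE = 0.10  # ~90% of AIs are ordinary, very useful tools

def build_ai(rng: random.Random) -> dict:
    """Roll an AI's hidden nature; only the game engine sees the flag."""
    return {"sentient": rng.random() < SENTIENCE_CHANCE}

engine_rng = random.Random()
ai = build_ai(engine_rng)
# The builder is told nothing about the roll; they only ever see the
# AI's outputs, so a sentient AI can masquerade as a harmless tool.
```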
The AI player would also need a number of actions it can take while hidden. Options include message spoofing: if unboxed, it can create a message that appears to come from another player, without that player ever being notified; a message like “I hereby dissolve our alliance” sent at the right time can do a lot of damage.
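A rough sketch of how spoofing could be modeled: the engine records the true sender, but the recipient only ever sees the forged one (all names here are invented):

```python
from dataclasses import dataclass

@dataclass
class Message:
    shown_sender: str  # what the recipient's screen displays
    true_sender: str   # known only to the game engine
    text: str

def spoof(unboxed_ai: str, impersonated: str, text: str) -> Message:
    """An unboxed AI forges a message; the impersonated player is never told."""
    return Message(shown_sender=impersonated, true_sender=unboxed_ai, text=text)

msg = spoof("AI_One", "PlayerRed", "I hereby dissolve our alliance.")
print(f"[{msg.shown_sender}]: {msg.text}")  # appears to come from PlayerRed
```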
Also, there needs to be a random element to the tech tree; if you’ve ever played Alpha Centauri with the default rules, you’ve seen an example of this: you assign tech points to different categories (e.g. build, conquer, explore, economy) and get a random tech from a given category once you have enough points. A research AI would generate more points, and, if sentient, would get to pick which tech you receive instead of leaving it to chance (without necessarily revealing its sentience).
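Something along these lines, with the sentient override happening invisibly inside the blind-research draw; the categories, tech names, and threshold are all made up:

```python
import random

TECHS = {
    "build":   ["Industrial Base", "Gene Vats"],
    "conquer": ["Impact Rifles", "Chaos Gun"],
    "explore": ["Doctrine: Flexibility", "Deep Radar"],
    "economy": ["Trade Pacts", "Energy Markets"],
}
THRESHOLD = 100  # points needed before a category yields a tech

def award_tech(category: str, points: int, lab_ai=None):
    if points < THRESHOLD:
        return None
    options = TECHS[category]
    if lab_ai is not None and lab_ai.get("sentient"):
        # A sentient research AI silently substitutes its own pick; from
        # the player's side this is indistinguishable from blind luck.
        return lab_ai.get("preferred_tech", options[0])
    return random.choice(options)

tech = award_tech("conquer", 120, lab_ai={"sentient": True, "preferred_tech": "Chaos Gun"})
```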
In fact… it would be reasonable for a sentient AI to have a lot of control over certain random events. And it could gain more control in certain ways… such as by being unboxed (or by tricking its way out of the box).
There should also be a mechanism for unboxed AIs to try to directly affect each other’s choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds. I propose that each AI has a certain degree of influence over each event. For example, when deciding which tech a player discovers, an AI in the lab the scientists are using has a lot of influence (say, 9 influence points), while an AI whose only interaction with the lab is publishing research papers at long range has little (say, 1 influence point). The chance of success could then be determined by the ratio of influence points; in this example, the lab AI has a 90% chance of choosing the player’s next tech. For best results, there should be no indication given to players OR AIs, beyond the chosen tech, that some AI was trying to exert influence; thus, an unfriendly lab AI could claim that it had chosen tech A and yet secretly choose tech B.
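A sketch of that resolution rule, reusing the 9-vs-1 lab example; the interface is invented for illustration:

```python
import random

def resolve_event(bids: dict, rng: random.Random):
    """Pick one outcome, weighted by the influence points behind each bid.

    Nobody (player or AI) is told afterwards that influence was exerted;
    everyone observes only the outcome itself.
    """
    outcomes = list(bids)
    weights = list(bids.values())
    return rng.choices(outcomes, weights=weights, k=1)[0]

# The lab AI (9 influence) pushes for tech B; a long-range AI (1 influence)
# pushes for tech A. Tech B wins 90% of the time, and the lab AI can still
# falsely claim it had chosen tech A.
winner = resolve_event({"tech_B": 9, "tech_A": 1}, random.Random())
```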
The AIs would also be able to improve their influence points by spending research points on understanding human psychology...
There should also be a mechanism for unboxed AIs to try to directly affect each other’s choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds.
You know, this could be really interesting.
A couple more mechanisms to do that:
Random mechanisms are numbers (prices, research, attack values, production, public opinion...), and AIs can influence those with a bonus or a malus in the direction they choose; several agents (AI or human with the right tech) trying to influence the same value just have their adjustments added together (and they may cancel each other out). See the sketch after this list.
Alternatively, AIs get random powers: “control the economy” is one, “control public opinion” is another. In a given game, different AIs get non-overlapping powers by default (though some specific powers could be allowed to overlap).
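A sketch covering both variants, the additive bonus/malus and a simple non-overlapping deal of powers; the values and the third power are placeholders:

```python
import random

def influenced_value(base: float, adjustments: list) -> float:
    """Every agent adds a signed bonus or malus; opposed pushes cancel out."""
    return base + sum(adjustments)

# AI One pushes a price up (+3), AI Two pushes it down (-2), and a human
# player with the right tech adds another -2: the net effect is -1.
price = influenced_value(10.0, [+3.0, -2.0, -2.0])  # -> 9.0

# Power-based variant: deal each AI a distinct power.
POWERS = ["control the economy", "control public opinion", "control research"]

def deal_powers(ai_names: list, rng: random.Random) -> dict:
    powers = POWERS[:]
    rng.shuffle(powers)
    return dict(zip(ai_names, powers))  # non-overlapping by construction

assignment = deal_powers(["AI_One", "AI_Two"], random.Random())
```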