So you’re OK with the FAI not interfering if they want to kill them for the “right” reasons?
I wouldn’t like it. But if the alternative is, for example, to have FAI directly enforce the values of the minority on the majority (or vice versa) - the values that would make them kill in order to satisfy/prevent—then I prefer FAI not interfering.
“if we kill them, we will benefit by dividing their resources among ourselves”
If the resources are so scarce that dividing them is so important that even CEV-s agree on the necessity of killing, then again, I prefer humans to decide who gets them.
So you’re saying your version of CEV will forcibly update everyone’s beliefs
No. CEV does not update anyone’s beliefs. It is calculated by extrapolating values in the presence of full knowledge and sufficient intelligence.
If the original person effectively assigns 0 or 1 “non-updateable probability” to some belief, or honestly doesn’t believe in objective reality, or believes in “subjective truth” of some kind, CEV is not necessarily going to “cure” them of it—especially not by force.
As I said elsewhere, if a person’s beliefs are THAT incompatible with truth, I’m ok with ignoring their volition. Note that their CEV is undefined in this case. But I don’t believe there exist such people (excluding the totally insane).
That there exists a possible compromise that is better than total defeat doesn’t mean total victory wouldn’t be much better than any compromise.
But the total loss would be correspondingly worse. PD reasoning says you should cooperate (assuming cooperation is precommittable).
If you think so you must have evidence relating to how to actually solve this problem. Otherwise they’d both look equally mysterious. So, what’s your idea?
Off the top of my head, adoption of total transparency for everybody of all governmental and military matters.
If the resources are so scarce that dividing them is so important that even CEV-s agree on the necessity of killing, then again, I prefer humans to decide who gets them.
The resources are not scarce at all. But, there’s no consensus of CEVs. The CEVs of 80% want to kill the rest. The CEVs of 20% obviously don’t want to be killed. Because there’s no consensus, your version of CEV would not interfere, and the 80% would be free to kill the 20%.
No. CEV does not update anyone’s beliefs. It is calculated by extrapolating values in the presence of full knowledge and sufficient intelligence.
I meant that the AI that implements your version of CEV would forcibly update people’s actual beliefs to match what it CEV-extrapolated for them. Sorry for the confusion.
As I said elsewhere, if a person’s beliefs are THAT incompatible with truth, I’m ok with ignoring their volition. Note that their CEV is undefined in this case. But I don’t believe there exist such people (excluding the totally insane).
A case could be made that many millions of religious “true believers” have un-updatable 0/1 probabilities. And so on.
Your solution is to not give them a voice in the CEV at all. Which is great for the rest of us—our CEV will include some presumably reduced term for their welfare, but they don’t get to vote on things. This is something I would certainly support in a FAI (regardless of CEV), just as I would support using CEV<the sane only>, or even CEV<a narrower group>, in preference to CEV<everybody>.
The only difference between us then is that I estimate there to be many such people. If you believed there were many such people, would you modify your solution, or is ignoring them however many they are fine by you?
PD reasoning says you should cooperate (assuming cooperation is precommittable).
As I said before, this reasoning is inapplicable, because this situation is nothing like a PD.
The PD reasoning to cooperate only applies in case of iterated PD, whereas creating a singleton AI is a single game.
Unlike PD, the payoffs are different between players, and players are not sure of each other’s payoffs in each scenario. (E.g., minor/weak players are more likely to cooperate than big ones that are more likely to succeed if they defect.)
The game is not instantaneous, so players can change their strategy based on how other players play. When they do so they can transfer value gained by themselves or by other players (e.g. join research alliance 1, learn its research secrets, then defect and sell the secrets to alliance 2).
It is possible to form alliances, which gain by “defecting” as a group. In PD, players cannot discuss alliances or trade other values to form them before choosing how to play.
There are other games going on between players, so they already have knowledge and opinions and prejudices about each other, and desires to cooperate with certain players and not others. Certain alliances will form naturally, others won’t.
adoption of total transparency for everybody of all governmental and military matters.
This counts as very weak evidence because it proves it’s at least possible to achieve this, yes. (If all players very intensively inspect all other players to make sure a secret project isn’t being hidden anywhere—they’d have to recruit a big chunk of the workforce just to watch over all the rest.)
But the probability of this happening in the real world, between all players, as they scramble to be the first to build an apocalyptic new weapon, is so small it’s not even worth discussion time. (Unless you disagree, of course.) I’m not convinced by this that it’s an easier problem to solve than that of building AGI or FAI or CEV.
The resources are not scarce at all. But, there’s no consensus of CEVs. The CEVs of 80% want to kill the rest.
The resources are not scarce, yet the CEV-s want to kill? Why?
I meant that the AI that implements your version of CEV would forcibly update people’s actual beliefs to match what it CEV-extrapolated for them.
It would do so only if everybody’s CEV-s agree that updating these people’s beliefs is a good thing.
If you believed there were many such people, would you modify your solution, or is ignoring them however many they are fine by you?
People that would still have false factual beliefs no matter how much evidence and how much intelligence they have? They would be incurably insane. Yes, I would agree to ignore their volition, no matter how many they are.
The PD reasoning to cooperate only applies in case of iterated PD
Err. What about arguments of Douglas Hofstadter and EY, and decision theories like TDT?
Unlike PD, the payoffs are different between players, and players are not sure of each other’s payoffs in each scenario
This doesn’t really matter for a broad range of possible payoff matrices.
join research alliance 1, learn its research secrets, then defect and sell the secrets to alliance 2
Cooperating in this game would mean there is exactly one global research alliance. A cooperating move is a precommitment to abide by its rules. Enforcing such precommitment is a separate problem. Let’s assume it’s solved.
I’m not convinced by this that it’s an easier problem to solve than that of building AGI or FAI or CEV.
Maybe you’re right. But IMHO it’s a less interesting problem :)
The resources are not scarce, yet the CEV-s want to kill? Why?
Sorry for the confusion. Let’s taboo “scarce” and start from scratch.
I’m talking about a scenario where—to simplify only slightly from the real world—there exist some finite (even if growing) resources that almost everyone, no matter how much they already have, wants more of. A coalition of 80% of the population forms, which would like to kill the other 20% in order to get their resources. Would the AI prevent this, although there is no consensus against the killing?
If you still want to ask whether the resource is “scarce”, please specify what that means exactly. Maybe any finite and highly desirable resource, with returns diminishing weakly or not at all, can be considered “scarce”.
It would do so only if everybody’s CEV-s agree that updating these people’s beliefs is a good thing.
People that would still have false factual beliefs no matter how much evidence and how much intelligence they have? They would be incurably insane. Yes, I would agree to ignore their volition, no matter how many they are.
As I said—this is fine by me insofar as I expect the CEV not to choose to ignore me. (Which means it’s not fine through the Rawlsian veil of ignorance, but I don’t care and presumably neither do you.)
The question of definition (who is to be included in the CEV? who is considered sane?) becomes of paramount importance. Since it is not itself decided by the CEV, it is presumably hardcoded into the AI design (or evolves within that design as the AI self-modifies, but that’s very dangerous without formal proofs that it won’t evolve to include the “wrong” people). The simplest way to hardcode it is to directly specify the people to be included, but you prefer testing for qualifications.
However this is realized, it would give people even more incentive to influence or stop your AI building process or to start their own to compete, since they would be afraid of not being included in the CEV used by your AI.
The PD reasoning to cooperate only applies in case of iterated PD
Err. What about arguments of Douglas Hofstadter and EY, and decision theories like TDT?
TDT applies where agents are “similar enough”. I doubt I am similar enough to e.g. the people you labelled insane.
Which arguments of Hofstadter and Yudkowsky do you mean?
Cooperating in this game would mean there is exactly one global research alliance.
Why? What prevents several competing alliances (or single players) from forming, competing for the cooperation of the smaller players?
A coalition of 80% of the population forms, which would like to kill the other 20% in order to get their resources
I have trouble thinking of a resource that would make even one person’s CEV, let alone 80%, want to kill people, in order to just have more of it.
The question of definition (who is to be included in the CEV? who is considered sane?)
This is easy, and does not need any special hardcoding. If someone is so insane that their beliefs are totally closed and impossible to move by knowledge and intelligence, then their CEV is undefined. Thus, they are automatically excluded.
TDT applies where agents are “similar enough”. I doubt I am similar enough to e.g. the people you labelled insane.
We are talking about people building FAI-s. Surely they are intelligent enough to notice the symmetry between themselves. If you say that logic and rationality makes you decide to ‘defect’ (=try to build FAI on your own, bomb everyone else), then logic and rationality would make everyone decide to defect. So everybody bombs everybody else, no FAI gets built, everybody loses. Instead you can ‘cooperate’ (=precommit to build FAI<everybody’s CEV> and to bomb everyone that did not make the same precommitment). This gets us a single global alliance.
I have trouble thinking of a resource that would make even one person’s CEV, let alone 80%, want to kill people, in order to just have more of it.
(shrug) Space (land or whatever is being used). Mass and energy. Natural resources. Computing power. Finite-supply money and luxuries if such exist.
Or are you making an assumption that CEVs are automatically more altruistic or nice than non-extrapolated human volitions are?
This is easy, and does not need any special hardcoding. If someone is so insane that their beliefs are totally closed and impossible to move by knowledge and intelligence, then their CEV is undefined. Thus, they are automatically excluded.
Well it does need hardcoding: you need to tell the CEV to exclude people whose EVs are too similar to their current values despite learning contrary facts. Or even all those whose belief-updating process differs too much from perfect Bayesian (and how much is too much?) This is something you’d hardcode in, because you could also write (“hardcode”) a CEV that does include them, allowing them to keep the EVs close to their current values.
Not that I’m opposed to this decision (if you must have CEV at all).
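A toy sketch of that point (the names, the responsiveness measure, and the threshold are hypothetical, invented only to illustrate that the inclusion test and its “how much is too much?” cutoff are fixed by the designers rather than by the CEV itself):

```python
# Purely illustrative: the names, the responsiveness measure, and the 0.05
# threshold are hypothetical. The point is only that whatever criterion is
# chosen, it and its cutoff must be fixed by the designers ("hardcoded"),
# since they cannot be outputs of the CEV whose membership they define.

def include_in_cev(person, responsiveness_to_evidence, threshold=0.05):
    """Include a person iff their credences move at least `threshold`
    (on some agreed measure) when shown strong contrary evidence."""
    return responsiveness_to_evidence(person) >= threshold
```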
We are talking about people building FAI-s. Surely they are intelligent enough to notice the symmetry between themselves.
There’s a symmetry, but “first person to complete AI wins, everyone ‘defects’” is also a symmetrical situation. Single-iteration PD is symmetrical, but everyone defects. Mere symmetry is not sufficient for TDT-style “decide for everyone”, you need similarity that includes similarly valuing the same outcomes. Here everyone values the outcome “have the AI obey ME!”, which is not the same.
If you say that logic and rationality makes you decide to ‘defect’ (=try to build FAI on your own, bomb everyone else), then logic and rationality would make everyone decide to defect. So everybody bombs everybody else, no FAI gets built, everybody loses.
Or someone is stronger than everyone else, wins the bombing contest, and builds the only AI. Or someone succeeds in building an AI in secret, avoiding being bombed. Or there’s a player or alliance that’s strong enough to deter bombing due to the threat of retaliation, and so completes their AI which doesn’t care about everyone else much. There are many possible and plausible outcomes besides “everybody loses”.
Instead you can ‘cooperate’ (=precommit to build FAI<everybody’s CEV> and to bomb everyone that did not make the same precommitment). This gets us a single global alliance.
Or while the alliance is still being built, a second alliance or very strong player bombs them to get the military advantages of a first strike. Again, there are other possible outcomes besides what you suggest.
Space (land or whatever is being used). Mass and energy. Natural resources. Computing power. Finite-supply money and luxuries if such exist. Or are you making an assumption that CEVs are automatically more altruistic or nice than non-extrapolated human volitions are?
These all have the property that you only need so much of them. If there is a sufficient amount for everybody, then there is no point in killing in order to get more. I expect CEV-s to not be greedy just for the sake of greed. It’s people’s CEV-s we’re talking about, not paperclip maximizers’.
Well it does need hardcoding: you need to tell the CEV to exclude people whose EVs are too similar to their current values despite learning contrary facts. Or even all those whose belief-updating process differs too much from perfect Bayesian (and how much is too much?) This is something you’d hardcode in, because you could also write (“hardcode”) a CEV that does include them, allowing them to keep the EVs close to their current values.
Hmm, we are starting to argue about the exact details of the extrapolation process...
There are many possible and plausible outcomes besides “everybody loses”.
Let’s formalize the problem. Let F(R, Ropp) be the probability of a team successfully building a FAI first, given R resources and opposition with Ropp resources. Let Uself, Ueverybody, and Uother be our rewards when the first FAI built is FAI<our own CEV>, FAI<everybody’s CEV>, or FAI<someone else’s CEV>, respectively. Naturally, F is monotonically increasing in R and decreasing in Ropp, and Uother < Ueverybody < Uself.
Assume there are just two teams, with resources R1 and R2, and each can perform one of two actions: “cooperate” or “defect”. Let’s compute the expected utilities for the first team:
We cooperate, opponent team cooperates:
EU("CC") = Ueverybody * F(R1+R2, 0)
We cooperate, opponent team defects:
EU("CD") = Ueverybody * F(R1, R2) + Uother * F(R2, R1)
We defect, opponent team cooperates:
EU("DC") = Uself * F(R1, R2) + Ueverybody * F(R2, R1)
We defect, opponent team defects:
EU("DD") = Uself * F(R1, R2) + Uother * F(R2, R1)
Then, EU(“CD”) < EU(“DD”) < EU(“DC”), which gives us most of the structure of a PD problem. The rest, however, depends on the finer details. Let A = F(R1,R2)/F(R1+R2,0) and B = F(R2,R1)/F(R1+R2,0). Then:
If Ueverybody <= Uself*A + Uother*B, then EU(“CC”) <= EU(“DD”), and there is no point in cooperating. This is your position: Ueverybody is much less than Uself, or Uother is not much less than Ueverybody, and/or your team has much more resources than the other.
If Uself*A + Uother*B < Ueverybody < Uself*A/(1-B), this is a true Prisoner’s dilemma.
If Ueverybody >= Uself*A/(1-B), then EU(“CC”) >= EU(“DC”), and “cooperate” is the obviously correct decision. This is my position: Ueverybody is not much less than Uself, and/or the teams are more evenly matched.
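To make the three regions concrete, here is a minimal Python sketch. The form of F and all the numbers are illustrative assumptions of mine; only the ordering Uother < Ueverybody < Uself and the inequalities above come from the model. One caveat worth noting: with the simplest choice F(R, Ropp) = R/(R + Ropp + 1), A + B = 1 exactly and the cooperate condition collapses to Ueverybody >= Uself, so F needs some superadditivity (opposition wasting effort) for a cooperate region to exist at all; the factor of 2 on Ropp below is one arbitrary way to get that.

```python
# Minimal sketch of the two-team model above. The form of F and all numbers
# are illustrative assumptions; only the ordering U_other < U_everybody < U_self
# and the three inequalities derived above are taken from the discussion.

def F(R, R_opp):
    # Assumed success probability: increasing in R, decreasing in R_opp.
    # The factor 2 on R_opp makes pooling resources superadditive
    # (opposition wastes effort), which is what lets a cooperate region exist.
    return R / (R + 2.0 * R_opp + 1.0)

def classify(U_self, U_everybody, U_other, R1, R2):
    EU_CC = U_everybody * F(R1 + R2, 0)
    EU_CD = U_everybody * F(R1, R2) + U_other * F(R2, R1)
    EU_DC = U_self * F(R1, R2) + U_everybody * F(R2, R1)
    EU_DD = U_self * F(R1, R2) + U_other * F(R2, R1)

    A = F(R1, R2) / F(R1 + R2, 0)
    B = F(R2, R1) / F(R1 + R2, 0)

    if U_everybody <= U_self * A + U_other * B:
        region = "defect: EU_CC <= EU_DD"
    elif U_everybody < U_self * A / (1 - B):
        region = "true Prisoner's Dilemma"
    else:
        region = "cooperate: EU_CC >= EU_DC"
    return region, {"CC": EU_CC, "CD": EU_CD, "DC": EU_DC, "DD": EU_DD}

# Evenly matched teams with U_everybody close to U_self land in the cooperate region.
print(classify(U_self=1.0, U_everybody=0.9, U_other=0.1, R1=10, R2=10))
```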
These all have the property that you only need so much of them.
All of those resources are fungible and can be exchanged for time. There might be no limit to the amount of time people desire, even very enlightened posthuman people.
I don’t think you can get an everywhere-positive exchange rate. There are diminishing returns and a threshold, after which, exchanging more resources won’t get you any more time. There’s only 30 hours in a day, after all :)
You can use some resources like computation directly and in unlimited amounts (e.g. living for unlimitedly long virtual times per real second inside a simulation). There are some physical limits on that due to speed of light limiting effective brain size, but that depends on brain design and anyway the limits seem to be pretty high.
More generally: the number of configurations physically possible in a given volume of space is limited (by the entropy of a black hole). If you have a utility function unbounded from above, as it rises it must eventually map to states that describe more space or matter than the amount you started with. Any utility maximizer with unbounded utility eventually wants to expand.
I don’t know what the exchange rates are, but it does cost something (computer time, energy, negentropy) to stay alive. That holds for simulated creatures too. If the available resources to keep someone alive are limited, then I think there will be conflict over those resources.
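For reference, the bound alluded to above is the Bekenstein-Hawking entropy (standard physics, added here only to pin down what “limited” means): a region whose boundary has area A can contain at most the entropy of a black hole of that size, so the number of distinguishable configurations in it is finite.

```latex
% Maximum entropy of a region with bounding area A (Bekenstein-Hawking),
% and the resulting bound on the number N of distinguishable configurations.
S_{\max} = \frac{k_B c^3 A}{4 G \hbar} = k_B\,\frac{A}{4\,\ell_P^2},
\qquad
N \le e^{S_{\max}/k_B} = e^{A/(4\ell_P^2)},
\qquad
\ell_P = \sqrt{\frac{G\hbar}{c^3}}.
```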
Naturally, F is monotonically increasing in R and decreasing in Ropp
You’re treating resources as one single kind, where really there are many kinds with possible trades between teams. Here you’re ignoring a factor that might actually be crucial to encouraging cooperation (I’m not saying I showed this formally :-)
Assume there are just two teams
But my point was exactly that there would be many teams who could form many different alliances. Assuming only two is unrealistic and just ignores what I was saying. I don’t even care much if given two teams the correct choice is to cooperate, because I set very low probability on there being exactly two teams and no other independent players being able to contribute anything (money, people, etc) to one of the teams.
This is my position
You still haven’t given good evidence for holding this position regarding the relation between the different Uxxx utilities. Except for the fact that CEV is not really specified, so it could be built in a way that makes this true. But it could equally be built so that it’s false. There’s no point in arguing over which possibility “CEV” really refers to (although if everyone agreed on something, that would clear up a lot of debates); the important questions are what we want a FAI to do if we build one, and what we anticipate others to tell their FAIs to do.
You’re treating resources as one single kind, where really there are many kinds with possible trades between teams
I think this is reasonably realistic. Let R signify money. Then R can buy other necessary resources.
But my point was exactly that there would be many teams who could form many different alliances. Assuming only two is unrealistic and just ignores what I was saying.
We can model N teams by letting them play two-player games in succession. For example, any two teams with nearly matched resources would cooperate with each other, producing a single combined team, etc… This may be an interesting problem to solve, analytically or by computer modeling.
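As one possible starting point for that modeling, here is a rough Python sketch of the successive-pairwise-games idea: teams with random resources meet in random pairs and merge only when both sides are in their cooperate region (EU(“CC”) >= EU(“DC”), checked from each side’s perspective). The merge rule, the form of F, and all parameters are my assumptions, not part of your proposal.

```python
import random

# Rough sketch of the successive-pairwise-games idea: teams with random
# resources meet in random pairs and merge only when BOTH sides are in their
# cooperate region (EU_CC >= EU_DC, checked from each side's perspective).
# F, the merge rule, and all parameters are illustrative assumptions.

U_SELF, U_EVERYBODY = 1.0, 0.9

def F(R, R_opp):
    return R / (R + 2.0 * R_opp + 1.0)  # assumed success probability

def both_prefer_cooperation(R1, R2):
    A = F(R1, R2) / F(R1 + R2, 0)
    B = F(R2, R1) / F(R1 + R2, 0)
    team1_cooperates = U_EVERYBODY >= U_SELF * A / (1 - B)
    team2_cooperates = U_EVERYBODY >= U_SELF * B / (1 - A)
    return team1_cooperates and team2_cooperates

def simulate(n_teams=10, rounds=200, seed=0):
    rng = random.Random(seed)
    teams = [rng.uniform(1, 100) for _ in range(n_teams)]
    for _ in range(rounds):
        if len(teams) < 2:
            break
        i, j = rng.sample(range(len(teams)), 2)
        if both_prefer_cooperation(teams[i], teams[j]):
            merged = teams[i] + teams[j]
            teams = [r for k, r in enumerate(teams) if k not in (i, j)]
            teams.append(merged)
    return sorted(teams, reverse=True)

print(simulate())  # resource sizes of the surviving alliances
```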
You still haven’t given good evidence for holding this position regarding the relation between the different Uxxx utilities.
You’re right. Initially, I thought that the actual values of the Uxxx-s would not be important for the decision, as long as their relative preference order was as stated. But this turned out to be incorrect. There are regions of cooperation and defection.
Analytically, I don’t a priori expect a succession of two-player games to have the same result as one many-player game which also has duration in time and not just a single round.
Because there’s no consensus, your version of CEV would not interfere, and the 80% would be free to kill the 20%.
There may be a distinction between “the AI will not prevent the 80% from killing the 20%” and “nothing will prevent the 80% from killing the 20%” that is getting lost in your phrasing. I am not convinced that the math doesn’t make them equivalent, in the long run—but I’m definitely not convinced otherwise.
I’m assuming the 80% are capable of killing the 20% unless the AI interferes. That’s part of the thought experiment. It’s not unreasonable, since they are 4 times as numerous. But if you find this problematic, suppose it’s 99% killing 1% at a time. It doesn’t really matter.
My point is that we currently have methods of preventing this that don’t require an AI, and which do pretty well. Why do we need the AI to do it? Or more specifically, why should we reject an AI that won’t, but may do other useful things?
There have been, and are, many mass killings of minority groups and of enemy populations and conscripted soldiers at war. If we cure death and diseases, this will become the biggest cause of death and suffering in the world. It’s important and we’ll have to deal with it eventually.
The AI under discussion would not just fail to solve the problem; it would (I contend) become a singleton and prevent me from building another AI that does solve it. (If it chooses not to become a singleton, it will quickly be supplanted by an AI that does try to become one.)