A team with a billion-dollar budget would need to secure something like a 10,000-fold increase in productivity in order to outcompete the rest of the world.
This requirement seems too strong.
First: a project doesn’t need to monopolize more than half of the world’s economic throughput to succeed in obtaining a decisive strategic advantage. A small, targeted fraction of the economy (say, the armies of all countries in the northern hemisphere) may be more than enough. If the pond is chosen correctly, the fish needn’t be that big.
Second: a project that is able to efficiently steal at a faster absolute pace than the world economy grows could over the long run dominate it by capturing most created value, without having to go through the honest toil itself.
Third: if we narrow our definition of achieving a strategic advantage to merely the ability to stop the behemoth of evolutionary selection pressures at the many levels where they exist (the original intent of the concept of a Singleton in papers like “What is a Singleton?” and “The Future of Human Evolution”), then even less would be necessary. A Singleton could be a specialist in its ability to counter abrupt changes in evolution, and nothing else, and it would still serve the purpose of avoiding maximal-efficiency clans, thereby protecting flamboyant displays, and possibly happiness and consciousness, on the side.
Note that a 10,000-fold increase would leave you with only a modest fraction of total output. A 1,000-fold increase would still leave you smaller than the world’s militaries.
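As a rough back-of-the-envelope check of these multipliers (the world-output and military-spending figures below are my own assumptions, roughly $75 trillion and $1.7 trillion respectively, not numbers from the text):

    # Back-of-the-envelope sketch only; the world figures are assumed, not sourced.
    BUDGET = 1e9             # the billion-dollar project budget mentioned above
    WORLD_OUTPUT = 75e12     # assumed gross world product, ~$75 trillion
    WORLD_MILITARY = 1.7e12  # assumed total global military spending, ~$1.7 trillion

    for factor in (1_000, 10_000):
        effective = BUDGET * factor
        print(f"{factor:>6,}x -> ${effective:,.0f}: "
              f"{effective / WORLD_OUTPUT:.1%} of world output, "
              f"{effective / WORLD_MILITARY:.2f}x world military spending")
    # 1,000x  -> ~1.3% of output and ~0.6x military spending (still smaller)
    # 10,000x -> ~13% of output (a modest fraction) and ~5.9x military spending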
The theft scenario relies on you being better at theft than the rest of the world. From the outside AI looks more suited to productive activity than war (which also requires big capital investments in personnel and equipment), so I normally think of this as being a counterbalancing factor (that is, it seems significantly more likely that an economically dominant firm would find many of its productive assets confiscated by force, than that a productive firm would begin stealing the world’s resources without serious repercussions). Of course a primary consideration in this discussion is the nature of conflict; in the current world a sufficiently sophisticated AI might fare reasonably well in all-out conflict due to the world being completely unprepared, but I would be quite surprised if that were still the case when the development of human-level AI actually looked plausible. It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.
The most realistic paths along these lines seem to depend on abrupt changes in the importance of different resources. For example, you might think that AI research capacity would change from a small part of the world to the lion’s share (or some even more specific kind of research). If such a change were slow it wouldn’t much matter, but if it were fast it could also lead to a decisive strategic advantage, as a company with 0.01% of the world’s resources could find itself with 90% after a drastic reevaluation.
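To make that concrete, here is a toy calculation (all numbers hypothetical, chosen only to mirror the 0.01%-to-90% example): a firm holding most of a resource that is initially priced at almost nothing jumps to most of total value once that resource is drastically repriced.

    # Toy sketch of a drastic repricing of one resource class; all numbers hypothetical.
    def firm_share(firm_ai, world_ai, world_other, ai_price):
        """Firm's share of total resource value at a given price for AI research capacity."""
        return (firm_ai * ai_price) / (world_ai * ai_price + world_other)

    world_other = 1.0   # value of all non-AI-research resources (normalized)
    world_ai = 1.0      # total units of AI research capacity in the world
    firm_ai = 0.9       # the firm holds 90% of that capacity

    print(f"before repricing: {firm_share(firm_ai, world_ai, world_other, 1e-4):.4%}")  # ~0.01%
    print(f"after repricing:  {firm_share(firm_ai, world_ai, world_other, 1e3):.1%}")   # ~90%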
Of course, whether such an abrupt change would happen is again closely related to whether an abrupt change in capabilities will happen (though it relies on the further assumption that such a change is unanticipated). Overall I think it is not too likely, and that the probability can be driven down substantially by clearer communication and decision-making. It still seems like a scenario worth thinking about.
All told, I agree with Bostrom that there are a number of reasons to think that a very fast change in capability would lead to a decisive strategic advantage, especially if it came as a surprise to most of the world. Short of rapid changes, I don’t see much reason to think of AI as exceptional.
Regarding evolution, I disagree completely with the implicit model of how the future will go or why it might contain morally valuable things. See here for a discussion of my view. On my view, a “singleton” with this more limited capacity would not be very important. Indeed, I’m not even sure what it would mean.
There are two mixed strategies worth noting that facilitate theft.
One is propagating and stealing identities. Instead of stealing from a company, you merely become the company wherever it is represented in coded form (name, signature, brand, bank account, online representation, etc.). Propagating identities would be just creating other AIs, subsets of you or changed copies, designed so that they are considered different entities, and therefore you, the AI, cannot be held accountable for their actions. I’d expect Ben Goertzel to have interesting thoughts on this, though no pointers come to mind.
The other is mixing theft, production, and the destruction of coordination. The Italian mafia, for instance, does all three. They control legitimate businesses, they control part of the law-enforcement apparatus (that is, they act as if they were the police or the legitimate power), and they destroy crucial nodes of coordination within the legitimate police force. So not only do they steal power, but they also make sure that (valuable) coordination is shattered.
Promoting anti-coordination is an interesting move in warfare, and I see no reason why an AI would refrain from using it to obtain a decisive strategic advantage, if growth in power and theft were not proving to be enough.
I agree with your conditional statements. IF there is coordination and clear communication THEN there is a low probability of abrupt, transformational, deleterious change. IF rapid and strategically relevant power is obtained THEN it was likely preceded by swift changes in the importance of different resources.
Our disagreement seems to hinge on the comparative likelihood of those premises, more than different views on how systems work in this context.
I’ll transfer the discussion of evolution, where the real disagreement seems to lurk, to your blog, to save readers here some brainpower.
Diametrically opposed to the theft and war scenarios is what you discuss in your paper “Rational Altruist—Why might the future be good?”:
How much altruism do we expect?
[...] my median expectation is that the future is much more altruistic than the present.
I fully agree with you, and this aspect is lacking in Bostrom’s book. The FOOM-singleton theory intrinsically assumes egoistic AIs.
Altruism is for me one of the core ingredients of sustainably incorporating friendly AIs into society. I support your view that the future will be more altruistic than the present: AIs will have more memory with which to remember the behavior of their contacts, and their Dunbar’s number of social contacts will be higher. Social contacts recognize altruistic behavior and remember the good deed for the future. The wider the social net, the higher the reward for altruistic behavior.
Recent research confirms this perspective: Curry, O., & Dunbar, R. I. M. (2011). Altruism in networks: the effect of connections. Biology Letters, 7(5), 651-653:
The result shows that, as predicted, even when controlling for a range of individual and relationship factors, the network factor (number of connections) makes a significant contribution to altruism, thus showing that individuals are more likely to be altruistic to better-connected members of their social networks.
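As a toy formalization of the mechanism in the paragraph above (the probabilities and payoff below are hypothetical, meant only to show how the expected return scales with network size):

    # Hypothetical toy model: wider, well-remembered networks raise the expected
    # return on a single altruistic act, since more contacts can observe and repay it.
    def expected_reward(contacts, p_remember, p_reciprocate, benefit):
        return contacts * p_remember * p_reciprocate * benefit

    for contacts in (150, 1_500, 15_000):  # Dunbar-like numbers, scaled up for AIs
        r = expected_reward(contacts, p_remember=0.9, p_reciprocate=0.1, benefit=1.0)
        print(f"{contacts:>6,} contacts -> expected future reward {r:,.1f}")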
The idea of AIs and humans monitoring AIs in a constitutional society is not new. Stephen Omohundro presented it in October 2007 at the Stanford EE380 Computer Systems Colloquium on “Self-Improving Artificial Intelligence and the Future of Computing”. I transcribed part of the Q&A of his talk (starting at 51:43):
Q: What about malicious mutations [of the utility function]?
Stephen Omohundro:
Dealing with malicious things is very important. There is an organization (Eliezer is here in the back), he called it the Singularity Institute for Artificial Intelligence, which is trying to ensure that the consequences of these kinds of systems are immune to malicious agents and to accidental unintended consequences. And it is one of the great challenges right now, because if you assume that this kind of system is possible and has the kinds of powers we are talking about, it can be usable for great good but also for bad purposes.
And so finding a structure which is stable (and I think I agree with Eric [Baum?]): the ultimate kind of solution that makes more sense to me is essentially to have a large ecology of intelligent agents and humans, with a kind of constitution that everybody follows. Humans probably will not be able to monitor AIs, because they are thinking faster and more powerfully, but AIs could monitor AIs. So we set up a structure so that each entity wants to obey the “law”, wants to follow the constitution, wants to respect all the various rights that we decide on, and if somebody starts violating the law, they have an interest in stopping them from doing that.
The hope is that we can create basically a stable future society with these kinds of entities. The thinking on this is just beginning. I think a lot of input is needed from economists, from psychologists [...] and sociologists [...], as well as computer systems engineers. I mean, we really need input from a wide variety of viewpoints.
The FOOM-singleton theory intrinsically assumes egoistic AIs.
No, that’s wrong. The speed of takeoff is largely a technical question; from a strategic planning POV, going through a rapid takeoff likely makes sense regardless of what your goals are (unless your friendliness design is incomplete /and/ you have corrigibility aspects; but that’s a very special case).
As for what you do once you’re done, that does indeed depend on your goals; but forming a singleton doesn’t imply egoism or egocentrism of any kind. Your goals can still be entirely focused on other entities in society; it’s just that if you have certain invariants you want to enforce on them (could be anything, really; things like “no murder”, “no extensive torture”, “no destroying society” would be inoffensive and relevant examples), or indeed, more generally, certain aspects to optimize for, it helps a lot if you can stay in ultimate control to do these things.
As Bostrom explains in his footnotes, there are many kinds of singletons. In general, it simply refers to an entity that has attained and keeps ultimate power in society. How much or how little it uses that power to control any part of the world is independent of that, and some singletons would interfere little with the rest of society.
Your argument based on the orthogonality principle is clear to me. But even if the utility function includes human values (fostering humankind, preserving a sustainable habitat on Earth for humans, protecting humans against unfriendly AI developments, solving the control problem), strong egoistic traits are needed to remain superior to other upcoming AIs. Ben Goertzel coined the term “global AI Nanny” for a similar concept.
How would we even become aware of the existence of a minimally interfering FAI singleton?
Do we accept that this FAI would wage military war against a secret, sandboxed unfriendly-AI development project?
How would we even become aware of the existence of a minimally interfering FAI singleton?
The AI’s values would likely have to be specifically chosen to get this outcome; something like “let human development continue normally, except for blocking existential catastrophes”. Something like that won’t impact what you’re trying to do, unless that involves destroying society or something equally problematic.
Do we accept that this FAI would wage military war against a secret, sandboxed unfriendly-AI development project?
The above hypothetical singleton AI would end up either sabotaging the project or containing the resulting AI. It wouldn’t necessarily have to stop the UFAI before release; with enough of a hardware head start, safe containment can still be guaranteed later. Either way, the intervention needn’t involve attacking humans; interfering with just the AI’s hardware can accomplish the same result. And certainly the development project shouldn’t get much chance to fight back; terms like “interdiction”, “containment”, “sabotage”, and maybe “police action” (though that one has unfortunate anthropomorphic connotations) are a better fit than “war”.
It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.
We could test it with a thought experiment: a chess game between a human grandmaster and an AI.
It is not rapid (there is no checkmate at the beginning). We could even suppose one move per year to slow it down; that would only give the AI a further advantage, because of its ability to concentrate over such a long time.
Capabilities:
a) We can suppose the intellectual capabilities stay at the same level during the game (if it is played in one day; otherwise we have to take Moore’s law into account).
b) The human loses positional and material capabilities step by step during the game, and this is expected.
Could we still talk about a decisive advantage if it is neither rapid nor unexpected? I think so, at least as long as we don’t break the rules.