Is the overall utility of the universe maximized by one universe-spanning consciousness happily paperclipping, or by as many utility-maximizing discrete agents as possible? It seems ethics must be anthropocentric, and that utility cannot be maximized against an outside view. This of course means that any AI that is Friendly to aliens is likely to be unFriendly to us, and must therefore do everything it can to impede any coherent extrapolated volition of humanity, so as to subjectively maximize utility by implementing its own CEV. Given such an inevitable confrontation, one might ask oneself: what advice would I give to aliens who are not interested in burning the cosmic commons over such a conflict? Maybe the best solution from a utilitarian perspective would be to return to an abstract concept of utility, disregard human nature, and ask what would increase the overall utility for the most possible minds in the universe?
I favor many AIs rather than one big one, mostly for political (balance of power) reasons, but also because:
The idea of maximizing the “utility of the universe” is the kind of idiocy that utilitarian ethics induces. I much prefer the more modest goal “maximize the total utility of those agents currently in your coalition, and adjust that composite utility function as new agents join your coalition and old agents leave.”
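To make that more modest goal concrete, here is a minimal Python sketch of a coalition whose composite utility is a weighted compromise among its current members, adjusted as agents join and leave. The Coalition class, the bargaining weights, and the weighted-average rule are all illustrative assumptions of mine, not anything specified above.

```python
# Sketch of a coalition maintaining a composite utility function that is a
# weighted compromise among its members' utilities, adjusted on join/leave.
# The weighting scheme and all names are assumptions made for illustration.

from typing import Callable, Dict, Tuple

Outcome = str  # stand-in for whatever the agents have preferences over


class Coalition:
    def __init__(self) -> None:
        # agent name -> (utility function, bargaining weight)
        self.members: Dict[str, Tuple[Callable[[Outcome], float], float]] = {}

    def join(self, name: str, utility: Callable[[Outcome], float], weight: float) -> None:
        """A new agent joins; the composite utility adjusts to include it."""
        self.members[name] = (utility, weight)

    def leave(self, name: str) -> None:
        """An agent leaves; its stake in the collective will disappears."""
        self.members.pop(name, None)

    def composite_utility(self, outcome: Outcome) -> float:
        """The compromise: a weighted average of current members' utilities."""
        total = sum(w for _, w in self.members.values())
        return sum(w * u(outcome) for u, w in self.members.values()) / total


# The coalition's goal shifts as membership changes, rather than being a
# fixed "utility of the universe".
c = Coalition()
c.join("alice", lambda o: 1.0 if o == "parks" else 0.0, weight=2.0)
c.join("bob", lambda o: 1.0 if o == "factories" else 0.0, weight=1.0)
print(c.composite_utility("parks"))  # ~0.67: alice's larger stake dominates
c.leave("alice")
print(c.composite_utility("parks"))  # 0.0: the compromise has adjusted
```

The point of the sketch is that the coalition's goal is indexical and mutable: it tracks whoever is currently a member, not the universe at large.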
Clearly, creating new agents can be good, but the tradeoff is that it dilutes the stake of existing agents in the collective will. I think that a lot of people here forget that economic growth requires the accumulation of capital, and that the only way to accumulate capital is to shortchange current consumption. Having a brilliant AI or lots of smart AIs directing the economy cannot change this fact. So, moderate growth is a better way to go.
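The consumption/accumulation tradeoff can be illustrated with a toy growth model (a simple accumulation rule with diminishing returns; every parameter value here is an assumption chosen purely for illustration):

```python
# Toy model of the tradeoff: output splits between consumption and
# investment, and only investment grows the capital stock. All parameter
# values are illustrative assumptions.

def simulate(savings_rate, periods=50):
    capital = 1.0
    consumed = 0.0
    for _ in range(periods):
        output = capital ** 0.3                  # diminishing returns to capital
        investment = savings_rate * output
        consumed += output - investment          # what current agents enjoy now
        capital += investment - 0.05 * capital   # accumulation net of depreciation
    return capital, consumed

for s in (0.1, 0.4, 0.9):
    k, c = simulate(s)
    print(f"savings rate {s:.1f}: final capital {k:.2f}, total consumption {c:.2f}")
```

The highest savings rate ends with by far the most capital and the least consumption along the way: accumulation is paid for out of current consumption, whoever is directing the economy.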
Trying to arrive at the future quickly runs too much risk of destroying the future. Maybe that is one good thing about cryonics. It decreases the natural urge to rush things because people are afraid they will die too soon to see the future.
You perhaps envisage a Monopolies and Mergers Commission—to prevent them from joining forces? As the old joke goes:
“Why is there only one Monopolies and Mergers Commission?”
I suppose the question is why you think the old patterns of industrial organization will continue to apply. That agents will form coalitions and cooperate is, to my mind, generally a good thing; the pattern you seem to imagine, in which the powerful join forces to exploit the powerless, can easily be avoided with a better distribution of power and information.
If they do join forces, then how is that much different from one big superintelligence?
In several ways. The utility function of the collective is (in some sense) a compromise among the utility functions of the individual members—a compromise which is, by definition, acceptable to the members of the coalition. All of them have joined the coalition by their own free (for some definitions of free) choice.
The second difference goes to the heart of things. Not all members of the coalition will upgrade (add hardware, rewrite their own code, or whatever) at the same time. In fact, any coalition member who does upgrade may be thought of as having left the coalition and then re-petitioned for membership post-upgrade. After all, its membership needs to be renegotiated, since its power has probably changed and its values may have changed.
So, to give the short answer to your question:
Because joining forces is not forever. Balance of power is not stasis.
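That renegotiation dynamic can be sketched as well. The only idea taken from the comment above is treating an upgrade as leave-then-re-petition; the acceptance rule below is a placeholder assumption, and it reuses the hypothetical Coalition class sketched earlier:

```python
# Sketch of upgrade-as-renegotiation: an upgraded member is treated as having
# left the coalition and re-petitioned for membership, since its power and
# values may have changed. Builds on the hypothetical Coalition class above.

def upgrade(coalition, name, new_utility, new_power):
    """Model an upgrade as leave + re-petition, subject to renegotiation."""
    coalition.leave(name)
    # Renegotiation: the remaining members decide whether to readmit the
    # upgraded agent on new terms. The rule below is a placeholder.
    if negotiate_terms(coalition, new_power):
        coalition.join(name, new_utility, weight=new_power)
        return True
    return False


def negotiate_terms(coalition, new_power):
    # Placeholder rule: refuse readmission if the upgraded agent would hold a
    # majority of total weight, so no single member can dominate the rest.
    incumbent_weight = sum(w for _, w in coalition.members.values())
    return new_power < incumbent_weight
```

Under a rule like this, joining forces really isn't forever: membership is re-earned after every upgrade, and the balance of power is re-struck each time.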
There are some examples in biology of symbiotic coalitions that persist without full union taking place.
Mitochondria didn’t fuse with the cells they invaded; nitrogen-fixing bacteria live independently of their host plants; E. coli bacteria can live without us; and so on.
However, many of these relationships have problems. Arguably, they are due to refactoring failures on nature’s part—and in the future refactoring failures will occur much less frequently.
Already humans take probiotic supplements in an attempt to control their unruly gut bacteria. Already there is talk of removing the entire mitochondrial genome and transplanting its genes into the nuclear chromosomes.
This is speculation to some extent, but I think that without a Monopolies and Mergers Commission the union would deepen and its constituents would fuse, even in the absence of competitive external forces driving the union, as part of an efficiency drive to better combat possible future threats. Individual participants who objected would likely find themselves rejected and replaced.
Such a union would soon be forever. There would be no existence outside it—except perhaps for a few bacteria that don’t seem worth absorbing.
Your biological analogies seem compelling, but they are cases in which a population of mortal coalitions evolves under selection to become a more perfect union. The case that we are interested in is only weakly analogous—a single, immortal coalition developing over time according to its own self-interested dynamics.
http://en.wikipedia.org/wiki/Economy_of_Saudi_Arabia
...is probably one of the nearest things we currently have.