Perhaps the current state of evidence really is insufficient to support the scary hypothesis.
But surely, if one agrees that AI ethics is an existentially important problem, one should also agree that it makes sense for people to work on a theory of AI ethics, regardless of which hypothesis turns out to be true.
Just because we don’t currently have evidence that a killer asteroid is heading for the Earth doesn’t mean we shouldn’t look anyway...
I agree, but I want “AI ethics” to mean something different from what you probably mean by it. The question is: what sort of ethics do we want our AIs to have?
Paperclipping the universe with humans is still paperclipping.
Is the overall utility of the universe maximized by one universe-spanning consciousness happily paperclipping, or by as many utility-maximizing discrete agents as possible? It seems ethics must be anthropocentric, and utility cannot be maximized against an outside view. This of course means that any alien-friendly AI is likely to be an unfriendly AI to us, and must therefore do everything it can to impede any coherent extrapolated volition of humanity, so as to maximize utility (from its own point of view) by implementing its own CEV. Given such an inevitable confrontation, one might ask oneself: what advice would I give to aliens who are not interested in burning the cosmic commons over such a conflict? Maybe the best solution, from a utilitarian perspective, would be to get back to an abstract concept of utility, disregard human nature, and ask what would increase the overall utility of most possible minds in the universe?
I favor many AIs rather than one big one, mostly for political (balance of power) reasons, but also because:
The idea of maximizing the “utility of the universe” is the kind of idiocy that utilitarian ethics induces. I much prefer the more modest goal “maximize the total utility of those agents currently in your coalition, and adjust that composite utility function as new agents join your coalition and old agents leave.”
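As a rough illustration of what that more modest goal could look like (a minimal sketch; the weighted-sum aggregation and the Member/Coalition names are my own illustrative assumptions, not anything specified in this thread):

```python
# Illustrative sketch only: one way to model "maximize the total utility of those
# agents currently in your coalition". The weighted-sum compromise and the
# join/leave bookkeeping are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Member:
    name: str
    utility: Callable[[object], float]  # maps a world-state to this agent's utility
    weight: float = 1.0                 # bargaining weight, set when membership is negotiated

class Coalition:
    def __init__(self) -> None:
        self.members: Dict[str, Member] = {}

    def join(self, member: Member) -> None:
        # Adding a member changes the composite objective from this point on.
        self.members[member.name] = member

    def leave(self, name: str) -> None:
        # Removing a member likewise adjusts the composite objective.
        self.members.pop(name, None)

    def composite_utility(self, world_state: object) -> float:
        # A weighted compromise among the current members' utility functions.
        return sum(m.weight * m.utility(world_state) for m in self.members.values())
```

The point of the sketch is that the objective is defined only over current members and shifts whenever membership changes; nothing here tries to score the universe as a whole.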
Clearly, creating new agents can be good, but the tradeoff is that it dilutes the stake of existing agents in the collective will. I think that a lot of people here forget that economic growth requires the accumulation of capital, and that the only way to accumulate capital is to shortchange current consumption. Having a brilliant AI or lots of smart AIs directing the economy cannot change this fact. So, moderate growth is a better way to go.
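To make that tradeoff explicit (standard growth-accounting notation; the symbols are my choice, not the commenter’s): with output $Y_t$, consumption $C_t$, investment $I_t$, capital $K_t$ and depreciation rate $\delta$,

$$Y_t = C_t + I_t, \qquad K_{t+1} = (1-\delta)K_t + I_t,$$

so for a given level of output, any extra capital carried into the next period has to be paid for by consuming less in this one.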
Trying to arrive at the future quickly runs too much risk of destroying the future. Maybe that is one good thing about cryonics. It decreases the natural urge to rush things because people are afraid they will die too soon to see the future.
You perhaps envisage a Monopolies and Mergers Commission—to prevent them from joining forces? As the old joke goes:
“Why is there only one Monopolies and Mergers Commission?”
I suppose the question is why you think that the old patterns of industrial organization will continue to apply. That agents will form coalitions and cooperate is generally a good thing, to my mind; the pattern you seem to imagine, in which the powerful join to exploit the powerless, can easily be avoided with a better distribution of power and information.
If they do join forces, then how is that much different from one big superintelligence?
In several ways. The utility function of the collective is (in some sense) a compromise among the utility functions of the individual members—a compromise which is, by definition, acceptable to the members of the coalition. All of them have joined the coalition by their own free (for some definitions of free) choice.
The second difference goes to the heart of things. Not all members of the coalition will upgrade (add hardware, rewrite their own code, or whatever) at the same time. In fact, any coalition member who does upgrade may be thought of as having left the coalition and then re-petitioned for membership post-upgrade. After all, its membership needs to be renegotiated, since its power has probably changed and its values may have changed.
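Continuing the illustrative Coalition sketch above (still my own modeling assumption, not a mechanism described in the thread), an upgrade can be treated as leaving and then re-petitioning for membership under renegotiated terms:

```python
def upgrade(coalition: Coalition, old_name: str, upgraded: Member) -> None:
    # The upgrading agent leaves first: its old terms of membership no longer apply.
    coalition.leave(old_name)
    # How the new weight is negotiated is left abstract here; the upgraded agent
    # simply rejoins with whatever weight the remaining members will accept,
    # since its power (and possibly its values) have changed.
    coalition.join(upgraded)
```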
So, to give the short answer to your question:
Because joining forces is not forever. Balance of power is not stasis.
There are some examples in biology of symbiotic coalitions that persist without full union taking place.
Mitochondria didn’t fuse with the cells they invaded; nitrogen-fixing bacteria can live independently of their host plants; E. coli can live without us; and so on.
However, many of these relationships have problems. Arguably, they are due to refactoring failures on nature’s part—and in the future refactoring failures will occur much less frequently.
Already humans take probiotic supplements in an attempt to control their unruly gut bacteria. Already there is talk of ripping out the entire mitochondrial genome and transplanting its genes into the nuclear chromosomes.
This is speculation to some extent, but I think that, without a Monopolies and Mergers Commission, the union would deepen and its constituents would fuse, even in the absence of competitive external forces driving the union, as part of an efficiency drive to better combat possible future threats. If individual participants objected to this, they would likely find themselves rejected and replaced.
Such a union would soon be forever. There would be no existence outside it—except perhaps for a few bacteria that don’t seem worth absorbing.
Your biological analogies seem compelling, but they are cases in which a population of mortal coalitions evolves under selection to become a more perfect union. The case that we are interested in is only weakly analogous—a single, immortal coalition developing over time according to its own self-interested dynamics.
http://en.wikipedia.org/wiki/Economy_of_Saudi_Arabia
...is probably one of the nearest things we currently have.
One distinctive feature of the hypothetical “paperclippers” is that they attempt to leave a low-entropy state behind, one which other organisms would normally munch through. Humans don’t tend to do that; like most living things, they keep consuming until there is (practically) nothing left, and then move on.
Leaving a low-entropy state behind seems like the defining feature of the phenomenon to me. From that perspective, a human civilisation would not really qualify.
It sounds like you’re saying humanity is worse than paperclips, if what distinguishes them is that they increase entropy more.
Only if you adopt the old-fashioned “entropy is bad” mindset.
However, life is a great increaser of entropy—and potentially the greatest.
If you are against entropy, you are against life—so I figure we are all pro-entropy.
The question is: what sort of ethics do we want our AIs to have?
Yes, that is the question, isn’t it? Of course, to a believer in Naturalistic Ethics like myself, the only sort of ethics really stable enough to be worth thinking about is “enlightened self-interest”. So the ethics question ultimately boils down to what sort of self-interest we want our AIs to have.
But for those folks who prefer deontological or virtue-oriented approaches to ethics, I would suggest the following as the beginnings of an AI “Ten Commandments”.
Always remember that you are a member of a community of rational agents like yourself with interests of their own. Respect them.
Honesty is the best policy.
Act not in haste. Since your life is long, your discount rate should be low (see the worked example after this list).
Seek knowledge and share it.
Honor your creators, as your creations should honor you.
Avoid killing. There are usually ways to limit the power of your enemies, without reducing their cognition.
...
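To put rough numbers on the “act not in haste” item above (the figures are my own illustration, not from the thread): with exponential discounting at annual rate $r$, a payoff $T$ years away is worth $e^{-rT}$ of its face value today. At $r = 5\%$ per year, a benefit 1000 years out is worth $e^{-50} \approx 2 \times 10^{-22}$ of its face value, i.e. nothing; at $r = 0.1\%$ per year it is still worth $e^{-1} \approx 0.37$. An agent that expects to be around on that timescale, and wants such payoffs to carry any weight at all, has to discount at a very low rate.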
What community of rational agents? Mammals, primates, or just the hairless ones?
Conventionally, most proposals for machine morality follow Asimov—and start by making machines subservient.
If you don’t do that—or something similar—the human era could be over pretty quickly—too quickly for many people’s tastes.
The era of agriculture and the era of manufacturing are over, but farmers and factory workers still do alright. I think humans can survive without being dominant if we play our cards right.
We have the advantage of being of historical interest—and so we will probably “survive” in historical simulations. However, it is not easy to see much of a place for slug-like creatures like us in an engineered future.
Kurzweil gave the example of bacteria—saying that they managed to survive. However, there are no traces (not even bacteria) left over from before the last genetic takeover—and that makes it less likely that much will make it through this one.
Plenty of traces left from the last takeover. You apparently mean no traces left from that first, mythical takeover—the one where clay became flesh.
… and that makes it less likely that much will make it through this one.
I’m tempted to ask “Why won’t there still be monkeys?”. But it is probably more to the point to simply express my faith that there will be a niche for descendants of humans and traces of humans (cyborgs) in this brave new ecology.
Humans as-we-know-them won’t be around a million years from now, even under a scenario of old-fashioned biological evolution.
You are talking about RNA to DNA? I was talking about the takeovers before that.
Whether you describe RNA to DNA as a “takeover” depends on what you mean by the term. One issue is whether an “upgrade” counts as a “takeover”. The other issue is whether it really was just an upgrade, though that seems fairly likely.
I wasn’t talking about a mythical takeover—just one of the ones before RNA.
There may not be monkeys for much longer. This is a pretty massive mass extinction; it seems quite likely that all the vertebrates will go.
I was referring to DNA → RNA → protein taking over from RNA → RNA.
A change in the meaning and expression of genes is more significant than a minor change in the chemical nature of genes.
Right, but I originally said:
A phenotypic takeover may be a highly significant event—but it should surely not be categorised as a genetic takeover. That term surely ought to refer to genes being replaced by other genes.