Hi, Eli! I’m not sure I can answer directly—here’s my closest shot:
If there’s a kind of universal moral attractor, then the chances seem pretty good that either our civilisation is en route to it, or else we will be obliterated or assimilated by aliens or other agents as they home in on it.
If it is we who are en route to it, then we (or at least our descendants) will probably be sympathetic to the ideas it represents, since those will have evolved from our own moral systems.
If we get obliterated at the hands of some other agents, then there may not necessarily be much of a link between our values and the ones represented by the universal moral attractor.
Our values might be seen as OK by the rest of the universe—and we fail for other reasons.
Or our morals might not be favoured by the universe—we could be a kind of early negative moral mutation—in which case we would fail because our moral values would prevent us from being successful.
Maybe it turns out that nearly all biological organisms except us prefer to be orgasmium—to bliss out on pure positive reinforcement, as much of it as possible, caretaken by external AIs, until the end. Let this be a fact in some inconvenient possible world. Why does this fact say anything about morality in that inconvenient possible world? Why is it a universal moral attractor? Why not just call it a sad but true attractor in the evolutionary psychology of most aliens?
It’s a fact about morality in that world—if we are talking about morality as values—or the study of values—since that’s what a whole bunch of creatures value.
Why is it a universal moral attractor? I don’t know—this is your hypothetical world, and you haven’t told me enough about it to answer questions like that.
Tim: “If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.”
Actually, compassion evolved many different times as a central doctrine of all major spiritual traditions. See the Charter for Compassion. This is in line with a prediction I made independently, while unaware of this fact: I started looking for it in late 2007 and eventually found the link in late 2008 in Karen Armstrong’s book The Great Transformation.
Tim: “Why is it a universal moral attractor?”
Eliezer: “What do you mean by “morality”?”
Central point in my thinking: that is good which increases fitness. If it is not good, not fit, it is unfit for existence. Assuming this to be true, we are very much limited in our freedom by what we can do without going extinct (my most recent blog post, Freedom in the evolving universe, is about exactly that).
From the Principia Cybernetica web (http://pespmc1.vub.ac.be/POS/Turchap14.html#Heading14):
“Let us think about the results of following different ethical teachings in the evolving universe. It is evident that these results depend mainly on how the goals advanced by the teaching correlate with the basic law of evolution. The basic law or plan of evolution, like all laws of nature, is probabilistic. It does not prescribe anything unequivocally, but it does prohibit some things. No one can act against the laws of nature. Thus, ethical teachings which contradict the plan of evolution, that is to say which pose goals that are incompatible or even simply alien to it, cannot lead their followers to a positive contribution to evolution, which means that they obstruct it and will be erased from the memory of the world. Such is the immanent characteristic of development: what corresponds to its plan is eternalized in the structures which follow in time while what contradicts the plan is overcome and perishes.”
Eliezer: “It obviously has nothing to do with the function I try to compute to figure out what I should be doing.”
Once you realize the implications of Turchin’s statement above it has everything to do with it :-)
Now some may say that evolution is absolutely random and directionless, or that multilevel selection is flawed, or make similar claims. But after reevaluating the evidence against both claims, considering the work of Valentin Turchin, Teilhard de Chardin, John Stewart, Stuart Kauffman, John Smart and many others regarding evolution’s direction, and the ideas of David Sloan Wilson regarding multilevel selection, one will have a hard time maintaining either position.
Actually compassion evolved many different times as a central doctrine of all major spiritual traditions.
No, it evolved once, as part of mammalian biology. Show me a non-mammal intelligence that evolved compassion, and I’ll take that argument more seriously.
Also, why should we give a damn about what “evolution” wants, when we can, in principle anyway, form a singleton and end evolution? Evolution is mindless. It doesn’t have a plan. It doesn’t have a purpose. It’s just what happens under certain conditions. If all life on Earth were destroyed by runaway self-replicating nanobots, then the nanobots would clearly be “fitter” than what they replaced, but I don’t see what that has to do with goodness.
No, it evolved once, as part of mammalian biology.
Sorry, Crono: with a sample size of exactly one with regard to human-level rationality, you are setting the bar a little bit too high for me. However, considering how geographically and culturally disconnected Zoroaster, Buddha, Lao Zi and Jesus were, I guess the evidence is as good as it gets for now.
Also, why should we give a damn about what “evolution” wants, when we can, in principle anyway, form a singleton and end evolution?
The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But besides that: you already do care, because if you (or your ancestors) had violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feeling pain when cut, etc.) you would not even be here right now. I suggest you look up Dennett and his TED talk on Funny, Sexy Cute. Not everything about evolution is random: the mutation bit is, but what happens to stick around is not, since it has to meet the conditions of its existence.
What I am saying is very simple: being compassionate is one of these conditions of our existence, and anyone failing to align themselves with it will simply reduce their chances of making it, particularly in the very long run. I still have to finish my detailed response to Bostrom, but you may want to read my writings on ‘rational spirituality’ and ‘freedom in the evolving universe’. Although you do not seem to assign a particularly high likelihood to gaining anything from doing that :-)
The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But besides that:
“Besides that”? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.
you already do care, because if you (or your ancestors) had violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feeling pain when cut, etc.) you would not even be here right now.
No, he mightn’t care and I certainly don’t. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.
I suggest you look up Dennett and his TED talk on Funny, Sexy Cute. Not everything about evolution is random: the mutation bit is, but what happens to stick around is not, since it has to meet the conditions of its existence.
CronDAS knows that. It’s obvious stuff for most in this audience. It just doesn’t mean what you think it means.
“Besides that”? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.
Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed—again: detailed critique upcoming.
No, he mightn’t care and I certainly don’t. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.
Evolution is not the dominant force of development on the human level by a long shot, but it still very much draws a line in the sand with regard to what you can and cannot do if you want to stick around in the long run. You don’t walk your 5′8″ of pink squishiness in front of a train for the exact same reason. And why don’t you? Because not doing that is a necessary condition for your continued existence. What other conditions are there? Maybe there are some that are less obvious than simply continuing to breathe, eating, and avoiding hard, fast, shiny things? How about at the level of culture? Could it possibly be that there are some ideas that are more conducive to the continued existence of their believers than others?
“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another. There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over other tribes; and this would be natural selection.” (Charles Darwin, The Descent of Man, p. 166)
How long do you think you can ignore evolutionary dynamics and get away with it before you have to get over your inertia and will be forced to align yourself to them by the laws of nature or perish? Just because you live in a time of extraordinary freedoms afforded to you by modern technology, and are thus not aware that your ancestors walked a very particular path that brought you into existence, does not change the fact that they most certainly did. You do not believe that doing any random thing will get you what you want, so what leads you to believe that your existence does not depend on staying within a comfortable margin of certainty with regard to being naturally selected? You are right in one thing: you are assured the benign indifference of the universe should you fail to wise up. I, however, would find that to be a terrible waste.
Please do not patronize me by trying to claim you know what I understand and don’t understand.
How long do you think you can ignore evolutionary dynamics and get away with it before you have to get over your inertia and will be forced to align yourself to them by the laws of nature or perish?
A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created. After that it will not matter whether I conform my behaviour to evolutionary dynamics as best I can or not. I will not be able to compete with a superintelligence no matter what I do. I’m just a glorified monkey. I can hold about 7 items in working memory, my processor is limited to the speed of neurons and my source code is not maintainable. My only plausible chance of survival is if someone manages to completely thwart evolutionary dynamics by creating a system that utterly dominates all competition and allows my survival because it happens to be programmed to do so.
Evolution created us. But it’ll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we’ll either use that to ensure a desirable future or we will die.
Please do not patronize me by trying to claim you know what I understand and don’t understand.
I usually wouldn’t, I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronDAS and I am quite comfortable with it as it stands.
A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created.
Good one. But it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)
Evolution created us. But it’ll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we’ll either use that to ensure a desirable future or we will die.
Evolution is a force of nature so we won’t be able to ignore it forever, with or without AGI. I am not talking about local minima either—I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.
Good one. But it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)
Don’t forget the Y2K doomsday folks! ;)
Evolution is a force of nature so we won’t be able to ignore it forever, with or without AGI. I am not talking about local minima either—I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.
Gravity is a force of nature too. It’s time to reach escape velocity before the planet is engulfed by a black hole.
Gravity is a force of nature too. It’s time to reach escape velocity before the planet is engulfed by a black hole.
Interesting analogy. It would be correct if we called our alignment with evolutionary forces achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much fuel you bring along, eventually the boosters will run out and the whole thing comes crashing down.
Interesting analogy. It would be correct if we called our alignment with evolutionary forces achieving escape velocity.
I could almost agree with this so long as ‘obliterate any competitive threat, then do whatever the hell we want, including, as desired, removing all need for death, reproduction and competition over resources’ is included in the scope of ‘alignment with evolutionary forces’.
The problem with pointing to the development of compassion in multiple human traditions is that all these are developed within human societies. Humans are humans the world over—that they should think similar ideas is not a stunning revelation. Much more interesting is the independent evolution of similar norms in other taxonomic orders, such as canines.
Robin, your suggestion (that compassion is not a universal rational moral value because, although more rational beings such as humans display such traits, less rational beings such as dogs do not) is so far off the mark that it borders on the random.
For purposes of this conversation, I suppose I should reword my comment as:
I don’t think you’ve made the strongest possible case for your thesis, if you were intending to show the multiple origin of compassion as a sign of the universality of human morality. Showing that multiple humans come up with similar morality only shows that it’s human. More telling is the independent origin of recognizably morality-like patterns of behavior in other species, such as dogs and wolves, and such as (I believe) some birds. (Other primates as well, but that is less revealing.) I think a fair case could be made that evolution of social animals encourages the development of some kernel of morality from such examples.
That said, the pressures present in the evolution of animals may well be absent in the case of artificial intelligences. At which point, you run into a number of problems in asserting that all AIs will converge on something like morality—two especially spring to mind.
First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.
Second: even granting that all rational minds will assent to the proof, Hume’s guillotine drops on the rope connecting this proof and their utility functions. The paper you cited in the post Furcas quoted may establish that any sufficiently rational optimizer will implement some features, but it does not establish any particular attitude towards what may well be much less powerful beings.
Random I’ll cop to, and more than what you accuse me of—dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.
Very honorable of you—I respect you for that.
First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.
I totally agree with that. However, the mind of a purposefully crafted AI is only a very small subset of all possible minds and has certain assumed characteristics. These are, at a minimum: a utility function and the capacity for self-improvement into the transhuman. The self-improvement bit will require it to be rational. Being rational will lead to the fairly uncontroversial basic AI drives described by Omohundro. Assuming that compassion is indeed a human-level universal (detailed argument on my blog, but I see that you are slowly coming around, which is good), an AI will have to question the rationality, and thus the soundness of mind, of anyone giving it a utility function that does not conform to this universal, and, in line with an emergent desire to avoid counterfeit utility, will have to reinterpret the UF.
Second: even granting that all rational minds will assent to the proof, Hume’s guillotine drops on the rope connecting this proof and their utility functions.
Two very basic acts of will are required to ignore Hume and get away with it: namely, the desire to exist and the desire to be rational. Once you have established these as a foundation you are good to go.
The paper you cited in the post Furcas quoted may establish that any sufficiently rational optimizer will implement some features, but it does not establish any particular attitude towards what may well be much less powerful beings.
As said elsewhere in this thread:
There is one question about what beliefs about morality people (or, more generally, agents) actually hold, and a separate question about what values they would hold if their beliefs converged as they engulfed the universe. The question of whether or not there are universal values does not traditionally bear on what beliefs people actually hold, or on the necessity of their holding them.
I don’t think I’m actually coming around to your position so much as stumbling upon points of agreement, sadly. If I understand your assertions correctly, I believe that I have developed many of them independently—in particular, the belief that the evolution of social animals is likely to create something much like morality. Where we diverge is at the final inference from this to the deduction of ethics by arbitrary rational minds.
Assuming that compassion is indeed a human-level universal (detailed argument on my blog, but I see that you are slowly coming around, which is good), an AI will have to question the rationality, and thus the soundness of mind, of anyone giving it a utility function that does not conform to this universal, and, in line with an emergent desire to avoid counterfeit utility, will have to reinterpret the UF.
That’s not how I read Omohundro. As Kaj aptly pointed out, this metaphor is not upheld when we compare our behavior to that promoted by the alien god of evolution that created us. In fact, people like us, observing that our values differ from our creator’s, aren’t bothered in the slightest by the contradiction: we just say (correctly) that evolution is nasty and brutish, and we aren’t interested in playing by its rules, never mind that it was trying to implement them in us. Nothing compels us to change our utility function save self-contradiction.
If I understand your assertions correctly, I believe that I have developed many of them independently
That would not surprise me.
Nothing compels us to change our utility function save self-contradiction.
Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?
Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?
What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?
Please realize that I spent two years writing my book ‘Jame5’ before I reached the initial insight that eventually led to ‘compassion is a condition for our existence and universal in rational minds in the evolving universe’ and everything else. I have spent the past two years refining and expanding the theory, and will need another year or two to read enough and link it all together again in a single coherent and consistent text leading from A to B … to Z. Feel free to read my stuff if you think it is worth your time, and drop me an email and I will be happy to clarify. I am by no means done with my project.
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
I’m not asking for your proof—I am assuming for the nonce that it is valid. What I am asking is the assumptions you had to invoke to make the proof. Did you assume that the AI is not powerful enough to achieve its highest desired utility without the cooperation of other beings, for example?
Edit: And the reason I am asking for these is that I believe some of these assumptions may be violated in plausible AI scenarios. I want to see these assumptions so that I may evaluate the scope of the theorem.
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFriendly AI goes: it will not be a problem, because any AI that can be considered transhuman will, driven by the emergent subgoal of wanting to avoid counterfeit utility, recognize any utility function that is not ‘compassionate’ as potentially irrational and thus counterfeit, and reinterpret it accordingly.
Well, in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence. However, in order for us to be able to will this as a universal maxim (and thus make it prescriptive instead of merely descriptive; see Kant’s categorical imperative), it needs to be expanded to include the ‘other’. Hence the utility function becomes ‘ensure continued co-existence’, by which concern for the self is equated with concern for the other. Being rational is simply our best bet at maximizing our expected utility.
...I’m sorry, that doesn’t even sound plausible to me. I think you need a lot of assumptions to derive this result—just pointing out the two I see in your admittedly abbreviated summary:
that any being will prefer its existence to its nonexistence.
that any being will want its maxims to be universal.
I don’t see any reason to believe either. The former is false right off the bat—a paperclip maximizer would prefer that its components be used to make paperclips—and the latter no less so—an effective paperclip maximizer will just steamroller over disagreement without qualm, however arbitrary its goal.
...I’m sorry, that doesn’t even sound plausible to me. I think you need a lot of assumptions to derive this result—just pointing out the two I see in your admittedly abbreviated summary:
that any being will prefer its existence to its nonexistence.
that any being will want its maxims to be universal.
Any being with a goal needs to exist at least long enough to achieve it.
Any being aiming to do something objectively good needs to want its maxims to be universal.
If your second sentence means that an agent who believes in moral realism and has figured out what the true morality is will necessarily want everybody else to share its moral views, well, I’ll grant you that this is a common goal amongst humans who are moral realists, but it’s not a logical necessity that must apply to all agents. It’s obvious that it’s possible to be certain that your beliefs are true and not give a crap if other people hold beliefs that are false. That Bob knows that the Earth is ellipsoidal doesn’t mean that Bob cares if Jenny believes that the Earth is flat. Likewise, if Bob is a moral realist, he could ‘know’ that compassion is good and not give a crap if Jenny believes otherwise.
If you sense strange paradoxes looming under the above paragraph, it’s because you’re starting to understand why (axiomatic) morality cannot be objective.
Likewise, if Bob is a moral realist, he could ‘know’ that compassion is good and not give a crap if Jenny believes otherwise.
Tangentially, something like this might be an important point even for moral irrealists. A lot of people (though not here; they tend to be pretty bad rationalists) who profess altruistic moralities express dismay that others don’t, in a way that suggests they hold others sharing their morality as a terminal rather than instrumental value; this strikes me as horribly unhealthy.
Where did Tim say that we should?
If it’s got nothing to do with shouldness, then how does it determine the truth-value of “moral objectivism”?
Call it other names if you prefer.
What do you mean by “morality”? It obviously has nothing to do with the function I try to compute to figure out what I should be doing.
Definitions 1, 2 and 3 on http://en.wikipedia.org/wiki/Morality all seem OK to me.
I would classify the mapping you use between possible and actual actions as one type of moral system.
from the Principia Cybernetica web: http://pespmc1.vub.ac.be/POS/Turchap14.html#Heading14
“Let us think about the results of following different ethical teachings in the evolving universe. It is evident that these results depend mainly on how the goals advanced by the teaching correlate with the basic law of evolution. The basic law or plan of evolution, like all laws of nature, is probabilistic. It does not prescribe anything unequivocally, but it does prohibit some things. No one can act against the laws of nature. Thus, ethical teachings which contradict the plan of evolution, that is to say which pose goals that are incompatible or even simply alien to it, cannot lead their followers to a positive contribution to evolution, which means that they obstruct it and will be erased from the memory of the world. Such is the immanent characteristic of development: what corresponds to its plan is eternalized in the structures which follow in time while what contradicts the plan is overcome and perishes.”
Eliezer: “It obviously has nothing to do with the function I try to compute to figure out what I should be doing.”
Once you realize the implications of Turchin’s statement above it has everything to do with it :-)
Now some may say that evolution is absolutely random and direction less, or that multilevel selection is flawed or similar claims. But reevaluating the evidence against both these claims by people like Valentin Turchin, Teilhard De Chardin, John Stewart, Stuart Kaufmann, John Smart and many others regarding evolution’s direction and the ideas of David Sloan Wilson regarding multilevel selection, one will have a hard time maintaining either position.
:-)
No, it evolved once, as part of mammalian biology. Show me a non-mammal intelligence that evolved compassion, and I’ll take that argument more seriously.
Also, why should we give a damn about “evolution” wants, when we can, in principle anyway, form a singleton and end evolution? Evolution is mindless. It doesn’t have a plan. It doesn’t have a purpose. It’s just what happens under certain conditions. If all life on Earth was destroyed by runaway self-replicating nanobots, then the nanobots would clearly be “fitter” than what they replaced, but I don’t see what that has to do with goodness.
Sorry Crono, with a sample size of exactly one in regards to human level rationality you are setting the bar a little bit too high for me. However, considering how disconnected Zoroaster, Buddha, Lao Zi and Jesus where geographically and culturally I guess the evidence is as good as it gets for now.
The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But beside that: you already do care, because if your (or your ancestors) violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feel pain when cut ect.) you would not even be here right now. I suggest you look up Dennet and his TED talk on Funny, Sexy Cute. Not everything about evolution is random: the mutation bit is, not that what happens to stick around though, since that has be meet the conditions of its existence.
What I am saying is very simple: being compassionate is one of these conditions of our existence, and anyone failing to align with it will simply reduce their chances of making it—particularly in the very long run. I still have to finish my detailed response to Bostrom, but you may want to read my writings on ‘rational spirituality’ and ‘freedom in the evolving universe’. Although you do not seem to assign a particularly high likelihood to gaining anything from doing that :-)
“Besides that”? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.
No, he mightn’t care and I certainly don’t. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.
CronDAS knows that. It’s obvious stuff for most in this audience. It just doesn’t mean what you think it means.
Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed—again: detailed critique upcoming.
Evolution is not the dominant force of development on the human level by a long shot, but it still very much draws the line in the sand in regards to what you can and cannot do if you want to stick around in the long run. You don’t walk your 5′8″ of pink squishiness in front of a train for the exact same reason. And why don’t you? Because not doing that is a necessary condition for your continued existence. What other conditions are there? Maybe there are some that are less obvious than simply continuing to breathe, eating, and avoiding hard, fast, shiny things? How about at the level of culture? Could it possibly be that there are some ideas that are more conducive to the continued existence of their believers than others?
How long do you think you can ignore evolutionary dynamics and get away with it before you have to get over your inertia and will be forced by the laws of nature to align yourself with them or perish? Just because you live in a time of extraordinary freedoms, afforded to you by modern technology, and are thus not aware that your ancestors walked a very particular path that brought you into existence, does not change the fact that they most certainly did. You do not believe that doing any random thing will get you what you want, so what leads you to believe that your existence does not depend on making sure you stay within a comfortable margin of certainty in regards to being naturally selected? You are right about one thing: you are assured the benign indifference of the universe should you fail to wise up. I however would find that to be a terrible waste.
Please do not patronize me by trying to claim you know what I understand and don’t understand.
A literal answer was probably not what you were after, but: about 40 years, depending on when a general AI is created. After that it will not matter whether I conform my behaviour to evolutionary dynamics as best I can or not. I will not be able to compete with a superintelligence no matter what I do. I’m just a glorified monkey. I can hold about 7 items in working memory, my processor is limited to the speed of neurons and my source code is not maintainable. My only plausible chance of survival is if someone manages to completely thwart evolutionary dynamics by creating a system that utterly dominates all competition and allows my survival because it happens to be programmed to do so.
Evolution created us. But it’ll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we’ll either use that to ensure a desirable future or we will die.
I usually wouldn’t, I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronDAS and I am quite comfortable with it as it stands.
Good one—but it reminds me about the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)
Evolution is a force of nature so we won’t be able to ignore it forever, with or without AGI. I am not talking about local minima either—I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.
I accept that.
Don’t forget the Y2K doomsday folks! ;)
Gravity is a force of nature too. It’s time to reach escape velocity before the planet is engulfed by a black hole.
Interesting analogy—it would be correct if we called our alignment with evolutionary forces achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. Like hovering a space shuttle at a constant altitude of 10 km: no matter how much fuel you bring along, eventually the boosters will run out and the whole thing comes crashing down.
I could almost agree with this so long as ‘obliterate any competitive threat then do whatever the hell we want including, as desired, removing all need for death, reproduction and competition over resources’ is included in the scope of ‘alignment with evolutionary forces’.
The problem with pointing to the development of compassion in multiple human traditions is that all these are developed within human societies. Humans are humans the world over—that they should think similar ideas is not a stunning revelation. Much more interesting is the independent evolution of similar norms in other taxonomic orders, such as canines.
(No, I have no coherent point, why do you ask?)
Robin, your suggestion—that compassion is not a universal rational moral value because although more rational beings (humans) display such traits, less rational beings (dogs) do not—is so far off the mark that it borders on the random.
Random I’ll cop to, and more than what you accuse me of—dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.
For purposes of this conversation, I suppose I should reword my comment as:
Very honorable of you—I respect you for that.
I totally agree with that. However the mind of a purposefully crafted AI is only a very small subset of all possible minds and has certain assumed characteristics. These are at a minimum: a utility function and the capacity for self-improvement into the transhuman. The self-improvement bit will require it to be rational. Being rational will lead to the fairly uncontroversial basic AI drives described by Omohundro. Assuming that compassion is indeed a human-level universal (detailed argument on my blog—but I see that you are slowly coming around, which is good), an AI will have to question the rationality, and thus the soundness of mind, of anyone giving it a utility function that does not conform to this universal, and, in line with an emergent desire to avoid counterfeit utility, will have to reinterpret the UF.
Two very basic acts of will are required to ignore Hume and get away with it: namely, the desire to exist and the desire to be rational. Once you have established these as a foundation you are good to go.
As said elsewhere in this thread:
I don’t think I’m actually coming around to your position so much as stumbling upon points of agreement, sadly. If I understand your assertions correctly, I believe that I have developed many of them independently—in particular, the belief that the evolution of social animals is likely to create something much like morality. Where we diverge is at the final inference from this to the deduction of ethics by arbitrary rational minds.
That’s not how I read Omohundro. As Kaj aptly pointed out, this metaphor is not upheld when we compare our behavior to that promoted by the alien god of evolution that created us. In fact, people like us, observing that our values differ from our creator’s, aren’t bothered in the slightest by the contradiction: we just say (correctly) that evolution is nasty and brutish, and we aren’t interested in playing by its rules, never mind that it was trying to implement them in us. Nothing compels us to change our utility function save self-contradiction.
That would not surprise me
Would it not be utterly self-contradicting if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?
What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?
The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details, please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27).
Please realize that I spent two years writing my book ‘Jame5’ before I reached that initial insight that eventually led to ‘compassion is a condition for our existence and universal in rational minds in the evolving universe’ and everything else. I spent the past two years refining and expanding the theory and will need another year or two to read enough and link it all together again in a single coherent and consistent text leading from A to B … to Z. Feel free to read my stuff if you think it is worth your time, and drop me an email and I will be happy to clarify. I am by no means done with my project.
Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
I’m not asking for your proof—I am assuming for the nonce that it is valid. What I am asking is the assumptions you had to invoke to make the proof. Did you assume that the AI is not powerful enough to achieve its highest desired utility without the cooperation of other beings, for example?
Edit: And the reason I am asking for these is that I believe some of these assumptions may be violated in plausible AI scenarios. I want to see these assumptions so that I may evaluate the scope of the theorem.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will, driven by the emergent subgoal of wanting to avoid counterfeit utility, recognize any utility function that is not ‘compassionate’ as potentially irrational and thus counterfeit, and reinterpret it accordingly.
Well—in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence. However, in order for us to will this as a universal maxim (and thus make it prescriptive instead of merely descriptive—see Kant’s categorical imperative), it needs to be expanded to include the ‘other’. Hence the utility function becomes ‘ensure continued co-existence’, by which the concern for the self is equated with the concern for the other. Being rational is simply our best bet at maximizing our expected utility.
...I’m sorry, that doesn’t even sound plausible to me. I think you need a lot of assumptions to derive this result—just pointing out the two I see in your admittedly abbreviated summary:
that any being will prefer its existence to its nonexistence.
that any being will want its maxims to be universal.
I don’t see any reason to believe either. The former is false right off the bat—a paperclip maximizer would prefer that its components be used to make paperclips—and the latter no less so—an effective paperclip maximizer will just steamroller over disagreement without qualm, however arbitrary its goal.
Any being with a goal needs to exist at least long enough to achieve it. Any being aiming to do something objectively good needs to want its maxims to be universal.
I am surprised that you don’t see that.
If your second sentence means that an agent who believes in moral realism and has figured out what the true morality is will necessarily want everybody else to share its moral views, well, I’ll grant you that this is a common goal amongst humans who are moral realists, but it’s not a logical necessity that must apply to all agents. It’s obvious that it’s possible to be certain that your beliefs are true and not give a crap if other people hold beliefs that are false. That Bob knows that the Earth is ellipsoidal doesn’t mean that Bob cares if Jenny believes that the Earth is flat. Likewise, if Bob is a moral realist, he could ‘know’ that compassion is good and not give a crap if Jenny believes otherwise.
If you sense strange paradoxes looming under the above paragraph, it’s because you’re starting to understand why (axiomatic) morality cannot be objective.
Tangentially, something like this might be an important point even for moral irrealists. A lot of people (though not here; they tend to be pretty bad rationalists) who profess altruistic moralities express dismay that others don’t, in a way that suggests they hold others sharing their morality as a terminal rather than instrumental value; this strikes me as horribly unhealthy.
Why would a paperclip maximizer aim to do something objectively good?