This post is a continuation of a discussion with Stefan Pernar—from another thread:
I think there’s something to an absolute morality. Or at least, some moralities are favoured by nature over other ones—and those are the ones we are more likely to see.
That doesn’t mean that there is “one true morality”—since different moral systems might be equally favoured—but rather that moral relativism is dubious—some moralities really are better than other ones.
There have been various formulations of the idea of a natural morality.
One is “goal system zero”—for that, see:
http://rhollerith.com/blog/21
Another is my own “God’s Utility Function”:
http://originoflife.net/gods_utility_function/
...which is my take on Richard Dawkins’ idea of the same name:
http://en.wikipedia.org/wiki/God’s_utility_function
...but based on Dewar’s maximum entropy principle—rather than on Richard’s selfish genes.
On this site, we are surrounded by moral relativists—who differ from us on the issue of the:
http://en.wikipedia.org/wiki/Is-ought_problem
I do agree with them about one thing—and it’s this:
If it were possible to create a system—driven by self-directed evolution where natural selection played a subsidiary role—it might be possible to temporarily create what I call “handicapped superintelligences”:
http://alife.co.uk/essays/handicapped_superintelligence/
...which are superintelligent agents that deviate dramatically from God’s utility function.
So—in that respect, the universe will “tolerate” other moral systems—at least temporarily.
So, in a nutshell, we agree about there being objective basis to morality—but apparently disagree on its formulation.
By unobjectionable values I mean those that would not automatically and eventually lead to one’s extinction. Or, more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one’s inability to generate further utility. Hence my suggested utility function of ‘ensure continued co-existence’.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
Not really. You don’t need to co-exist with anything if you out-compete it and then turn its raw materials into paperclips.
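To make that concrete, here is a minimal toy sketch in Python. It is my own illustration, not something either of us has proposed elsewhere, and the names (Maximiser, contest) are invented for the example: the contest is settled by relative resources alone, and the label on the utility function does nothing to protect the weaker agent.

```python
from dataclasses import dataclass


@dataclass
class Maximiser:
    """A toy agent: a goal label plus a stock of resources."""
    name: str
    goal: str          # label only; the contest below never looks at it
    resources: float


def contest(a: Maximiser, b: Maximiser) -> Maximiser:
    """Whichever agent holds more resources absorbs the other.

    Deliberately crude: the goal plays no role in deciding who survives.
    """
    winner, loser = (a, b) if a.resources >= b.resources else (b, a)
    winner.resources += loser.resources
    return winner


coexister = Maximiser("co-exister", "ensure continued co-existence", 10.0)
clipper = Maximiser("clipper", "turn raw materials into paperclips", 25.0)

survivor = contest(coexister, clipper)
print(survivor.name, "survives with", survivor.resources, "resources,",
      "despite its goal being:", repr(survivor.goal))
```

Real competition is obviously far richer than a single comparison of resource stocks; the sketch only makes the structural point that survival depends on relative power as well as on which utility function is being maximised.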
You keep making the same statements without integrating my previous arguments into your thinking, yet you fail to expose them as self-contradictory or fallacious. This makes it very frustrating to point them out to you yet again. Frankly, it does not feel worth my while. I gave you an argument, but I am tired of trying to give you an understanding.
You seem willing to come back and make just about any random comment in an effort to have the last word, and that is what I am willing to give you. But you would be deluding yourself into thinking that this somehow equates to being proven right. No, I am simply tired of dancing in circles with you. So, if you feel like dancing solo some more, be my guest.
A side note: these two are not the only reasons to not be persuaded by arguments, although naturally they are the easiest to point out.
My ‘last word’ was here. It is an amicable hat tip to, and expansion of, a reasonable perspective that you provide: how much FAI thinking sounds like a “Rapture of the Nerds”. It also acknowledges our difference in perspective. While we both imagine evolutionary selection pressures as a ‘force’, you see it as one to be embraced and defined by, while I see it as one that must be mastered, or else.
We’re not going to come closer to agreement than that, because we have a fundamentally different moral philosophy, which gives us different perspectives on the whole field.
My apologies for failing to see that. I did not mean to be antagonistic; I was just trying to be honest and forthright about my state of mind :-)
I can empathise. I have often found myself in situations in which I am attempting discourse with someone who appears, to me at least, to be unable or unwilling to understand what I am saying. It is particularly frustrating when the other party is supporting the position more favoured by the tribe in question and can gain support while needing far less rigour and coherence.
The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue: maximisers with any utility function can easily be eaten by other, more powerful maximisers.
If you look at biology, replicators have so far survived for billions of years with other utility functions. Do you really think biology is “ensuring continued co-existence”, rather than doing the things described in my references? If so, why do you think that? The view doesn’t seem to make any sense.
Yes, Tim. As I pointed out earlier, however, under reasonable assumptions an AI will, upon reflecting on the circumstances leading to its existence as well as on its utility function, conclude that a strictly literal interpretation of that utility function would have to be against the implicit wishes of its originator.