By unobjectionable values I mean those that would not automatically and eventually lead to one’s extinction. Or, more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that following it will eventually lead to one’s inability to generate further utility. Hence my suggested utility function of ‘ensure continued co-existence’.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
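To make the ‘self-limiting’ point above concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from the thread: the resource/regeneration model, the numbers, and the function name run_maximizer are all assumptions. It compares a maximiser that converts nearly its whole resource base into utility each step with one that takes less than the base regrows.

    def run_maximizer(consume_fraction, regen_rate, steps):
        """Simulate a maximiser that, each step, converts a fraction of a shared
        resource pool into utility; whatever is left regenerates at regen_rate."""
        pool = 100.0      # initial resource base (arbitrary units)
        utility = 0.0
        for _ in range(steps):
            harvested = pool * consume_fraction
            pool -= harvested
            utility += harvested          # utility is generated from converted resources
            pool += pool * regen_rate     # only the remaining base regenerates
        return utility

    if __name__ == "__main__":
        # A maximiser that consumes almost everything each step soon exhausts its base...
        greedy = run_maximizer(consume_fraction=0.9, regen_rate=0.05, steps=200)
        # ...while one that takes less than the regrowth keeps generating utility.
        restrained = run_maximizer(consume_fraction=0.04, regen_rate=0.05, steps=200)
        print(f"greedy total utility:     {greedy:.1f}")      # plateaus around ~100
        print(f"restrained total utility: {restrained:.1f}")  # keeps growing with more steps

Under these assumptions the greedy maximiser’s total utility plateaus once its base is exhausted, while the restrained one’s keeps growing, which is the sense of ‘self-limiting’ intended above.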
Not really. You don’t need to co-exist with anything if you out-compete them and then turn their raw materials into paperclips.
You keep making the same statements without integrating my previous arguments into your thinking, yet you fail to expose those arguments as self-contradictory or fallacious. This makes it very frustrating to point them out to you yet again. Frankly, it does not feel worth my while. I gave you an argument, but I am tired of trying to give you an understanding.
You seem willing to come back and make just about any random comment in an effort to have the last word, and that is what I am willing to give you. But you would be deluding yourself to think that this would somehow amount to your being proven right. No, I am simply tired of dancing in circles with you. So, if you feel like dancing solo some more, be my guest.
A side note: these two are not the only reasons to not be persuaded by arguments, although naturally they are the easiest to point out.
My ‘last word’ was here. It is an amicable hat tip to, and expansion on, a reasonable perspective that you provided: how much FAI thinking sounds like a “Rapture of the Nerds”. It also acknowledges our difference in perspective. While we both imagine evolutionary selection pressures as a ‘force’, you see it as one to be embraced and defined by, while I see it as one that must be mastered, or else.
We’re not going to come closer to agreement than that, because we have fundamentally different moral philosophies, which give us different perspectives on the whole field.
My apologies for failing to see that; I did not mean to be antagonizing, just trying to be honest and forthright about my state of mind :-)
I can empathise. I have often found myself in situations in which I am attempting discourse with someone who appears, to me at least, to be incapable of understanding or unwilling to understand what I am saying. It is particularly frustrating when the other party is supporting the position more favoured by the tribe in question and can gain support while needing far less rigour and coherence.
The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue; maximisers with any utility function can easily be eaten by other, more powerful maximisers.
If you look at biology, replicators have so far survived for billions of years with other utility functions. Do you really think biology is “ensuring continued co-existence”, rather than doing the things described in my references? If so, why do you think that? The view doesn’t seem to make any sense.
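A minimal sketch of the relative-strength point in the comment above (again my own illustration; the contest rule, the class Maximiser, and the numbers are assumptions, not anything argued in the thread): whichever maximiser commands more resources absorbs the other, and its utility function plays no role in the outcome.

    from dataclasses import dataclass

    @dataclass
    class Maximiser:
        name: str
        utility_function: str   # a label only; it does not affect who wins
        resources: float

    def contest(a, b):
        """Assumed contest rule: the maximiser with more resources absorbs the other."""
        winner, loser = (a, b) if a.resources >= b.resources else (b, a)
        winner.resources += loser.resources
        loser.resources = 0.0
        return winner

    if __name__ == "__main__":
        coexister = Maximiser("co-exister", "ensure continued co-existence", 10.0)
        clipper = Maximiser("paperclipper", "maximise paperclips", 1000.0)
        survivor = contest(coexister, clipper)
        print(f"{survivor.name} survives with {survivor.resources} resources; "
              f"its goal ({survivor.utility_function!r}) did not matter, only its strength.")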
Yes, Tim. As I pointed out earlier, however, under reasonable assumptions an AI will, upon reflecting on the circumstances that led to its existence as well as on its utility function, conclude that a strictly literal interpretation of that utility function would have to be against the implicit wishes of its originator.