Since when are ‘heh’ and ‘but, yeah’ considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)
> Where is the logical fallacy in the presented arguments
The claim “[Compassion is a universal value] = true. (as we have every reason to believe)” was rejected, both implicitly and explicitly by various commenters. This isn’t a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief.
To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.
“This isn’t a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief.”
But the reasons to change one's view are provided on the site, yet rejected without consideration. How about you read the paper linked under B and, should that convince you, maybe you will have gained enough provisional trust that reading my writings is not a waste of your time, and will suspend your disbelief and follow some of the links on the about page of my blog. Deal?
> How about you read the paper linked under B and should that convince you
I have read B. It isn’t bad. The main problem I have with it is that the language used blurs the line between “AIs will inevitably tend to” and “it is important that the AI you create will”. This leaves plenty of scope for confusion.
I’ve read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute ‘Rational’ morality. This passage was a good illustration:
> Moral relativists need to understand that they can not eat the cake and keep it too. If you claim that values are relative, yet at the same time argue for any particular set of values to be implemented in a super rational AI you would have to concede that this set of values – just as any other set of values according to your own relativism – is utterly whimsical, and that being the case, what reason (you being the great rationalist, remember?) do you have to want them to be implemented in the first place?
You see, I plan to eat my cake but don’t expect to be able to keep it. My set of values are utterly whimsical (in the sense that they are arbitrary and not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.
“My set of values are utterly whimsical [...] The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.”
If that is your stated position, then in what way can you claim to create FAI with this whimsical set of goals? This is the crux, you see: unless you find some unobjectionable set of values (such as in rational morality: ‘existence is preferable over non-existence’ ⇒ utility = continued existence ⇒ modified to ensure continued co-existence with the ‘other’ to make it unobjectionable ⇒ apply rationality in line with microeconomic theory to maximize this utility, et cetera), you will end up being a deluded self-serving optimizer.
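To make that chain concrete, here is a minimal toy sketch in Python (an illustration added for clarity, not taken from the thread or the linked sites): the function names, the multiplicative form of the utility, and all the numbers are assumptions chosen only to show the shape of the ‘existence ⇒ co-existence ⇒ expected-utility maximization’ step.

```python
# Toy illustration only: an assumed formalization of the chain
# 'existence is preferable over non-existence' -> utility = continued
# existence -> modified for co-existence -> maximize expected utility.

def coexistence_utility(p_self_survives, p_other_survives):
    """Utility of continued existence, modified so that the continued
    existence of the 'other' is valued as well (the multiplicative form
    is an arbitrary modelling choice)."""
    return p_self_survives * p_other_survives

def expected_utility(outcomes):
    """Standard expected-utility maximization, in line with the
    microeconomic framing: outcomes are (probability, p_self, p_other)."""
    return sum(p * coexistence_utility(ps, po) for p, ps, po in outcomes)

# An action that preserves both parties vs. one that secures the agent
# at the other's expense.
cooperate = [(1.0, 0.90, 0.95)]
defect = [(1.0, 0.99, 0.10)]

best = max([("cooperate", cooperate), ("defect", defect)],
           key=lambda kv: expected_utility(kv[1]))
print(best[0])  # -> 'cooperate' under this toy utility
```

Whether such a construction is genuinely ‘unobjectionable’ is exactly what the replies below dispute.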
> If that is your stated position, then in what way can you claim to create FAI with this whimsical set of goals?
Were it within my power to do so, I would create a machine that was really, really good at doing things I like. It is that simple. This machine is (by definition) ‘Friendly’ to me.
> you will end up being a deluded self-serving optimizer.
I don’t know where the ‘deluded’ bit comes from, but yes, I would end up being a self-serving optimizer. Fortunately for everyone else, my utility function places quite a lot of value on the whims of other people. My self-serving interests are beneficial to others too, because I am actually quite a compassionate and altruistic guy.
PS: Instead of using quotation marks you can put a ‘>’ at the start of a quoted line. This convention makes quotations far easier to follow. And looks prettier.
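The ‘self-serving but compassionate’ utility described in that reply can be pictured with another toy sketch (again an added illustration; the 0.4 weight is an arbitrary assumption, not anyone's stated preference):

```python
# Toy illustration only: a 'self-serving but compassionate' utility,
# with an arbitrary assumed weight on other people's preferences.

def my_utility(own_satisfaction, others_satisfaction, altruism_weight=0.4):
    """Self-serving in form (it is my utility being maximized), yet
    actions that please others score higher because their satisfaction
    enters the sum with nonzero weight."""
    return (1 - altruism_weight) * own_satisfaction + altruism_weight * others_satisfaction

print(my_utility(0.8, 0.9))  # ~0.84: pleasing others raises my score
print(my_utility(0.8, 0.1))  # ~0.52: ignoring them lowers it
```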
There is no such thing as an “unobjectionable set of values”.
Imagine the values of an agent that wants all the atoms in the universe for its own ends. It will object to any other agent’s values, since it objects to the very existence of other agents: those agents use up its precious atoms and put them into “wrong” configurations.
Whatever values you have, they seem bound to piss off somebody.
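A minimal toy model of that argument (an added illustration; the atom counts and function names are assumptions) shows how the objection falls out automatically: any agent whose utility increases with the atoms it controls loses utility from the mere existence of another agent, and therefore ‘objects’ to that agent’s values whatever they are.

```python
# Toy illustration only: an atom-maximizer objects to every other agent,
# whatever that agent's values, because any other agent uses up atoms.

TOTAL_ATOMS = 1_000_000

def atom_maximizer_utility(atoms_controlled):
    """Utility strictly increasing in the atoms the maximizer controls."""
    return atoms_controlled / TOTAL_ATOMS

def objects_to(other_agents_atom_use):
    """The maximizer 'objects' whenever another agent's existence costs it atoms."""
    alone = atom_maximizer_utility(TOTAL_ATOMS)
    shared = atom_maximizer_utility(TOTAL_ATOMS - other_agents_atom_use)
    return shared < alone

# Any agent that exists at all uses some atoms, so the objection is universal.
print(objects_to(1))        # True even for a minimal agent
print(objects_to(500_000))  # True
```

Nothing in the sketch depends on what the other agent actually values, only on the fact that it exists and uses resources.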
> There is no such thing as an “unobjectionable set of values”.
And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence to be preferable over non-existence, you can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you neither want to exist nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior are, after all, such trivial goals to achieve that they would hardly require – nor value, and thus seek, for that matter – well-thought-out advice.
Alas, the first link seems almost too silly for me to bother with, but briefly:
Unobjectionable—to whom? An agent objecting to another agent’s values is a simple and trivial occurrence. All an agent has to do is to state that—according to its values—it wants to use the atoms of the agent with the supposedly unobjectionable utility function for something else.
“Ensure continued co-existence” is vague and wishy-washy. Perhaps publicly work through some “trolley problems” using it—so people have some idea of what you think it means.
You claim there can be no rational objection to your preferred utility function.
In fact, an agent with a different utility function can (obviously) object to its existence—on grounds of instrumental rationality. I am not clear on why you don’t seem to recognise this.