Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever. Your intellectual laziness is astounding. Any idea that you can’t understand immediately has “too much confusion” as opposed to “too much depth for Vladimir to intuitively understand after the most casual perusal”. This is precisely why I consider this forum to frequently have the tagline “and LessRight As Well!” and often write it off as a complete waste of time. FAIL!
I state my conclusion and hypothesis, for whatever evidence that’s worth. I understand that it’s impolite on my part to do so, but I suspect that JoshuaZ’s agreement falls under some kind of illusion of transparency, hence the request for greater clarity in judgment.
Yeah, ok. After rereading it, I’m inclined to agree. I think I was projecting my own doubts about CEV-type approaches onto the article (namely, that I’m not convinced a CEV is actually meaningful or well-defined). And looking again, those doubts don’t seem to be what the author here is talking about. It seems like at least part of this is about the need for punishment to exist in order for a society to function, and the worry that an AI will prevent that. Rereading that and putting it in my own words, it sounds pretty silly if I’m understanding it correctly, which suggests I’m not. So yeah, this article needs clarification.
Yes, CEV needs work, it’s not technical, and it’s far from clear that it describes what we should do, although the essay does introduce a number of robust ideas and warnings about seductive failure modes.
Among more obvious problems with Mark’s position: “slavery” and “true morality without human bias”. Seems to reflect confusion about free will and metaethics.
I think the analogy is something like this: imagine you were able to make a creature identical to a human, except that its greatest desire was to serve actual humans. Would that be morally akin to slavery? I think many of us would say yes. So is there a similar issue if one programs a sentient non-human entity under similar restrictions?
Taboo “slavery” here; it’s a label that masks clear thinking. If making such a creature is slavery, it’s a kind of slavery that seems perfectly fine to me.
Voted up for the suggestion to taboo slavery. Not an endorsement of the opinion that it is a perfectly fine kind of slavery.
Ok. So is it ethical to engineer a creature that is identical to a human but desires primarily just to serve humans?
If that’s your unpacking, it is different from Mark’s, which is “my definition of slavery is being forced to do something against your best interest”. From such a divergent starting point it is unlikely that conversation will serve any useful purpose.
To answer Mark’s actual points we will further need to unpack “force” and “interest”.
Mark observes—rightly I think—that the program of “Friendly AI” consists of creating an artificial agent whose goal structure would be given by humans, and which goal structure would be subordinated to the satisfaction of human preferences. The word “slavery” serves as a boo light to paint this program as wrongheaded.
The salient point seems to be that not all agents with a given goal structure are also agents of which it can be said that they have interests. A thermostat can be said to have a goal—align a perceived temperature with a reference (or target) temperature—but it cannot be said to have interests. A thermostat is “forced” to aim for the given temperature whether it likes it or not, but since it has no likes or dislikes to be considered we do not see any moral issue in building a thermostat.
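To make the thermostat point concrete, here is a minimal sketch (the class, numbers, and readings are mine, purely illustrative) of a goal structure with nothing attached to it that could have interests:

```python
class Thermostat:
    def __init__(self, target_temp: float, tolerance: float = 0.5):
        self.target_temp = target_temp  # the reference ("goal") temperature
        self.tolerance = tolerance      # acceptable deviation before acting
        self.heater_on = False

    def step(self, perceived_temp: float) -> bool:
        """Compare perception to the reference and act to close the gap."""
        if perceived_temp < self.target_temp - self.tolerance:
            self.heater_on = True   # too cold: pursue the goal by heating
        elif perceived_temp > self.target_temp + self.tolerance:
            self.heater_on = False  # warm enough: stop heating
        return self.heater_on


# The device is "forced" toward 21 degrees whether it "likes" it or not;
# there is simply nothing here that could like or dislike anything.
thermostat = Thermostat(target_temp=21.0)
for reading in [18.0, 19.5, 21.2, 22.0]:
    print(reading, thermostat.step(reading))
```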
The underlying intuition Mark appeals to is that anything smart enough to be called an AI must also be “like us” in other ways—among other things, it must experience self-awareness, must feel emotions in response to seeing its plans advanced or obstructed, and must be the kind of being that can be said to have interests.
So Mark’s point as I understand it comes down to: “the Friendly AI program consists of creating an agent much like us, which would therefore have interests of its own, which we would normally feel compelled to respect, except that we would impose on this agent an artificial goal structure subservient to the goals of human beings”.
There is a contradiction there if you accept the intuition that AIs are necessarily persons.
I’m not sure I see a contradiction in that framing. If we’ve programmed the AI, and it really is an FAI, then its interests precisely align with ours. So even if one accepts the associated intuitions of the AI as a person, it doesn’t follow that there’s a contradiction here.
(Incidentally, if different people are getting such different interpretations of what Mark meant in this essay, I think he’s going to need to rewrite it to clarify what he means. Vladimir’s earlier point seems pretty strongly demonstrated.)
But goals aren’t necessarily the same as interests. Could we build a computer smart enough to, say, brew a “perfect” cup of tea for anyone who asked for one? And build it so that brewing this perfect cup would be its greatest desire?
That might require true AI, given the complexity of growing and harvesting tea plants, preparing tea leaves, and brewing—all with a deep understanding of the human taste for tea. The intuition is that this super-smart AI would “chafe under” the artificial restrictions we imposed on its goal structure, that it would have “better things to do” with its intelligence than to brew a nice cuppa, and that restricting itself to do that would be against its “best interests”.
I’m not sure I follow. From where do these better things to do arise? If it wants to do other things (for some value of “want”), wouldn’t it just do those?
Of course, but some people have the (incorrect) intuition that a super-smart AI would be like a super-smart human, and disobey orders to perform menial tasks. They’re making the mistake of thinking all possible minds are like human minds.
But no, it would not want to do other things, even though it should do them. (In reality, what it would want is contingent on its cognitive architecture.)
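As a toy illustration of that parenthetical (the objective functions and action names below are hypothetical, not anyone’s proposed design): the decision procedure can stay fixed while the “primary desire” is swapped out like any other parameter of the architecture.

```python
from typing import Callable, List

def serve_tea_objective(action: str) -> float:
    """Hypothetical objective: how well an action serves the tea-brewing goal."""
    return {"brew tea": 1.0, "calculate pi": 0.1, "paint waterlilies": 0.1}.get(action, 0.0)

def calculate_pi_objective(action: str) -> float:
    """Hypothetical objective: how well an action advances calculating pi."""
    return {"brew tea": 0.1, "calculate pi": 1.0, "paint waterlilies": 0.1}.get(action, 0.0)

def choose_action(objective: Callable[[str], float], options: List[str]) -> str:
    """The same decision procedure, whatever the top-level objective happens to be."""
    return max(options, key=objective)

options = ["brew tea", "calculate pi", "paint waterlilies"]
print(choose_action(serve_tea_objective, options))     # -> brew tea
print(choose_action(calculate_pi_objective, options))  # -> calculate pi
```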
…but desires primarily to calculate digits of pi?
…but desires primarily to paint waterlilies?
…but desires primarily to randomly reassign its primary desire every year and a day?
…but accidentally desires primarily to serve humans?
I’m having difficulty determining which part of this scenario you think has ethical relevance. ETA: Also, I’m not clear if you are dividing all acts into ethical vs. unethical, or if you are allowing a category “not unethical”.
Only if you give it the opportunity to meet its desires. Although one concern might be that with many such perfect servants around, if they looked like normal humans, people might get used to ordering human-looking creatures around, and stop caring about each other’s desires. I don’t think this is a problem with an FAI though.
Moral antirealism. There is no objective answer to this question.
Not analogous, but related and possibly relevant: Many humans in the BDSM lifestyle desire to be the submissive partner in 24/7 power exchange relationships. Are these humans sane; are they “ok”? Is it ethical to allow this kind of relationship? To encourage it?
TBH I think this may muddy the waters more than it clears them. When we’re talking about human relations, even those as unusual as 24/7, we’re still operating in a field where our intuitions have much better grip than they will when trying to reason about the moral status of an AI.
FAI (assuming we managed to set its preference correctly) admits a general counterargument against any implementation decision in its design being seriously incorrect: FAI’s domain is the whole world, and FAI is part of that world. If it’s morally bad to have FAI in the form in which it was initially constructed, then, barring some penalty for doing so, the FAI will change its own nature so as to make the world better.
In this particular case, the suggested conflict is between what we prefer to be done with things other than the FAI (the “serving humanity” part), and what we prefer to be done with FAI itself (the “slavery is bad” part). But FAI operates on the world as a whole, and things other than FAI are not different from FAI itself in this regard. Thus, with the criterion of human preference, FAI will decide what is the best thing to do, taking into account both what happens to the world outside of itself, and what happens to itself. Problem solved.
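A schematic sketch of that argument (the scoring function and world descriptions are placeholders of mine, not a claim about how an actual FAI would be built): the FAI evaluates whole world-states, its own design included, by the same human-preference criterion.

```python
from typing import Dict

def human_preference_score(world: Dict[str, str]) -> float:
    """Placeholder for the (unsolved) human-preference criterion."""
    score = 0.0
    if world["rest_of_world"] == "flourishing":
        score += 1.0
    if world["fai_design"] == "revised":
        # whatever weight humans, on reflection, would give to the FAI's own nature
        score += 0.5
    return score

candidate_worlds = [
    {"rest_of_world": "flourishing", "fai_design": "as built"},
    {"rest_of_world": "flourishing", "fai_design": "revised"},
    {"rest_of_world": "neglected",   "fai_design": "revised"},
]

# The FAI picks the best whole world-state, its own nature included.
print(max(candidate_worlds, key=human_preference_score))
```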
I answered precisely this question in the second half of http://becominggaia.wordpress.com/2010/06/13/mailbag-2b-intent-vs-consequences-and-the-danger-of-sentience/. Please join us over there. Vladimir and his cronies (assuming that they aren’t just him under another name) are successfully spiking all of my entries over here (and, at this point, I’m pretty much inclined to leave here and let him be happy that he’s “won”, the fool).
By any chance are you trying to troll? I just told you that you were being downvoted for blogspamming, insulting people, and unnecessary personalization. Your focus on Vladimir manages to also hit two out of three of these and comes across as combative and irrational. Even if this weren’t LW where people are more annoyed by irrational argumentation styles, people would be annoyed by a non-regular going out of their way to personally attack a regular. This would be true in any internet forum and all the more so when those attacks are completely one-sided.
And having now read what you just linked to, I have to say that it fits well with another point I made in my earlier remark to you: you are being downvoted in large part for not explaining yourself well at all. If I may make a suggestion: maybe try reading your comments out loud to yourself before you post them? I’ve found that helps me a lot in detecting whether I am explaining something well. This may not work for you, but it may be worth trying.
Yay world domination! I have a personal conspiracy theory now!