I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is almost no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which suggests that most contributors are hedonistic consequentialist utilitarians.
Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?
If so, I think serious problems would be encountered, and the goal of maximizing happiness would not be accomplished.
“Utilons” are a stand-in for “whatever it is you actually value”. The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.
See also: Coherent Extrapolated Volition
Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature—as the questioner points out.
It is understood that an AI used for decision making will affect all of humanity, regardless of its implementation. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI makes would favor a “utility” calculation. (Spare me the argument about utilons; as an economist I have previously been neck deep in Bentham.)
The discussion simultaneously dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this topic is much more important than it is given credit for, and I have yet to see a compelling argument otherwise.
From the people (researchers) I have talked to about this specifically, the responses I have gotten are: “I’m not interested in that, I want to know how intelligence works” or “I just want to make it work, I’m interested in the science behind it.” I think this attitude is pervasive, and it ignores the subject.
Of course—which makes them useless as a metric.
Since you seem to speak for everyone in this category—how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.
The topic of what the goals of the AI should be has been discussed an awful lot.
I think the combination of moral philosopher and machine intelligence expert must be appealing to some types of personality.
Maybe I’m just dense, but I have been around a while and searched, yet I haven’t stumbled upon a top-level post or anything of the like here, from the FHI or SIAI (other than ramblings about what AI could theoretically give us), on OB, or elsewhere that either breaks the question down or gives a general consensus.
Can you point me to where you are talking about?
Probably the median of such discussions was on http://www.sl4.org/
Machines will probably do what they are told to do—and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
We have some books on the topic:
Moral Machines: Teaching Robots Right from Wrong—Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine—J. Storrs Hall
...and probably hundreds of threads—perhaps search for “friendly” or “volition”.