“Utilons” are a stand-in for “whatever it is you actually value”. The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.
See also: Coherent Extrapolated Volition
Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.
It is understood that the impact of an AI will fall on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI makes would favor a “utility” calculation (spare me the argument about utilons; as an economist I have previously been neck-deep in Bentham).
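To make that concrete, here is a minimal sketch of the kind of calculation implied; the notation ($n$ people, candidate actions $a$ from a set $A$, per-person utility functions $u_i$) is purely illustrative and not drawn from the link:

$$a^{*} = \arg\max_{a \in A} \sum_{i=1}^{n} u_i(a)$$

Whatever $u_i$ is taken to measure (happiness, preference satisfaction, utilons), the majority-rule flavor comes from summing over everyone: an action that benefits many can outweigh harm to a few.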
The discussion simultaneously dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than it is generally given credit for, and I have yet to see a compelling argument otherwise.
From the people (researchers) I have talked to about this specifically, the responses I have gotten are: “I’m not interested in that; I want to know how intelligence works” or “I just want to make it work; I’m interested in the science behind it.” I think this attitude is pervasive, and it amounts to ignoring the subject.
Of course—which makes them useless as a metric.
Since you seem to speak for everyone in this category—how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.