People talk as if inconsistencies and contradictions in our value systems mean the whole enterprise of emulating human morality is worthless. Of course human value systems are contradictory; you can still implement a contradictory value system if you’re willing to accept the occasional miscalculation.
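As a toy illustration of what I mean (my own sketch, nothing from the CEV document; the options and preferences are made up): a set of pairwise value judgments can be flatly cyclic and a simple scoring rule will still return a choice. The price is that whichever option wins, at least one stated preference gets overridden, which is exactly the “occasional miscalculation” I’m willing to accept.

```python
# Hypothetical, intransitive pairwise value judgments: A over B, B over C, C over A.
# No choice can honor all three, but a Copeland-style count of head-to-head wins
# still picks something; the losing preference is simply overridden.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}
options = ["A", "B", "C"]

def wins(option):
    """Number of head-to-head comparisons this option wins."""
    return sum((option, other) in prefers for other in options if other != option)

best = max(options, key=wins)          # all three tie here, so the tie-break is arbitrary
overridden = [(a, b) for (a, b) in prefers if b == best]
print(f"chosen: {best}; stated preferences overridden: {overridden}")
# e.g. chosen: A; stated preferences overridden: [('C', 'A')]
```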
A deeper problem, in my opinion, is the nature of our behavior. It seems that in a lot of situations people make decisions first and then justify them later, often subconsciously. The only way to accurately emulate this is to have a machine that also makes decisions first (perhaps based on some ‘neural net’ simulation obtained from scanning human brains, or even some random process) and then justifies them later. Clearly this is unacceptable, so you need a machine that can justify its decisions first. CEV attempts to address this. Instead of saying “do what a person would do,” the idea is to “do what a person or group of people would consider morally justifiable behavior in others.”
There are two kinds of inconsistency; are both dealt with in CEV?
There is the internal inconsistency of an individual’s (each individual’s?) morality: things like pushing the fat guy onto the trolley tracks to save five skinny guys.
There is also (possibly) inconsistency between individual humans. A smart, good friend of mine of the last 40 years has very different politics from mine, suggesting a different set of values. Sure, we agree you shouldn’t kill random people in the city and so on. But it seems we disagree on the kinds of things that justify forced collective action (taxation, laws). As a simple and frustrating example, he would like to see flag-burning made illegal, which is nuts to me.
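To make the worry concrete (a toy example of my own, not anything CEV itself specifies, with made-up policy options): even if my friend and I and a third person are each perfectly consistent individually, merging our rankings by simple majority vote can produce a contradictory group preference, the classic Condorcet cycle. Any scheme for combining different humans’ values has to say what happens in cases like this.

```python
from itertools import combinations

# Three hypothetical people, each with an internally consistent ranking (best to worst).
rankings = [
    ["tax_more", "ban_flag_burning", "status_quo"],   # person 1
    ["ban_flag_burning", "status_quo", "tax_more"],   # person 2
    ["status_quo", "tax_more", "ban_flag_burning"],   # person 3
]

def majority_prefers(a, b):
    """True if a majority of the rankings place option a above option b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings)
    return votes > len(rankings) / 2

for a, b in combinations(rankings[0], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"majority prefers {winner} over {loser}")
# Prints three majority preferences that form a cycle:
# tax_more > ban_flag_burning, ban_flag_burning > status_quo, status_quo > tax_more.
```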
Is there a plan to have CEV handle the differences in values between different humans? And where do we draw the line at “human”? A sociopath is pretty obviously human; must CEV be consistent with both my values and a sociopath’s values? If not, are we just picking a subset of humanity, defining a “we” and a “they,” and developing “our” CEV?
Or you could start a project to research whether the morally relevant subset of value is also a non-contradictory subset of value. Just a thought.