An AGI project would presumably need a generally accepted, watertight, axiom-based, formal system of ethics whose rules can reliably be applied right up to limit cases. I am guessing that this is why Eliezer et al. are arguing as though such an animal exists.
If it does, please point to it. The FHI has ethics specialists on its staff; what do they have to say on the subject?
Based on the current discussion, such an animal, at least as far as ‘generally accepted’ goes, does not exist. My belief is that what we have are more or less consensual guidelines that apply to situations and choices within ordinary human experience. Unknown’s examples, for instance, tend to be ‘middle of the range’ ones. When we approach the limits of everyday experience, these guidelines break down.
Eliezer has not provided us with a formal framework within which summing a single experience over many people can be compared with summing many experiences over one person. For me it stops there.
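To make the gap concrete, in my own notation (not anything Eliezer has offered): the comparison in question is between S_A = sum over i = 1..N of u(e), the aggregate of one small experience e each across N different people, and S_B = sum over j = 1..M of u(e'), the aggregate of M experiences e' borne by a single person. The missing framework is whatever axiom would license treating S_A and S_B as quantities on the same scale, i.e. making interpersonal and intrapersonal sums commensurable.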