I was referring, for instance, to the point that there are evolutionary reasons why we’d expect to find (as we do) that an understanding of deontological injunctions is fairly universal among humans.
EY’s theory, linked in the first post, that deontological injunctions evolved as some sort of additional defense against black swan events does not strike me as especially convincing. The cortex is intrinsically predictive and consequentialist at a low level, but simple deontological rules are vast computational shortcuts.
An animal brain learns the hard way, the way AIXI does: thoroughly consequentialist at first, but once predictable patterns are learned at higher levels, they can sometimes be compiled down to simpler rules for quick decisions.
Even non-verbal animals find ways to pass down some knowledge to their offspring, but in humans this is vastly amplified through language.
Every time a parent tells a child what to do, the parent is transmitting complex consequentialist results down to the younger mind in the form of simpler, cached deontological behaviors. For example, it would be painful for the child to learn firsthand the consequentialist account of why stealing is detrimental (the tribe will punish you).
Once this machinery was in place, it could extend over generations and develop into more complex cultural and religious deontologies. All of this can be accomplished through cortical reinforcement learning as the child develops.
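A minimal toy sketch of that compilation idea (the payoffs are invented for illustration, and this is not meant as a model of cortical learning): an expensive consequentialist evaluation whose verdict gets cached as a cheap rule, and which a parent can pre-populate directly.

```python
# Toy sketch (invented payoffs, not a model of the cortex): expensive
# consequentialist evaluation compiled into a cheap cached rule.

def expected_outcome(action: str) -> float:
    """Slow, consequentialist path: predict and score the outcome of an action.
    The prediction is stubbed with fixed payoffs purely for illustration."""
    payoffs = {"steal": -10.0, "share": 2.0}  # stealing gets punished by the tribe
    return payoffs.get(action, 0.0)

# The "deontological" layer: a table of cached verdicts that skips prediction.
cached_rules: dict[str, bool] = {}

def permitted(action: str) -> bool:
    if action in cached_rules:                 # fast path: the cached injunction
        return cached_rules[action]
    verdict = expected_outcome(action) >= 0.0  # slow path: consequentialist reasoning
    cached_rules[action] = verdict             # compile the result into a simple rule
    return verdict

# A parent handing down "don't steal" amounts to pre-populating the child's cache,
# sparing the child the firsthand consequentialist lesson:
cached_rules["steal"] = False
```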
Feral children, for all intents and purposes, act like feral animals. Human minds are cultural/linguistic software phenomena.
Not to mention that conveying a concept to a human carries no instructions; programming concepts into an AI is all instructions.
I’m not aware of any practical approach to AI which consists of programming concepts directly into an AI. All modern approaches program only the equivalent of an empty brain; the concepts, and the resulting mind, form through learning.
Human concepts are expressed in natural language, and for an AGI to compete with humans it will need to learn existing human knowledge. Learning natural language thus seems like the most practical approach.
“Expected utility maximisation” is, by definition, what actually represents our best outcome. To the extent that it doesn’t, it is a failure of our ability to grasp and apply the concept, not a failure in the concept itself.
The problem is this: if we define an algorithm to represent our best outcome and use it as the standard of rationality, and the algorithm’s predictions then differ significantly from actual human decisions, is that a problem with the algorithm or with the human mind?
If we had an algorithm that represented a human mind perfectly, then that mind would always be rational by that definition.
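To make the formal standard concrete, this is all the “expected utility maximisation” side of that comparison computes (a minimal sketch; the lotteries and numbers are hypothetical, chosen only for illustration):

```python
# Expected utility of a gamble: the probability-weighted sum of outcome utilities.
# The lotteries below are hypothetical, chosen only to show what the standard computes.

def expected_utility(lottery: list[tuple[float, float]]) -> float:
    """lottery is a list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in lottery)

safe  = [(1.0, 100.0)]              # a sure 100
risky = [(0.5, 250.0), (0.5, 0.0)]  # a coin flip between 250 and nothing

print(expected_utility(safe))   # 100.0
print(expected_utility(risky))  # 125.0 -- the standard says take the gamble

# When a real person prefers the sure thing over those stated utilities, that is
# exactly the divergence asked about above: a flaw in the person, or in the standard?
```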
Even if deontological injunctions are only transmitted through language, they are based on human predispositions (read: brain wiring) to act morally and cooperate, which have evolved.
This applies to animals too, to some extent; there has been research on altruism in animals.