EY: “human cognitive psychology has not had time to change evolutionarily over that period”
Under selective pressure, human populations can change significantly in less than two thousand years, and have. Various behavioral traits are highly heritable; Genghis Khan spread his behavioral genotype throughout Asia. (For this discussion this is a nitpick, but I dislike seeing false memes spread.)
re: FAI and morality
From my perspective morality is a collection of rules that make cooperative behavior beneficial. There are some rules that should apply to any entities that compete for resources or can cooperate for mutual benefit. There are some rules that improved fitness in our animal predecessors and have become embedded in the brain structure of the typical human. There are some rules that are culture specific and change rapidly as the environment changes. (When your own children are likely to die of starvation, your society is much less concerned about children starving in distant lands. Much of modern Western morality is an outcome of the present wealth and security of Western nations.)
As a start I suggest that an FAI should first discover those three types of rules, including how the rules vary among different animals and different cultures. (This would be an ongoing analysis that would evolve as the FAI's capabilities increased.) For cultural rules, the FAI would look for a subset of rules that permit different cultures to interact and prosper. Rules such as "kill all strangers" would be discarded. Rules such as "forgive all trespasses" would be discarded because they don't permit defense against aggressive memes. A modified form of tit-for-tat might emerge: some punishment, some forgiveness, recognition that bad events happen with no one to blame, some allowance for misunderstandings, some allowance for penance or regret, some tolerance for diversity. Another good rule might be to provide everyone with a potential path to a better existence, i.e., use carrots as well as sticks. Look for a consistent set of cultural rules that furthers happiness, diversity, sustainability, growth, and increased prosperity. Look for rules that are robust, i.e., that give acceptable results under a variety of societal environments.
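The "modified tit-for-tat" above can be sketched in code. This is purely illustrative, not anything from the original discussion: it implements generous tit-for-tat in a noisy iterated prisoner's dilemma, where the `forgiveness` parameter stands in for "some forgiveness / allowance for misunderstandings" and the `noise` parameter for "bad events happen with no one to blame." All names and numbers are my assumptions.

```python
import random

COOPERATE, DEFECT = "C", "D"

# Standard iterated prisoner's dilemma payoffs (row player's score).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def generous_tit_for_tat(opponent_last, forgiveness=0.1, rng=random):
    """Cooperate first; then copy the opponent's last move, except
    forgive a defection with probability `forgiveness`."""
    if opponent_last is None or opponent_last == COOPERATE:
        return COOPERATE
    return COOPERATE if rng.random() < forgiveness else DEFECT

def play(rounds=1000, noise=0.05, forgiveness=0.1, seed=0):
    """Two generous tit-for-tat players; `noise` is the chance an
    intended move is flipped (a bad event with no one to blame).
    Returns each player's average score per round."""
    rng = random.Random(seed)
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        a = generous_tit_for_tat(last_b, forgiveness, rng)
        b = generous_tit_for_tat(last_a, forgiveness, rng)
        # Noise: an intended move is occasionally flipped.
        if rng.random() < noise:
            a = DEFECT if a == COOPERATE else COOPERATE
        if rng.random() < noise:
            b = DEFECT if b == COOPERATE else COOPERATE
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a / rounds, score_b / rounds
```

With `forgiveness=0` and any noise, two strict tit-for-tat players lock into long retaliation cycles; a little forgiveness lets them recover toward mutual cooperation, which is the point of the modification.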
A similar analysis of animal morality would produce another set of rules, as would an analysis of rules for transactions between any entities. The FAI would then use a weighted sum of the three types of moral rules. The weights would change as society changed, i.e., while most of society consists of humans, human cultural rules would be given the greatest weight. The FAI would plan for future changes in society by choosing rules that permit a smooth transition from a human-centered society to an enhanced-human-plus-AI society and finally to a future of AIs with human origins.
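The weighted-sum mechanism above can be sketched minimally. Everything here is an illustrative assumption of mine (the rule-type names, scores, and weights are made up); it just shows how normalizing the weights lets them track a changing society.

```python
def blended_score(scores, weights):
    """Weighted sum of per-rule-type scores for one candidate action.
    `scores`: how well the action satisfies each rule type, in [0, 1].
    `weights`: relative importance of each rule type; normalized here,
    so updating them as society's makeup changes is all that's needed."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] / total for k in scores)

# Hypothetical scores for one action against the three rule types.
scores = {"universal": 0.9, "animal": 0.6, "human_cultural": 0.8}

# Early, human-dominated society: human cultural rules dominate.
early = blended_score(scores, {"universal": 1.0, "animal": 1.0, "human_cultural": 8.0})

# Later, mixed human-plus-AI society: the weights have shifted.
late = blended_score(scores, {"universal": 5.0, "animal": 1.0, "human_cultural": 4.0})
```

The design choice worth noting is that only the weights change over time, never the rule sets themselves, which is one way to get the "smooth transition" the paragraph asks for.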
Humans might only understand the rules that applied to humans. The FAI would enforce a different subset of rules for non-human biological entities and another subset for AIs. Other rules would guide interactions between different types of entities. (My mental model is of a body made up of cells, each expressing proteins in a manner appropriate to its specific tissue while contributing to and benefiting from the complete animal system. Rules for each specific cell type, and rules for cells interacting.)
The transition shouldn’t feel too bad to the citizens at any stage and the FAI wouldn’t be locked into an outdated morality. We might not recognize or like our children but at least we wouldn’t feel our throats being cut.