In the space of all possible values, human values have occupied a very small space, with the main change being who gets counted as a moral agent (the consequences of small moral changes can be huge, but the changes themselves don’t seem large in an absolute sense).
Or, if you prefer, I think it’s possible the AI moral value changes will range so widely that human values can essentially be seen as static in comparison.
I think we need a better definition of the problem we want to study here. Beliefs and values are probably not so hard to distinguish.
From this page →
Human values are, for example:
civility, respect, consideration;
honesty, fairness, loyalty, sharing, solidarity;
openness, listening, welcoming, acceptance, recognition, appreciation;
brotherhood, friendship, empathy, compassion, love.
I think none of these could be called a belief.
If these values define the axes of a space of moral values, then I am not sure an AI could occupy a much bigger region of it than humans do. (How selfish, unwelcoming, or dishonest could an AI or a human possibly be?)
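As a rough illustration of what I mean by a space of moral values, here is a minimal sketch, assuming we simply score agents on a few bounded value axes (the axes, the [0, 1] bounds, and the example agents are my own hypothetical choices, not anything from the post):

```python
import numpy as np

# Hypothetical sketch: moral values as coordinates in a bounded space.
# The axes (honesty, openness, sharing) and the example agents are
# invented purely for illustration.
AXES = ["honesty", "openness", "sharing"]

# Each agent is a point in [0, 1]^3: 0 = minimal, 1 = maximal expression.
humans = np.array([
    [0.6, 0.5, 0.4],   # a fairly typical person
    [0.9, 0.7, 0.8],   # an unusually honest, open, generous person
    [0.2, 0.3, 0.1],   # a dishonest, closed, stingy person
])

ais = np.array([
    [1.0, 1.0, 1.0],   # maximally honest, open, sharing AI
    [0.0, 0.0, 0.0],   # the opposite extreme
])

def bounding_box_volume(points: np.ndarray) -> float:
    """Volume of the axis-aligned box spanned by a set of agents."""
    return float(np.prod(points.max(axis=0) - points.min(axis=0)))

print("human volume:", bounding_box_volume(humans))  # ≈ 0.196
print("AI volume:   ", bounding_box_volume(ais))     # 1.0
```

If the axes really are bounded things like honesty or openness, the AI region can be at most the whole unit cube, which is the point of the question in the parenthesis: there is only so much more of the space left to occupy.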
On the contrary: because we are selfish (is that one of the moral values we are trying to analyze?), we want an AI to be more open, a better listener, more honest, a better friend (etc.) than we want or plan to be, or at least than we are now. (So do we really want an AI to be like us?)
I also see a question about the optimal level of these values. For example, would we like to see an agent that is maximally honest, welcoming, and sharing toward anybody? (An AI in your house that welcomes thieves, tells them whatever they ask, and shares everything with them?)
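As a sketch of why the optimum may not be "maximal", assume some fraction of the people the agent is honest with are hostile; then the best honesty level sits somewhere in the middle. The payoff numbers and the 90/10 mix of benign and hostile visitors below are purely hypothetical assumptions:

```python
# Toy sketch: utility of an honesty level h in [0, 1] when some visitors
# are hostile (e.g. thieves at the door). All numbers are assumptions
# chosen only for illustration.
FRIENDLY = 0.9   # fraction of benign visitors
BENEFIT = 1.0    # value of being honest with a benign visitor
HARM = 20.0      # cost of being fully honest with a thief

def household_utility(honesty: float) -> float:
    gain = FRIENDLY * BENEFIT * honesty
    loss = (1 - FRIENDLY) * HARM * honesty ** 2  # harm grows sharply near full honesty
    return gain - loss

levels = [i / 100 for i in range(101)]
best = max(levels, key=household_utility)
print("optimal honesty level:", best)  # well below 1.0 with these numbers
```

Nothing hinges on the exact numbers; the point is only that a value like honesty can have an interior optimum once you count the harm from sharing with the wrong audience.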
And last but not least: if we end up with many AI agents, then some kind of selfishness and laziness could help, for example to prevent the creation of a singleton or a fanatical mob of such agents. In the evolution of humankind, selfishness and laziness may have helped human groups survive. And a lazy paperclip maximizer could save humankind.
We need a good mathematical model of laziness, selfishness, openness, brotherhood, friendship, etc. We have hard philosophical tasks with a deadline. (The singularity is coming, and the “dead” in the word “deadline” could be very real.)
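To make "mathematical model of laziness" a bit more concrete, here is a minimal sketch under my own assumptions (the effort penalty and the quadratic cost are invented for illustration; this is not an established model): a lazy maximizer subtracts an effort cost from its payoff, so beyond some point more paperclips are not worth the work.

```python
# Toy sketch of a "lazy" maximizer: utility = output - lambda * effort.
# The quadratic effort cost and the value of LAZINESS are assumptions
# made purely for illustration.
LAZINESS = 0.5  # lambda: how strongly effort is penalized

def utility(paperclips_per_day: float) -> float:
    effort = paperclips_per_day ** 2        # effort grows faster than output
    return paperclips_per_day - LAZINESS * effort

# A perfectly diligent maximizer (lambda = 0) would push production without
# bound; a lazy one stops where marginal output equals marginal effort cost,
# here at 1 / (2 * LAZINESS) = 1 paperclip per day.
best = max(range(0, 101), key=utility)
print("lazy optimum:", best, "paperclips/day, utility:", utility(best))
```

Even this crude penalty term puts a ceiling on how hard the agent pushes, which is the sense in which laziness might be a safety-relevant value rather than only a vice.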
I would like to add some values which I see as not so static, and which are probably not so much a question of morality:
Privacy and freedom vs. security and power.
Family, society, tradition.
Individual equality. (disparities of wealth, the right to work, …)
Intellectual property. (the right to own?)