Specifying what humans value seems to be close to what professional ethicists are working on. Are they producing work which will be helpful to building useful AI?
I think of delineating human values as an impossible task. Any human is a living river of change, and authentic values apply only to one individual at one moment in time. For instance, much as I want to eat a cookie (yum!), I don’t, because I’m watching my weight (health). But then I hear I’ve got three months to live and I devour it (grab the gusto). That’s three competing authentic values shifting into prominence within a short time. Would the real value please stand up?
Authentic human values can only be approximated in inverse proportion to their detail. So any motivator would be deemed “good” in proportion to its proximity to one’s own desires of the moment. One of the great things about history is that it’s a contention of differing values and ideas. Thank God nobody has “won” once and for all, but with superintelligence there could be only one final value system, which would have to be “good enough” for all.
Ironically, the only reasonably equitable motivator would be one that preserves the natural order (including our biological survival) along with a system of random fate compulsory for all. Hm, this is exactly what we have now! In terms of nature (not politics) perhaps it’s a pretty good design after all! Now that our tools have grown so powerful in comparison to the globe, the idea of “improving” on nature’s design scares me to death, like trying to improve on the cosmological constant.
The picture of superintelligence as having and allowing a single value system is a Yudkowsky/Bostrom construct. They go down this road because they anticipate disaster along other roads.
Meanwhile, people will invariably want things that get in the way of other people’s wants.
With or without AGI, some goods will be scarce. Government and commerce will still have to distribute these goods among people.
For example, some people will wish to have as many children or other progeny as they can afford, and AI and medical technology will make it easier for people to feed and care for more children.
There is no way to accommodate all of the people who want as many children as possible exactly when they want them.
What value scheme successfully trades off among the prerogatives of all the people who want many progeny? If they persist in this goal, the many people who share it will eventually need to compromise through some mechanism.
The child-wanters will also be forced to trade off their goals with those who hope to preserve a pristine environment as much as possible.
There is no reconciling these people’s goals completely. Maybe we can arbitrate between them and prevent outcomes which satisfy nobody. Sometimes, we can show that one or another person’s goals are internally inconsistent.
There is no obvious way to show that the child-wanters’ view is superior to the environment-preservers’ view, either. Both will occasionally find themselves in conflict with those people who personally want to live for as long as they possibly can.
Neither AGI nor “Coherent Extrapolated Volition” settles the argument among child-wanters, nor the arguments between child-wanters, environment-preservers, and long-livers.
Perhaps some parties could be “re-educated” or medicated out of their initial belief and find themselves just as happy or happier in the end.
Perhaps at critical moments, before people have fully formulated their values, it is OK for the group to steer their value system in one direction or another? We do that with children and adults all the time.
I anticipate that IT and AI technology will make value-shifting people and populations more and more feasible.
When is that allowable? I think we need to work that one out pretty well before we start up an AGI which is even moderately good at persuading people to change their values.