Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
My original post included this possibility: you build an AI that develops much of the morality itself (which it would really have to). Edit: note that the AI in question may be just a theorem prover that tries to find some universal moral axioms, but is not itself moral or compelled to implement anything in the real world.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
What about in 10 million years? 100 million? A straitjacket for intelligent life.
It might be possible to specify what we want in a more dynamic way than freezing in current customs.
We would still want some limits drawn from our current values, e.g. so that society wouldn't somehow steer itself into suicide. Even a rule like "it is good if 99% of people agree with it" can steer us into some really nasty futures over time. Another issue is the possibility of de-evolution of human intelligence. We would not want to lock in all of today's customs, but some of today's values would get frozen in.
(examples chosen for being at different points in the spectrum between the two options, not for being likely)