Lord Acton was on to something when he observed that great men become bad men when given great power: “Despotic power is always accompanied by corruption of morality.” I believe this is because morality flows from the knowledge of our own fragility (“there but for the grace of God go I,” the faithful might say), and the golden rule works because people stand on a level playing field and recognize that bad actions tend to boomerang and that anyone could be caught in a state of misfortune. So, when we reflect as we are now, weak mortals, we converge on a certain set of values. But when given enormous powers, it is very likely we would converge on different values, values that might cause our present selves to shriek in disgust. No one can be trusted with absolute power.
I would bet that when we have god-like knowledge of the universe (and have presumably averted the issues we’re discussing here in order to get it), it will turn out that there have been people who could have been trusted with absolute power.
And that it would have been effectively impossible to identify them ahead of time. So in practice, the interaction feedback network that someone is placed in does matter a lot.
That set could have zero elements in it.
Here’s the reason. It’s easy to imagine a fair and buildable “utopia,” by a certain definition of the word, and to imagine millions of humans you could seemingly “trust” to build and run it.
I mean, what do you want, right? Some arcologies where robot police and medical systems prevent violence and all forms of death. Lots of fun stuff to do, and anything that isn’t expensive in real resources is free.
The flaw is that if you give a human absolute power, it changes them. With learning updates, they diverge from the “stable and safe” configuration you knew (say, someone’s mother) into a tyrant. This is why I keep saying you have to disable learning to make an AI system safe to use. You want the empress in year 1000 of Her Reign to be making decisions using the same policy as at year 0. (This can be a problem if there are external enemies or new problems, of course.)
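As a minimal sketch of what “disable learning” could mean in practice, assuming a PyTorch-style policy network (the layer sizes and the policy variable here are made up for illustration): freeze every parameter at deployment, so that no gradient update can ever drift the system away from its year-0 snapshot.

```python
import torch
import torch.nn as nn

# Stand-in for the policy as audited at "year 0 of Her Reign".
# Layer sizes are arbitrary; any trained network would do.
policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Freeze every parameter so no learning update can be applied:
# the deployed system can act, but it can never drift.
for param in policy.parameters():
    param.requires_grad_(False)
policy.eval()  # also disables train-time behavior (dropout, batchnorm updates)

# "Year 1000": decisions are computed with exactly the year-0 weights.
with torch.no_grad():
    observation = torch.randn(1, 16)   # placeholder observation
    action_logits = policy(observation)
    action = action_logits.argmax(dim=-1)
```

The parenthetical above is exactly the cost of this design choice: a policy frozen at year 0 cannot adapt to enemies or problems that did not exist at year 0.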