But the atheism I have in mind, here, trusts only in the Self, at least as the power at stake scales – and in the limit, only in this slice of Self, the Self-Right-Now. Ultimately, indeed, this Self is the only route to a good future.
I distrust even my Self-Right-Now. “Power corrupts” is a thing, and I’m not sure if being handed direct access to arbitrary amounts of power is safe for anyone, including me-right-now.
I also don’t know how to do “reflection”, or make the kinds of philosophical/intellectual progress needed to eventually figure out how to safely handle arbitrary amounts of power, except as part of a social process along with many peers/equals (i.e., people that I don’t have large power differentials with).
It seems to me that nobody should trust themselves with arbitrary amounts of power or be highly confident that they can successfully “reflect” by themselves, so the “deeper atheism” you talk about here is just wrong or not a viable option? (I’m not sure what your conclusions/takeaways for this post or the sequence as a whole are though, so am unsure how relevant this point is.)
Lord Acton was on to something when he observed that great men become bad men when given great power. “Despotic power is always accompanied by corruption of morality.” I believe this is because morality flows from the knowledge of our own fragility... there but for the grace of God go I, the faithful might say... and the golden rule works because people stand on a level playing field and recognize that bad actions have a tendency to boomerang and that anyone could be caught in a state of misfortune. So, when we reflect as we are now, weak mortals, we converge on a certain set of values. But, when given enormous powers, it is very likely we will converge on different values, values which might cause our present selves to shriek in disgust. No one can be trusted with absolute power.
I would bet that when we have god-like knowledge of the universe (and presumably have successfully averted the issues we’re discussing here in order to get it), it will turn out that there have been people who could have been trusted with absolute power.
And that it would have been effectively impossible to identify them ahead of time, so in practice the network of interactions and feedback that someone is placed in does matter a lot.
The reason is that while it’s easy to imagine a fair and buildable “utopia”, by a certain definition of the word, and to imagine millions of humans you could “trust” to build it, that trust doesn’t survive the process.
I mean, what do you want, right? Some arcologies where robot police and medical systems prevent violence and all forms of death. Lots of fun stuff to do; anything that isn’t expensive in real resources is free.
The flaw is that if you give a human absolute power, it changes them. With ongoing learning updates, they diverge from the “stable and safe” configuration you knew (say, someone’s mother) into a tyrant. This is why I keep saying you have to disable learning to make an AI system safe to use. You want the empress in year 1000 of Her Reign to be making decisions using the same policy as at year 0. (This can be a problem if there are external enemies or new problems, of course.)
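The “disable learning” point can be illustrated with a toy sketch, under minimal assumptions (all names here – `Policy`, the action labels, the reward scheme – are illustrative inventions, not from any real system): a frozen policy makes the same decision at step 1000 as at step 0, while an online learner drifts once the feedback it receives starts rewarding power-seeking.

```python
class Policy:
    """Toy policy: preference weights over actions, decided by argmax."""

    def __init__(self, weights):
        self.weights = dict(weights)

    def act(self):
        # Deterministic: always pick the currently highest-weighted action.
        return max(self.weights, key=self.weights.get)

    def update(self, action, reward):
        # Online learning: shift weight toward rewarded actions.
        self.weights[action] += reward


def run(policy, steps, learning):
    """Run `steps` decisions; with learning on, feedback reshapes the policy."""
    history = []
    for _ in range(steps):
        action = policy.act()
        history.append(action)
        if learning:
            # A corrupting feedback signal: power-seeking is always rewarded.
            reward = 1.0 if action == "seize_power" else -0.1
            policy.update(action, reward)
    return history


frozen = Policy({"serve_public": 1.0, "seize_power": 0.5})
learner = Policy({"serve_public": 1.0, "seize_power": 0.5})

frozen_history = run(frozen, 1000, learning=False)
learner_history = run(learner, 1000, learning=True)

# The frozen policy decides identically at step 0 and step 1000;
# the learner starts out serving the public but drifts to power-seeking.
print(frozen_history[0], frozen_history[-1])
print(learner_history[0], learner_history[-1])
```

This is of course a caricature; the point it makes concrete is only that identical starting values plus continued updates under a skewed reward need not stay identical.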
That set of trustworthy people could have zero elements in it.