That percentage changes rather drastically through human history, and gods are supposed to be, if not eternal, then at least somewhat longer-lasting than religious fads.
Those numbers are an approximation to what I would consider the proper prior: the percentages of people throughout all of spacetime’s eternal block universe who have ever held those beliefs. Those percentages are fixed and arguably eternal, but alas, difficult to ascertain at this moment in time. We cannot know what people will believe in the future, but I would actually count the past beliefs of long-dead humans along with the present population if possible. Given the difficulties in surveying the dead, I note three things: due to population growth, a significant fraction of the humans who were ever alive are alive today; we would probably weight modern humans’ opinions more highly than our ancestors’; and people’s beliefs are to a significant degree influenced by their ancestors’ beliefs. Taking a snapshot of beliefs today is therefore not as bad an approximation as you might think. Again, this is about selecting a better-than-uniform prior.
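To make the approximation argument concrete, here is a minimal sketch of how a present-day snapshot relates to a population-weighted prior. All numbers and the weighting scheme are illustrative assumptions of mine, not data from the post:

```python
# Hedged sketch: blending historical and present-day belief shares into
# one prior. Every number below is an illustrative assumption.

def weighted_prior(p_past, p_present, frac_alive_today, recency_weight):
    """Blend past and present belief shares into a single prior.

    p_past           -- fraction of past (now dead) humans holding the belief
    p_present        -- fraction of currently living humans holding it
    frac_alive_today -- fraction of all humans ever born who are alive now
    recency_weight   -- how much more a modern opinion counts than an
                        ancestral one (the post suggests > 1)
    """
    w_present = frac_alive_today * recency_weight
    w_past = 1.0 - frac_alive_today
    return (w_past * p_past + w_present * p_present) / (w_past + w_present)

# Illustrative run: roughly 7% of humans ever born are alive today.
# Suppose a belief was held by 90% of past humans but 55% of living ones,
# and modern opinions are weighted 10x. Upweighting recency pulls the
# blended prior toward today's snapshot, compared with raw headcount
# (recency_weight = 1.0), which is the post's point.
print(weighted_prior(0.90, 0.55, 0.07, 10.0))
print(weighted_prior(0.90, 0.55, 0.07, 1.0))
```

The heavier the recency weighting, the closer the blended prior sits to today’s snapshot, which is why the snapshot is a workable stand-in.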
So… if—how did you put it? -- “a benevolent superintelligence already exists and dominates the universe” then you have nothing to worry about with respect to rogue AIs doing unfortunate things with paperclips, right?
The probability of this statement is high, but I don’t actually know for certain any more than a hypothetical superintelligence would. I am fairly confident that some kind of benevolent superintelligence would step in if a Paperclip Maximizer were to emerge, but I would prefer to avoid the potential collateral damage that the ensuing conflict might require. So if it is possible to prevent the emergence of the Paperclip Maximizer through something as simple as spreading this thought experiment, I am inclined to think it worth doing, and perhaps exactly what a benevolent superintelligence would want me to do.
For the same reason that the existence of God does not stop me from going to the doctor or being proactive about problems, this theorem should not be taken as an argument for inaction on the issue of A.I. existential risk. Even if God exists, it’s clear that said God allows a lot of rather horrific things to happen and does not seem particularly interested in suspending the laws of cause and effect for our mere convenience. If anything, the powers that be, whatever they are, seem to work behind the scenes as much as possible. It also appears that God prefers to be doubted, possibly because if we knew God existed, we’d suck up and become dependent and it would be much more difficult to ascertain people’s intentions from their actions or get them to grow into the people they potentially can be.
Also, how can you attack an entity that you’re not even sure exists? It is in many ways the plausible deniability of God that is the ultimate defensive measure. If God were to assume an undeniable physical form and visit us, there is a non-zero chance of an assassination attempt with nuclear weapons.
All things considered then, there is no guarantee that rogue Paperclip Maximizers won’t arise to provide humanity with yet another lesson in humility.