I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with eg log(x); or halt it at some final level?
The existence of a God provides an easy answer to all difficult questions. The more difficult a question is, the more likely such a rational agent is to dismiss the problem by saying, "God made it that way". Their science would thus be more likely than ours to be asymptotic (approaching a limit), which could act as a natural brake on their progress. This would of course also greatly reduce their efficiency for many purposes.
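To make the three possibilities in the question concrete, here is a minimal sketch in Python (the baseline growth curve is arbitrary, chosen purely for illustration) comparing a constant-factor slowdown, composition with log(x), and a hard asymptote:

```python
import math

def baseline(t):
    """Hypothetical unimpeded capability growth -- pure illustration."""
    return t ** 2

def constant_factor(t, c=0.5):
    """Slowed by a constant factor: same shape, scaled down."""
    return c * baseline(t)

def log_composed(t):
    """Composed with log(x): growth continues, but ever more slowly."""
    return math.log(1 + baseline(t))

def asymptotic(t, limit=100.0):
    """Halted at a final level: approaches `limit` and stays there."""
    b = baseline(t)
    return limit * b / (limit + b)

for t in [1, 10, 100, 1000]:
    print(f"t={t:>4}: baseline={baseline(t):>10.1f}  "
          f"const={constant_factor(t):>9.1f}  "
          f"log={log_composed(t):>6.2f}  "
          f"asym={asymptotic(t):>6.2f}")
```

Only the third curve has a ceiling: the first two keep improving forever, just more slowly, while the asymptotic one stalls at a final level no matter how much time passes. The "God made it that way" escape hatch would, on the argument above, produce the third shape.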
BTW, if you are proposing boxing AIs as your security, please at least plan on developing some plausible way of measuring the complexity level of the AIs and detecting indications that they suspect what is going on, and on automatically freezing the "simulation" (it's not really a simulation of AIs, it is AIs) when certain conditions are met. Boxing has lots of problems, which older posts have dealt with; but aside from all that, if you are bent on boxing, at least don't rely completely on human observation of what they are doing.
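What is being proposed here is just an automated tripwire. A minimal sketch, assuming a hypothetical boxing harness with measure_complexity, suspicion_score, snapshot, and freeze hooks (none of these exist anywhere; the stubs below only fake them so the loop runs):

```python
import random

class Box:
    """Stand-in for a real boxing harness; every method is a
    placeholder the actual setup would have to implement."""
    def __init__(self):
        self.running = True
    def measure_complexity(self):
        # e.g. parameter count, self-modification events, code growth
        return random.uniform(0, 2e9)
    def suspicion_score(self):
        # e.g. frequency of probes directed at the box boundary
        return random.uniform(0, 0.02)
    def snapshot(self):
        print("state preserved for offline analysis")
    def freeze(self):
        print("execution halted -- no human in the loop required")
        self.running = False

COMPLEXITY_LIMIT = 1e9   # thresholds are arbitrary illustrations
SUSPICION_LIMIT = 0.01

def watchdog(box):
    """Freeze automatically when either tripwire fires."""
    while box.running:
        if (box.measure_complexity() > COMPLEXITY_LIMIT
                or box.suspicion_score() > SUSPICION_LIMIT):
            box.snapshot()
            box.freeze()

watchdog(Box())
```

The point is only that the freeze decision lives in the loop, not in a human watching a console; the hard part, obviously, is making the two probe functions mean anything.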
People so frequently arrive at boxing as the solution for protecting themselves from AI that perhaps the LW community should think about better and worse ways of boxing, rather than simply dismissing it out of hand. Because it seems likely that somebody is going to try it.
I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with eg log(x); or halt it at some final level?
This is unclear, and I think it is premature to assume it slows development. True atheism wasn't a widely held view until the end of the 19th century, and is mainly a 20th-century phenomenon. Even its precursor, deism, didn't become popular amongst intellectuals until the 19th century.
If you look at individual famous scientists, the pattern is even less clear. Science and the church did not immediately split, and most early scientists were clergy, including notables popular with LW such as Bayes and Ockham. We may wonder if they were 'internal atheists', but this is only speculation (though it is true in at least some cases, as the first modern atheist work was of course written by a priest). Newton, for one, spent a huge amount of time studying the Bible, and his apocalyptic beliefs are now well popularized. I wonder how close his date of 2060 will end up being to the Singularity.
But anyway, there doesn’t seem to be a clear association between holding theistic beliefs and capacity for science—at least historically. You’d have to dig deep to show an effect, and it is likely to be quite small.
I think more immediate predictors of scientific success are traits such as curiosity and obsessive tendencies—having a God belief doesn’t prevent curiosity about how God’s ‘stuff’ works.