Except I don’t read the post as endorsing testing whether religion is good or bad for X, but rather saying that if a simulation showed we’d be better off with religious beliefs, we’d be better off adopting them. There are a number of reasons why this seems like a bad idea:
First, your simulation can only predict whether religion is better for you under the circumstances that you simulate. For instance, suppose you run a simulation of the entire planet Earth, but nothing much beyond it. The simulation shows better outcomes for you with religious faith. You then modify yourself to have religious faith. An extinction-level asteroid comes hurtling towards Earth. Previously, you would have tried to work out the best strategy to divert its course. Now you sit and pray that it goes off course instead.
Let’s say you simulate yourself for twenty years under various religious beliefs, and under atheism, and one of the simulations leads to a better outcome. You alter yourself to adopt this faith. You’ve now poisoned your ability to conduct similar tests in future. Perhaps a certain religion is better for the first year, or five years, or five thousand. Perhaps beyond that time it no longer is. Because you have now altered your beliefs to rest upon faith rather than upon testing, you can no longer update yourself out of this religious state, because you will no longer test to see whether it is optimal.
While true, I doubt the first effect would be significant. You’re not very likely to be the one responsible for saving the Earth and, singularity aside, terrestrial effects are likely to be far more important to you.
Contrariwise, if you were capable of running a simulation, the odds of your input being relevant to existential risk are much higher. You might be running the simulation to help other people decide whether or not to be religious, or whether to persuade others to be religious, but then it becomes a lot more likely that the combined reduction in epistemic rationality would become an existential issue.