Speaking solely for myself, I’ve found that my spiritual / religious side helps me to set goals and to communicate with my intuitions. Rationality is simply a tool for implementing those goals, and processing/evaluating that intuitive data.
I’ve honestly found the hostility towards “spirituality writ large” here rather confusing, as the majority of the arguments seem to focus on a fairly narrow subset of religious beliefs, primarily Christian. I tend to write it off as a rather understandable bias caused by generalizing from “mainstream Christianity”, though, so it doesn’t really bother me. When people present actual arguments, I do try and listen in case I’ve missed something.
Or, put another way: Rationality is for falsifiable aspects of my life, and spirituality is for the non-falsifiable aspects of my life. I can’t have “incorrect” goals or emotions, but I can certainly fail to handle them effectively.
If ‘spirituality’ helps you to handle these things effectively, that is empirically testable. It is not part of the ‘non-falsifiable’ stuff. In fact, whatever you find useful about ‘spirituality’ is necessarily empirical in nature and thus subject to the same rules as everything else.
Most of the distaste for ‘spirituality’ here comes from a lack of belief in spirits, for which good arguments can be provided if you don’t have one handy. If your ‘spirituality’ has nothing to do with spirits, it should probably be called something else.
Hmmmmm, I’d never considered the idea of trying to falsify my goals and emotions before. Now that the idea has been presented, I’m seeing how I can further integrate my magical and rational thinking, and move to a significantly more effective and rational standpoint.
Thank you!
Glad to be of help :)
There are stats on the effects of religion on a population that practices said religion. This should give some indication of the usefulness of any spirituality.
You can have goals that presuppose false beliefs. If I want to get to Heaven, and in fact there is no such place, my goal of getting to Heaven at least closely resembles an “incorrect goal”.
This raises an interesting question—if a Friendly AI or altruistic human wants to help me, and I want to go to Heaven, and the helper does not believe in Heaven, what should it do? So far as I can tell, it should help me get what I would want if I had what the helper considers to be true beliefs.
In a more mundane context, if I want to go north to get groceries, and the only grocery store is to the south, you aren’t helping me by driving me north. If getting groceries is a concern that overrides others, and you can’t communicate with me, you should drive me south to the grocery store even if I claim to want to go north. (If we can exchange evidence about the location of the grocery store, or if I value having true knowledge of what you find if you drive north, things are more complicated, but let’s assume for the purposes of argument that neither of those hold.)
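Here is a minimal sketch of that decision rule in Python, purely illustrative (the choose_action helper, the action names, and the toy world model are all invented for this example): the helper acts on the rider’s underlying goal as evaluated under the helper’s own beliefs, rather than taking the stated plan at face value.

```python
# Illustrative sketch only: a "helper" picks the action it believes best
# serves the person's underlying goal, rather than the person's stated plan.
# All names and the toy world model below are invented for this example.

def choose_action(underlying_goal, stated_plan, helper_beliefs, actions):
    """Return the action the helper believes achieves the underlying goal;
    fall back to the stated plan if the helper thinks nothing works."""
    for action in actions:
        if helper_beliefs.get(action) == underlying_goal:
            return action
    return stated_plan

# The rider says "drive north", but their underlying goal is groceries,
# and the helper believes the only grocery store is to the south.
helper_beliefs = {"drive north": "empty road", "drive south": "groceries"}
print(choose_action(
    underlying_goal="groceries",
    stated_plan="drive north",
    helper_beliefs=helper_beliefs,
    actions=["drive north", "drive south"],
))  # -> drive south
```

The caveats in the parenthetical above (exchanging evidence, or valuing knowledge of what actually lies to the north) are exactly the parts this toy version leaves out.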
This leads to the practical experiment of asking religious people what they would do differently if their God spoke to them and said “I quit. From now on, the materialists are right, your mind is in your brain, there is no soul, no afterlife, no reincarnation, no heaven, and no hell. If your brain is destroyed before you can copy the information out, you’re gone.” If a religious person says they’d do something ridiculous if God quit, we have a problem when implementing an FAI, since the FAI would either believe in Heaven or be inclined to help religious people do something ridiculous.
So far, I’ve had one Jehovah’s Witness say he couldn’t imagine God quitting. Everyone else said they wouldn’t do much differently if God quit.
If you do this experiment, please report back.
It would be a problem if many religious people apparently would want to commit suicide if their God quit: the FAI convinces itself that there is no God, and then helpfully goes and kills them.
Erm, that’s supposing the religious person would actually want to commit suicide or do the ridiculous thing, rather than the claim itself being an expression of belief in, affirmation of, and argument for the religion (i.e., an appeal to consequences, or saying negative things about the negation).
The most reasonable interpretation I can find for your statement is that you’re responding to this:
If a religious person says they’d do something ridiculous if God quit, we have a problem when implementing an FAI, since the FAI would either believe in Heaven or be inclined to help religious people do something ridiculous.
I agree: the goal would be to figure out what they would want if their beliefs were revised, and revising their circumstances so that God puts Himself out of the picture isn’t quite the same as that.
The experiment also has other weaknesses:
eBay bidding shows that many people can’t correctly answer hypothetical questions. Perhaps people will accidentally give false information when I ask.
The question is obviously connected with a project related to atheism. Perhaps some religious people will give false answers deliberately because they don’t want projects related to atheism to succeed.
The relevant question is what the FAI thinks they would want if there were no God, not what they think they would want. A decent FAI would be able to do evolutionary psychology, which many people can’t, especially religious people who don’t think evolution happened.
It’s not a real experiment. I’m not systematically finding these people; I’m just occasionally asking religious people what they think. There could easily be a selection effect since I’m not asking this question of random religious people.
We are at high risk of arguing about words, and I don’t wish to do that.
Describe specifically what you do when you’re using your spiritual side. Assign it a label other than “spirituality” or “religious”. Then I can give you my opinion. As stated, your comment is noise.
You can have incorrect subgoals in that they fail to help you achieve the goals towards which they are supposed to aim.
According to one popular view, you can have incorrect emotions—and this is important, as our emotions have a great deal to do with our ability to be rational. To quote:
Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” Beware lest you become attached to beliefs you may not want.
This comment was also quite helpful :)
I can’t have “incorrect” goals or emotions, but I can certainly fail to handle them effectively.
Maybe you disagree, but from what I’ve seen, a large subset of the LW population thinks that both goals and emotions can and should be modified if they are sub-optimal.
I can see handoflixue’s logic, and your appeal to popularity does not defeat it. It makes LW seem to be irrational. To directly answer the logic, remind handoflixue that goals form a hierarchy of goals and subgoals, and a subgoal can be incorrect relative to a goal. Similarly, emotions can be subservient to goals. For example, anger can serve the goal of self-protection. A specific feeling of anger can then be judged as correct or incorrect depending on whether it serves this goal.
Finally, all of our conscious goals can be judged from the standpoint of natural selection. And conversely, a person may judge natural selection from the point of view of his conscious goals.
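A small sketch of the goal/subgoal hierarchy idea, again purely illustrative (the Goal class and the anger/self-protection examples are invented for this comment): a subgoal counts as “incorrect” when pursuing it does not actually serve its parent goal.

```python
# Illustrative sketch only: goals form a tree, and a subgoal is judged
# "incorrect" relative to its parent if it fails to advance that parent.
# The Goal class and the example goals are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    advances_parent: bool = True   # does pursuing this actually help the parent goal?
    subgoals: List["Goal"] = field(default_factory=list)

def incorrect_subgoals(goal: Goal) -> List[str]:
    """Walk the tree and report subgoals that fail to serve their parent."""
    bad = []
    for sub in goal.subgoals:
        if not sub.advances_parent:
            bad.append(f"{goal.name} -> {sub.name}")
        bad.extend(incorrect_subgoals(sub))
    return bad

self_protection = Goal("self-protection", subgoals=[
    Goal("anger at a real threat", advances_parent=True),
    Goal("anger at a harmless remark", advances_parent=False),
])
print(incorrect_subgoals(self_protection))
# -> ['self-protection -> anger at a harmless remark']
```

A specific feeling of anger gets evaluated the same way: not as “true” or “false” in itself, but by whether it serves the goal it is nominally in service of.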
To directly answer the logic, remind handoflixue that goals form a hierarchy of goals and subgoals, and a subgoal can be incorrect relative to a goal.
That...seems true. I guess I’ve never divided my goals into a hierarchy, and I often find my emotions annoying and un-useful. I think my comment holds more true for emotions than for goals, anyway. I’ll have to think about this for a while. It’s true that although I have tried to modify my top-level goals in the past, I don’t necessarily do it because of rationality.