But in most contexts it seems meaningless to make such a distinction. It would amount to hairsplitting, because the distinction is too fine to have practical consequences.
If the distinction between what’s out there and what your beliefs are is too fine for a person, that person can be put to better use than talking about God, because talking about God is above their pay grade.
Atheists don’t get to appropriate people who disagree with them. That will just annoy people and end up being counterproductive.
Your resources are limited.
Perhaps my resources are less limited than yours, in the sense that I am perfectly happy to listen to anyone who has something interesting to say, whether or not they fly a political banner over their beliefs that you are happy with. I like history in general, and I have a lot of respect for many religious thinkers, and for thinkers who were motivated by religious questions. At one point the vast majority of the world’s smart people were affiliated with a religion in some way.
If the distinction between what’s out there and what your beliefs are is too fine for a person, that person can be put to better use than talking about God, because talking about God is above their pay grade.
Let’s say my set of beliefs is exactly the same as yours, except that I also believe in an alien named Bob, who exists outside of the observable universe. Then my set of beliefs is too “fine”, in the sense that it makes unnecessarily detailed assumptions about what’s out there. I am not able to verify such assumptions in any meaningful way.
Perhaps my resources are less limited than yours, in the sense that I am perfectly happy to listen to anyone who has something interesting to say,...
I should probably have chosen websites instead of people. If you want to learn “what’s out there” by browsing webpages, then you need to adopt some sort of heuristic that surfaces the most promising results, simply because you would never be able to read all webpages: new pages are likely created at a faster pace than you can read them.
This means that you can’t afford to muse that someone who seems crazy might actually have it all figured out. Talking to the crazy guy would be a last resort, for when nothing else has worked.
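A minimal sketch of the kind of budget-limited filter I mean, assuming an invented scoring heuristic and made-up pages rather than any real API:

```python
# Toy illustration of reading under a budget: score every candidate
# page with a cheap heuristic and read only the top few. The heuristic
# and the pages are invented stand-ins, not a real service.

def score_page(page: dict) -> float:
    """Crude promise heuristic: overlap between the summary and our interests."""
    interests = {"history", "science", "philosophy"}
    words = set(page["summary"].lower().split())
    return len(words & interests) / (len(words) or 1)

def pick_reading_list(pages: list[dict], budget: int) -> list[dict]:
    """Keep only the `budget` most promising pages; the rest go unread."""
    return sorted(pages, key=score_page, reverse=True)[:budget]

pages = [
    {"url": "a", "summary": "a history of science and philosophy"},
    {"url": "b", "summary": "cat pictures all day long"},
    {"url": "c", "summary": "one crazy guy explains everything"},
]
print([p["url"] for p in pick_reading_list(pages, budget=2)])
```

Note that under any such heuristic, the crazy guy’s page scores low and simply never makes the reading list, regardless of whether he has it all figured out.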
I am calling for tolerance of anyone who agrees with empiricism as a method for getting things done. That is, say there is a set of people:
Daniel, David, Thomas, Will, Albert, John.
Daniel and David are atheists. Daniel is a hardcore reductionist; David thinks there is a hard problem of consciousness to explain, and so retreats to a version of dualism.
Thomas is agnostic. He is not sure if God or gods exist or not, nor is he willing to take a stance on this issue. He’s happy with the scientific approach to exploring the unknown.
Albert, Will, and John are theists. Albert thinks there is a creator God, but one who left the universe completely alone to run on natural laws. Will thinks there is a God or gods, and moreover that they interact with the universe, but not in a way that empiricist methods can catch (for whatever reason, perhaps caprice or some purpose). John believes in God, and his religious beliefs lead him to believe that we should not vaccinate people against diseases at a young age. He also does not believe in evolution.
The only person I have a problem with in this set is John. As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly, so to speak, I am not sure it is useful or polite to insist on anything else.
In other words, if you want to call the anti-vaccine people out for being idiots, great! That’s useful. If you want to push the frontiers of science forward, great! That’s useful. If you want to argue with agnostics or theists of the Albert or Will variety, well, I think you need a better hobby.
If you like, you can justify this call for tolerance as a call for “maintaining the fidelity of the posterior distribution.”
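One way to unpack that phrase with a toy model (the numbers and the Bob proposition are invented here for illustration): two agents who agree on all testable physics but disagree about an unobservable alien Bob assign the same probability to every observation, so no experiment can ever separate them.

```python
# Toy Bayesian model. World state = (bridge holds, Bob exists).
# The likelihood of any observation depends only on the bridge,
# because Bob sits outside the observable universe by stipulation.

P_BRIDGE = 0.99                               # shared, testable belief
P_SEE_STANDING = {True: 0.999, False: 0.01}   # P(observe standing | bridge state)

def p_observe_standing(p_bob: float) -> float:
    """Marginal probability of observing the bridge standing."""
    total = 0.0
    for bridge in (True, False):
        p_br = P_BRIDGE if bridge else 1 - P_BRIDGE
        for bob in (True, False):
            p_bo = p_bob if bob else 1 - p_bob
            total += P_SEE_STANDING[bridge] * p_br * p_bo  # Bob drops out of the sum
    return total

print(p_observe_standing(p_bob=0.0))  # 0.98911 — agent who rejects Bob
print(p_observe_standing(p_bob=0.9))  # 0.98911 — agent who believes in Bob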
As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly, so to speak, I am not sure it is useful or polite to insist on anything else.
Well, what do we do about values, then? Specifically, about societal norms which are codified and enforced as laws?
Where would the typical MIRI donor fit in here?
As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly, so to speak, I am not sure it is useful or polite to insist on anything else.
MIRI’s mission to build an FAI is a good way to think about this. Given a singleton, an all-powerful machine dictator, would you want it to be like any of the people you described? If some of those people would make better leaders than others, then why wouldn’t you, to a lesser extent, insist on them becoming more like someone whom you would readily empower to rule you?
Personally, I wouldn’t feel comfortable entrusting any of the people you describe with unlimited power. Neither would I trust any MIRI staff, or myself. All seem flawed in more or less subtle ways.
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly. Yet what makes LessWrong partly awful is that all logical consequences are taken seriously. I do insist on somehow discounting these consequences, because it is unworkable, and dangerously distracting, to worry about such possibilities as e.g. a simulation shutdown. In other words, I wouldn’t trust an FAI that would give money to a Pascalian mugger, or even one that took basilisks seriously.
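To make one possible discounting scheme concrete (a sketch invented here for illustration, not anyone’s actual proposal): cap how much any single outcome can count for, and the mugger’s astronomical payoff stops dominating the calculation.

```python
# Toy Pascalian mugger. A naive expected-utility calculation pays up;
# capping the utility of any single outcome (one crude discounting
# scheme, assumed here for illustration) refuses. All numbers are made up.

UTILITY_CAP = 1e6   # assumption: no single outcome counts for more than this

def naive_ev(p: float, payoff: float, cost: float) -> float:
    return p * payoff - cost

def capped_ev(p: float, payoff: float, cost: float) -> float:
    return p * min(payoff, UTILITY_CAP) - cost

p, payoff, cost = 1e-30, 1e40, 5.0     # tiny probability, absurd payoff
print(naive_ev(p, payoff, cost) > 0)   # True: the naive agent pays the mugger
print(capped_ev(p, payoff, cost) > 0)  # False: the capped agent walks away
```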
I think you are going off on a tangent. We are talking about beliefs, not values. I think we can all generally agree on a reasonable set of things we think are bad, and we should insist that people respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly.
Sorry, but no. In order for acausal trade, basilisks, etc. to logically follow from the “reasonable set of things describing modern empirical science + math”, it would have to be the case that any model (in the model-theoretic sense, that is, a universe we construct) consistent with the latter also contains the former. That just isn’t so.
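Spelled out in standard model-theoretic notation (the formalization is added here, but the definitions are textbook):

```latex
% Logical consequence vs. mere consistency (standard definitions).
\Gamma \models \varphi \iff \text{every model of } \Gamma \text{ satisfies } \varphi
\Gamma \cup \{\varphi\} \text{ is consistent} \iff \text{some model of } \Gamma \text{ satisfies } \varphi
```

So for basilisks to be consistent with science + math, it is enough that some constructible universe contains them; for them to logically follow, every such universe would have to.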
We should take seriously all the logical consequences we can compute from the things we know. The entire trouble with basilisks et al. is precisely that they don’t logically follow, but are taken seriously anyway. Concentrating on only one untestable possibility out of a great many is precisely what my call for tolerance of views on untestable things is meant to combat. A culture that agrees only on what we can test, and lets your mind wander about other matters, will be resistant to things like basilisks, simply because most members of such a culture will believe something else and give you other convincing possibilities (and you will be unable to choose, since they are all untestable anyway).
I think we can all generally agree on a reasonable set of things we think are bad, and we should insist that people respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
[...]
We should take seriously all the logical consequences we can compute from the things we know. The entire trouble with basilisks et al. is precisely that they don’t logically follow, but are taken seriously anyway.
I am not sure I understand you here. Should we shun people who believe that the most probable model consistent with “a reasonable set of things describing modern empirical science + math” contains basilisks etc.? Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What reasonable ethical system do you have in mind which could prevent people from taking dangerous actions if they believe Pascal’s mugging, or basilisks, to be a logical consequence that is to be taken seriously?
A culture that agrees only on what we can test, and lets your mind wander about other matters, will be resistant to things like basilisks, simply because most members of such a culture will believe something else and give you other convincing possibilities (and you will be unable to choose, since they are all untestable anyway).
Suppose there exists a highly effective model which contains basilisks, but which is consistent with “a reasonable set of things describing modern empirical science + math”. What if this diverse culture were threatened by the propagation of this model?
Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What if this diverse culture were threatened by the propagation of this model?
“Consistent” is a much lower bar to meet than “logically must follow.” Jehovah and your green alien Bob are also consistent. Sensible religions are generally consistent.
I call for the spread of a culture of tolerance rather than a culture of religious war. History shows that a culture of tolerance will serve your goals better here. You can always find a bogeyman as an excuse to knock heads, be it Scientology, Wahhabi Islam, Communism, or whatever. But will that help you?
I think we can all generally agree on a reasonable set of things we think are bad, and we should insist that people respect those things.
We can? That certainly doesn’t seem to be so.
Also, can you step back a hundred years or so and repeat that? :-)