Given that you have 2000 karma, I don’t think you are truly an outsider.
The Jusos are the youth organisation of Germany’s SPD, the centre-left party that’s currently part of the government.
At one Jusos meeting I attended we sang the Internationale. It’s a ritual. It’s useful for group bonding. That doesn’t make it religious.
Could you be more specific? What behaviors are you talking about?
You know, Israel defines a Jew to be someone who considers themselves a Jew.
I ignore karma. I am not convinced by the idea of rationality or consistent beliefs. I am not a Bayesian with a capital B. I don’t subscribe to the Everett interpretation of quantum mechanics. I am not an atheist. I don’t believe UFAI is an issue worth spending a lot of resources on at the moment. I attended a total of 1 LW meetup (in Boston—I think Scott Aaronson and Michael Vassar were there).
I do think LW is a good and valuable community, and I think there are many very useful concepts in circulation here (for example tabooing and steelmanning—these are useful enough to have been reinvented elsewhere), which is why I participate here. Also some folks connected to LW think about and write about interesting things.
Things like point 1 in this post:
http://lesswrong.com/lw/jne/a_fervent_defense_of_frequentist_statistics/ajwa
A guru speaks from a position of authority; a scientist communicates/argues with peers. I think the modern academic approach has a lot fewer failure modes than the guru approach (which has been tried extensively in our species’ past).
edit: To clarify my thinking a bit. A “guru” is a kind of memetic feudal lord. I am suspicious of attempts to revert to feudalism; our species does it far too easily. I think we can do better than feudal forms.
Being an outsider is more than just not subscribing to the label of belonging to an ingroup.
I don’t think disagreeing on things like consistent beliefs makes you an outsider. I think the most recent long post on the topic even argued against having consistent beliefs.
I don’t think what makes a true citizen of LessWrong is that a person treats Eliezer as his guru and simply copies his beliefs.
If you decide that this community is good, find participation valuable, and think for yourself, that makes you a perfect member.
Then what are you? And if you already like religion, why do you see a problem with the same pattern appearing in LW?
I don’t think it’s useful to pretend that everyone understands what you mean by a concept. It can seem authoritative to say that the person you are talking to just doesn’t understand what you mean, but it often directly addresses the core issue of a disagreement.
Communication is also not something where you have to pick one style for all your communication needs. One day you can be more intellectual, and another day you can use simpler language.
Would you mind elaborating on this?
Absolutely—I think things like (a)theism, and things like interpretations of QM, are “questions of taste.” I think it is a waste of time to argue about taste. I also think that tolerance of diverse tastes that agree on all empirical predictions (and agree that empirical prediction is how we go about evaluating things) has advantages.
Thanks. Outside of communities that entertain ideas such as acausal trade and ancestor simulations, I mostly interpret “atheism” to be an imprecise but useful term to communicate the beliefs that (a) any given religion has a negligible probability of being true, and that (b) empirical prediction is how we should go about evaluating things.
Typically, atheism is distinguished from agnosticism, and what you’re describing is on the agnosticism side of the spectrum.
Atheism is commonly interpreted as “I know there are no gods”.
Such a distinction is technically correct and appropriate within communities such as this one. But under most circumstances it amounts to the kind of hairsplitting that the average person does not understand. Yes, it is possible that gods exist, or that Catholicism is true. But these possibilities are unlikely enough, or practically irrelevant enough, that most of the time it is appropriate to communicate “I know there are no gods”.
Even here I find it very strange if someone argues that he is not an atheist based on hairsplitting arguments such as that 0 is not a probability or that we might live in a simulation.
Of course I agree that technically atheism is as irrational as believing that Jehovah exists with probability 1.
Dude, the difference between “meaningless question” and “no Gods” is not hairsplitting; it’s epistemology vs. ontology. Do you really not see the difference?
I am about as interested in what a young earther thinks about God as what Aristotle thinks about acceleration. It is bad hygiene to throw out a concept because someone screwed it up badly.
Philosophically it is not hairsplitting. In other words, if you are a philosopher, then in the context of doing philosophy, it is of practical importance to make this distinction. But in most contexts it seems meaningless to make such a distinction. In most contexts it would amount to hairsplitting, because it would make a distinction that’s too fine to have practical consequences.
Your resources are limited. You have to constantly choose who you are listening to, and who you should ignore. It is possible that given certain goals (e.g. studying religion or psychology), it would make sense to listen to a young earth creationist.
One of the worst habits LessWrong exhibits is taking ideas too seriously. Any agent whose resources are limited is forced to use crude heuristics to filter out nonsense (such as basilisks).
If the distinction between what’s out there and what your beliefs are is too fine for a person, that person can be put to better use than talking about God, because talking about God is above their pay grade.
Atheists don’t get to appropriate people who disagree with them. It will just annoy people, and end up being counterproductive.
Perhaps my resources are less limited than yours, in the sense that I am perfectly happy to listen to anyone who has something interesting to say, whether or not they put up a political banner over their beliefs that you are happy with. I like history in general, and I have a lot of respect for many religious thinkers, or thinkers who were motivated by religious questions. At one point the vast majority of the world’s smart people were affiliated with a religion in some way.
Let’s say my set of beliefs is exactly the same as yours, except that I also believe in an alien named Bob, who exists outside of the observable universe. Then my set of beliefs is too “fine”, in the sense that it makes unnecessarily detailed assumptions about what’s out there. I am not able to verify such assumptions in any meaningful way.
I should probably have chosen websites instead of people. If you want to learn “what’s out there” by browsing webpages, then you need to adopt some sort of heuristic that filters for the most promising results, simply because you would never be able to read all webpages: they are likely created at a faster pace than you have the resources to read them.
This means that you can’t afford to muse that someone who seems crazy might actually have it all figured out. Talking to the crazy guy would be the last resort, when nothing else worked.
I am calling for tolerance of anyone who agrees with empiricism as a method for getting things done. That is, say there is a set of people:
Daniel, David, Thomas, Will, Albert, John.
Daniel and David are atheists. Daniel is a hardcore reductionist; David thinks there is a hard problem of consciousness to explain, and so retreats to a version of dualism.
Thomas is agnostic. He is not sure if God or gods exist or not, nor is he willing to take a stance on this issue. He’s happy with the scientific approach to exploring the unknown.
Albert, Will and John are theists. Albert thinks there is a creator God, but one who left the universe completely alone to run on natural laws. Will thinks there is a God or gods, and moreover that they interact with the universe, but not in a way that empiricist methods can catch (for whatever reason—perhaps caprice or some purpose). John believes in God, and his religious beliefs cause him to believe that we should not vaccinate people against diseases at a young age; furthermore, he does not believe in evolution.
The only person I have a problem with in this set is John. As long as we all agree on all logical consequences of a reasonable set of beliefs that make bridges stay up and planes fly, so to speak, I am not sure it is useful or polite to insist on anything else.
In other words, if you want to call the anti-vaccine people out for being idiots, great! That’s useful. If you want to push the frontiers of science forward, great! That’s useful. If you want to argue with agnostics or theists of the Albert or Will variety, well, I think you need a better hobby.
If you like, you can justify this call for tolerance as a call for “maintaining the fidelity of the posterior distribution.”
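(A rough way to cash out that phrase, purely as a sketch with hypothetical hypotheses H1 and H2: if the two agree on every empirical prediction, then for any observation E,

\[ P(E \mid H_1) = P(E \mid H_2), \]

and Bayes’ theorem then pins the ratio of their posteriors to the ratio of their priors:

\[ \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)\,P(H_1)}{P(E \mid H_2)\,P(H_2)} = \frac{P(H_1)}{P(H_2)}. \]

No evidence can ever move that ratio, so forcing everyone to collapse onto one of the two hypotheses pretends to information the data cannot supply, while tolerating both keeps the posterior honest.)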
Well, what do we do about values, then? Specifically, about social norms which are codified and enforced as laws?
Where would the typical MIRI donor fit in here?
MIRI’s mission to build an FAI is a good way to think about this. Given a singleton, an all-powerful machine dictator, would you want it to be like any of the people you described? If some of those people would be better leaders than others, then why wouldn’t you, to a lesser extent, insist on them becoming more like someone whom you would readily empower to rule you?
Personally, I wouldn’t feel comfortable entrusting any of the people you describe with unlimited power. Neither would I trust any MIRI staff, or myself. All seem flawed in more or less subtle ways.
Regarding logical consequences, concepts such as acausal trade might very well be logical consequences of a reasonable set of beliefs that make bridges stay up, and planes fly. Yet what makes LessWrong partly awful is that all logical consequences are taken seriously. I do insist on somehow discounting these consequences, because it is unworkable, and dangerously distracting, to worry about such possibilities as e.g. a simulation shutdown. In other words, I wouldn’t trust an FAI that would give money to a Pascalian mugger, or even one which took basilisks seriously.
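(To make the Pascalian-mugger worry concrete, here is a toy calculation with entirely made-up numbers: suppose handing over $5 costs the listener 5 utilons, the mugger claims the payment averts a loss of 10^100 utilons, and the listener assigns that claim a generously tiny probability of 10^-50. A naive expected-utility comparison still says to pay:

\[ \underbrace{10^{-50} \times 10^{100}}_{\text{expected loss averted by paying}} = 10^{50} \;\gg\; \underbrace{5}_{\text{cost of paying}}. \]

Blocking this requires some discounting move, e.g. bounding utilities or penalizing the probability in proportion to the size of the claimed stakes, which is the sort of discounting being insisted on above.)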
I think you are going off on a tangent. We are talking about beliefs, not values. I think we can all generally agree on a reasonable set of things we all think are bad, and we should insist people agree to respect those things. But why should we shun Will or Albert if they have a reasonable ethical system?
Sorry, but no. In order for acausal trade, basilisks, etc. to logically follow from the “reasonable set of things describing modern empirical science + math”, it would have to be the case that any model (in the model-theoretic sense, that is, a universe we construct) consistent with the latter also contains the former. That just isn’t so.
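(In symbols, as a rough gloss: write T for that reasonable set of beliefs and φ for a claim such as acausal trade. “Logically follows” is the strong relation

\[ T \models \varphi \quad\Longleftrightarrow\quad \text{every model of } T \text{ also satisfies } \varphi, \]

whereas φ merely being consistent with T only demands that

\[ T \cup \{\varphi\} \text{ has at least one model.} \]

Since one can construct models of present-day physics and mathematics that contain no basilisks or acausal traders, the strong relation fails even where the weak one holds.)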
We should take seriously all the logical consequences we can compute from things we know. The entire trouble with basilisks et al. is precisely that they don’t logically follow, but are taken seriously anyway. Concentrating on only one untestable possibility out of a great many is precisely what my call for tolerance of views on untestable things is meant to combat. A culture that agrees only on what we can test, and lets your mind wander about other matters, will be resistant to things like basilisks, simply because most members of such a culture will believe something else and give you other convincing possibilities (and you will be unable to choose, since they are all untestable anyway).
We can? That certainly doesn’t seem to be so.
Also, can you step back a hundred years or so and repeat that? :-)
[...]
I am not sure I understand you here. Should we shun people who believe that the most probable model consistent with “a reasonable set of things describing modern empirical science + math” contains basilisks etc.? Or should we respect them, and be content with the possibility that their worldview might spread, and eventually dominate a certain influential subset of humanity?
What reasonable ethical system do you have in mind which could prevent people from taking dangerous actions if they believe Pascal’s mugging, or basilisks, to be a logical consequence that is to be taken seriously?
Suppose there exists a highly effective model, which contains basilisks, but which is consistent with “a reasonable set of things describing modern empirical science + math”. What if this diverse culture was threatened by the propagation of this model?
“Consistent” is a much lower bar to meet than “logically must follow.” Jehovah and your green alien Bob are also consistent. Sensible religions are generally consistent.
I call for the spread of the culture of tolerance rather than the culture of religious war. History shows that the culture of tolerance will serve your goals better here. You can always find a bogeyman as an excuse to knock heads—be it Scientology, Wahhabi Islam, Communism or whatever. But will that help you?
A fair point.
But do note that this subthread is about you asking IlyaShpitser to elaborate on what his “I’m not an atheist” means, and within this context the distinction might be relevant.
Yes, and I gave an explanation (without being asked) of why I asked him to elaborate on it in the first place. My guess was that he simply made this technically correct distinction, but I wanted to check whether he instead means that he is a theist. Most of the time, when people say that they do not subscribe to atheism, as opposed to saying that they are agnostics, they mean that they hold certain irrational beliefs.
If I said I was a theist, would I be run out of town? I already said I wasn’t sure about this whole rationality business.
I am not sure of “this whole rationality business” either. But I don’t know what you mean by it. You listed a bunch of points you disagree with. But there are a lot of ways to disagree with all of these points. Some of those possible “disagreements”, such as “but Jehovah is the one true god”, are rather weird.
You are obviously a really smart fellow. It would have been fascinating to learn that you are a theist. That’s all.
I think I prefer Will Newsome’s world to Eliezer Yudkowsky’s world. But this is about my preferences, not about ontology.
I have never been clear what Will Newsome’s world is. Is he writing about it more fully somewhere else? But my almost invariable experience is that things of which I hear tantalising hints turn out, when they turn out to be anything, to be merely interesting-if-true, along with alien abductions, the Loch Ness Monster, and interpretations of quantum mechanics.
Eliezer’s world is as clear and inviting as a summer day in comparison (although I would not extend that to what all of his admirers make of it). ETA: I’m leaving out his views on fooming AI, which I don’t take an interest in even though it’s his entire motivation for creating LessWrong, and MWI, which I don’t consider myself qualified to have opinions about. I’m not signed up for cryonics either.
Me neither man. There are, like, these gods, right? Or one god-like thingy at least. And also I’m supposed to help some humans build their own new god somehow? Except I don’t really know how the already-present gods feel about that, and at any rate the humans are all kinda crazy and bizarrely terrible at moral philosophy, I guess because whatever process made them apparently wasn’t thinking very far ahead, so instead the humans just sit there metabolizing and ineffectually signaling at each other until they die. It is occasionally beautiful.
Will Newsome’s is a demon-haunted world. But I think he’s still around, and might pipe up himself.
Perhaps a better-known person than Will, and one who wrote more, would be Philip K. Dick. Philip K. Dick saw “something” once (perhaps due to temporal lobe epilepsy), and spent the rest of his life trying to come to terms with what he saw. His writing is not very clear at all, but that is because he is tackling a very difficult problem.
I’d think the non-cuddly theism of the Will Newsome or Philip K. Dick sort would be sort of like paranoid schizophrenia, but without the consoling part that it’s all just misfirings in your brain and not actually out there. Not quite sure you’d want to live there, though it might certainly be occasionally more interesting than staid materialism. Muflax used to have a post about something that sounds like that, but it got disappeared.
I got a backup here. Screenshot here.
Are there any serious cuddly theists? “He is not a tame lion.”—C.S. Lewis. (I don’t like C.S. Lewis).
Pretty much anyone who at some point goes “and therefore it must obviously be that God is benevolent” sounds like a candidate. My vague impression is that a bunch of religious philosophers like Bishop Berkeley and Descartes had arguments you could caricature as “reality might actually be really messed up, so it’s a good thing God has to be benevolent then and see that thing stay fixed up”. Usually only the “reality might be really messed up” part is what stays in the philosophical canon.
Also there’s Raymond Smullyan’s Who Knows?, which I read and liked some years ago.
Perhaps so, but it’s not unpleasant, not for that reason anyway.
I’ve read a fair amount of Dick, and while the fiction may be entertaining, I can’t take the “something” as anything more significant than the crud you get on your screen if your graphics card goes wrong. It may be very entertaining crud, it may even inspire great art, but in itself it’s of no significance.
I find this view somewhat unempathetic: “this impacted tooth pain is not very significant, it is just a cluster of neurons firing here and also here.” What he saw was significant to him.
A few days ago, for the second time in my life, I had a nested dream: I dreamed that I was dreaming, that I woke up within a dream. Interestingly, the dream within the dream was, from the perspective of this level of reality, completely sane, while the world I woke up to within the dream was very different. I dreamed that I dreamed that our neighbours removed some bushes from their garden, which they didn’t do on this level; everything else was seemingly exactly like it is here. But the world I dreamed of waking up to was weird (which I was not aware of in there): there was a foggy harbor next to our house, and a big ship was passing through it, whereas on this level, and on the nested level, the sea is far away.
Is this experience significant? Well, it could mean that there are many levels of reality, this just being another one I will wake up from sooner or later. It’s possible. But I just don’t see how it could be reasonable to take this into account when trying to figure out what is out there, as long as more sensible approaches have not been ruled out. Here “sensible” stands for concrete, specific, lawful, empirical activities that can be falsified in an intersubjective (objective) manner.
Oh yes, it was very significant to him. Jill Bolte Taylor’s stroke was significant to her. Aldous Huxley’s drug experiences were significant to him. John C. Wright’s heart attack was significant to him.
But none of these are significant to me, and the tales they tell are told by compromised witnesses. If brain damage is the entry price for a glimpse of the interesting-if-true things they saw, I’ll pass.
No. We’ve had open theists hang around in the past.