It may just be me, but why do you need to find someone to follow?
I have always found that forging my own path through the wilderness is far more enjoyable and yields far greater rewards than following a path, no matter how small or large that path may be.
Well, one reason why I feel that I need someone to follow is… severe underconfidence in my ability to make decisions on my own. I’m still working on that. Choosing a person to follow, and then following them, feels a whole lot easier than forging my own path.
I should mention again that I’m not actually “following” Eliezer in the traditional sense. I used his value system to bootstrap my own value system, greatly simplifying the process of recovering from christianity. But now that I’ve mostly finished with that (or maybe I’m still far from finished?), I am, in fact, starting to think independently. It’s taking a long time for me to do this, but I am constantly looking for things that I’m doing or believing just because someone else told me to, and then reconsidering whether these things are a good idea, according to my current values and beliefs. And yes, there are some things I disagree with Eliezer about (the “true ending” to TWC, for example), and things that I disagree with SIAI about (“we’re the only place worth donating to”, for example). I’ll probably start writing more about this, now that I’m starting to get over my irrational fear of posting comments here.
Though part of me is still worried about making SIAI look bad. And I’m still worried that the stuff I’ve already posted may end up harming SIAI’s mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn’t actually something I was told by anyone from SIAI; I think it was a bad habit I brought with me from christianity.
And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts.
If by ‘dark arts’ you mean ‘non-rational methods of persuasion’, such things may be ethically questionable (in general; not volunteering information you aren’t obligated to provide almost certainly isn’t) but are not (categorically) wrong. Rational agents win.
...promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever. Specifically, don’t do it to yourself.
I think it’s worth distinguishing between “underconfidence” and “lack of confidence”—the former implies the latter (although not absolutely), but under some circumstances you are justified in questioning your competence. Either way, it sounds like you’re working on both ends of that balance, which is good.
Though part of me is still worried about making SIAI look bad. And I’m still worried that the stuff I’ve already posted may end up harming SIAI’s mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn’t actually something I was told by anyone from SIAI; I think it was a bad habit I brought with me from christianity.
That puts it into an understandable context… I can’t quite relate to having to shake off Christian beliefs. I was raised with a tremendously religious mother, but at about the age of 6 I began to question her beliefs, and by 14 I was sure that she was stark raving mad to believe what she did. So, I managed to keep from being brainwashed to begin with.
I’ve seen the results of people who have been brainwashed and who have not managed to break completely free from their old beliefs. Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)… So, what you are doing is probably best for the time being, until you learn the tools needed to step off into the wilderness by yourself.
In my case, I knew pretty much from the beginning that something was seriously wrong. But since every single person I had ever met was a christian (with a couple of exceptions I didn’t realize until later), I assumed that the problem was with me. The most obvious problem, at least for me, was that none of the so-called christians was able to clearly explain what a christian is, and what it is that I need to do in order to not go to hell. And the people who came closest to giving a clear explanation all contradicted each other, and the answer changed depending on which questions I asked. So I guess I was… partly brainwashed. I knew that there was something really important I was supposed to do, and that people’s souls were at stake (a matter of infinite utility/anti-utility!), but no one was able to clearly explain what it was that I was supposed to do. But they expected me to do it anyway, and made it sound like there was something wrong with me for not instinctively knowing what it was. There’s lots more I could complain about, but I guess I had better stop now.
So it was pretty obvious that I wasn’t going to save anyone’s soul by talking them into converting to christianity. And I was similarly unqualified for most of the other things that christians are supposed to do. But there was still one thing I could do: live as cheaply as possible, donate as much money as possible to the church so that the people who claim to actually know what they’re doing can just get on with doing it, and be generally helpful whenever there was some simple everyday thing I could help with.
Anyway, it wasn’t until I went to university that I actually met any atheists who openly admitted to being atheists. Before then, I had heard that there was such a thing as an atheist, and that these were the people whose souls we were supposed to save by converting them to christianity, but Pascal’s Wager prevented me from seriously considering becoming an atheist myself. Even if you assign a really tiny probability to christianity being true, converting to atheism seemed like an action with an expected utility of negative infinity. But then I overheard a conversation in the Computer Science students’ lounge. That-guy-who-isn’t-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism. An atheistic belief system, based entirely on a rational, scientific worldview, to which Pascal’s Wager could be applied. (There is an unknown probability that this universe can support an infinite amount of computation, therefore there is an unknown probability that actions can have infinite positive or negative utility.) I immediately realized that I wanted to convert to this belief system. But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism. And since then I haven’t had any desire at all to switch back to christianity. Though I was afraid that, because of my inability to stand up to authority figures, someone might end up convincing me to convert back to christianity against my will. Even now, years later, in scary situations when dealing with an authority figure who is a christian, part of me still sometimes thinks “OMG maybe I really was wrong about all this!”
Anyway, I’m still noticing bad habits from christianity that I haven’t shaken off yet, and I’m still working on fixing this. Also, I might be oversensitive to noticing similarities between christianity and Singularitarianism. For example, the expected utility of “converting” someone to Singularitarianism. Though in this case you’re not guaranteeing that one soul is saved, you’re slightly increasing the probability that everyone gets “saved”, because there is now one more person helping the effort to achieve a positive Singularity.
Oh, and now, after reading LW, I realize what’s wrong with Pascal’s Wager, and even if I found out for certain that this universe isn’t capable of supporting an infinite amount of computation, I still wouldn’t be tempted to convert back to christianity.
Random trivia: I sometimes have dreams where a demon, or some entirely natural thing that for some reason is trying to look like a demon, is trying to trick or scare me into converting back to christianity. And then I discover that the “demon” was somehow sent by someone I know, and end up not falling for it. I find this amusingly ironic.
As usual, there’s lots more I could write about, but I guess I had better stop writing for now.
But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism.
Here’s a quote from an old revision of Wikipedia’s entry on The True Believer that may be relevant here:
A core principle in the book is Hoffer’s insight that mass movements are interchangeable; he notes fanatical Nazis later becoming fanatical Communists, fanatical Communists later becoming fanatical anti-Communists, and Saul, persecutor of Christians, becoming Paul, a fanatical Christian. For the true believer the substance of the mass movement isn’t so important as that he or she is part of that movement.
And from the current revision of the same article:
Hoffer quotes extensively from leaders of the Nazi and communist parties in the early part of the 20th Century, to demonstrate, among other things, that they were competing for adherents from the same pool of people predisposed to support mass movements. Despite the two parties’ fierce antagonism, they were more likely to gain recruits from their opposing party than from moderates with no affiliation to either.
Thanks for the link, and the summary. Somehow I don’t find that at all surprising… but I still haven’t found any other cause that I consider worth converting to.
At the time I converted, Singularitarianism was nowhere near a mass movement. It consisted almost entirely of the few of us on the SL4 mailing list. But maybe the size of the movement doesn’t actually matter.
And it’s not “being part of a movement” that I value, it’s actually accomplishing something important. There is a difference between a general pool of people who want to be fanatical about a cause, just for the emotional high, and the people who are seriously dedicated to the cause itself, even if the emotions they get from their involvement are mostly negative. This second group is capable of seriously examining their own beliefs, and if they realize that they were wrong, they will change their beliefs. Though as you just explained, the first group is also capable of changing their minds, but only if they have another group to switch to, and they do this mostly for social reasons.
Seriously though, the emotions I had towards christianity were mostly negative. I just didn’t fit in with the other christians. Or with anyone else, for that matter. And when I converted to Singularitarianism, I didn’t exactly get a warm welcome. I also earned the disapproval of all the christians I know, which is pretty much everyone I have ever met in person. I still have not met any Singularitarian, or even any transhumanist, in person. And I’ve only met a few atheists. I didn’t even have much online interaction with other transhumanists or Singularitarians until very recently. I tried to hang out in the SL4 chatroom a few years ago, but they were openly hostile to the way I treated Singularitarianism as another belief system to convert to, another group to be part of, rather than… whatever it is that they thought they were doing instead. And they didn’t seem to have a high opinion of social interaction in general. Or maybe I’m misremembering this.
Anyway, I spent my first approximately 7 years as a Singularitarian in almost complete isolation. I was afraid to request social interaction for the sake of social interaction, because somehow I got the idea that every other Singularitarian was so totally focused on the mission that they didn’t have any time at all to spare to help me feel less lonely, and so I should either just put up with the loneliness or deal with it on my own, without bothering any of the other Singularitarians for help. The occasional attempt I made to contact some of the other Singularitarians only further confirmed this theory. I chose the option of just putting up with the loneliness. That may have been a bad decision.
And just a few weeks ago, I found out that I’m “a valued donor” to SIAI. Though I’m still not sure what this means. And I found out that other Singularitarians do, in fact, socialize just for the sake of socializing. And I found out that most of them spend several hours a day “goofing off”. And that they spend a significant percentage of their budget on luxuries that technically they could do without, without significantly affecting their productivity. And that most of them live generally happy, productive, and satisfying lives. And that it was silly of me to feel guilty for every second and every penny that I wasted on anything that wasn’t optimally useful for the mission, in addition to the usual reasons why feeling guilty is counterproductive.
Anyway, things are finally starting to get better now, and I don’t think I’ll accomplish anything by complaining more.
Also, most of this was probably my own fault. It turns out that everyone living at the SIAI house was totally unaware of my situation. And this is mostly my fault, because I was deliberately avoiding contacting them, because I was afraid to waste their time. And wasting the time of someone who’s trying to save the universe is a big no-no. I was also afraid that if I tried to contact them, then they would ask me to do things that I wasn’t actually able to do, but wouldn’t know for sure that I wasn’t able to do, and would try anyway because I felt like giving up wasn’t an option. And it turns out this is exactly what happened. A few months ago I contacted Michael Vassar, and he started giving me things to help with. I made a terrible mess out of trying to arrange the flights for the speakers at the 2009 Singularity Summit. And then I went back to avoiding any contact with SIAI. Until Adelene Dawner talked to them for me, without me asking her to. Thanks Ade :)
Um… one other thing I just realized… well, actually Adelene Dawner just mentioned it in Wave, where I was writing a draft of this post… the reason why I haven’t been trying to socialize with people other than Singularitarians is… I was afraid that anyone who isn’t a Singularitarian would just write off my fanaticism as general insanity, and therefore any attempt to socialize with non-Singularitarians would just end up making the Singularitarian movement look bad… I already wrote about how this is a bad habit I carried with me from christianity. It’s strange that I hadn’t actually spent much time thinking about this; I just somehow wrote off socializing with non-Singularitarians as not an option, and ended up not thinking about it after that. I still made a few careful attempts at socializing with non-Singularitarians, but the results of these experiments only confirmed my suspicions.
Oh, and another thing I just realized: Confirmation Bias. These experiments were mostly invalid, because they were set up to detect confirming evidence of my suspicions, but not set up to be able to falsify them. oops. I made the same mistake with my suspicions that normal people wouldn’t be able to accept my fanatical Singularitarianism, my suspicions that the other Singularitarians are all so totally focused on the mission that they don’t have any time at all for socializing, and also my suspicions that my parents wouldn’t be able to accept my atheism. yeah, um, oops. So I guess it would be really silly of me to continue blaming this situation on other people. Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.
There’s probably more I could say, but I’ll stop writing now.
um… after reviewing this comment, I realize that the stuff I wrote here doesn’t actually count as evidence that I don’t have True Believer Syndrome. Or at least not conclusive evidence.
oh, and did I mention yet that I also seem to have some form of Saviour Complex? Of course I don’t actually believe that I’m saving the world through my own actions, but I seem to be assigning at least some probability that my actions may end up making the difference between whether our efforts to achieve a positive Singularity succeed or fail.
but… if I didn’t believe this, then I wouldn’t bother donating, would I?
Do other people manage to believe that their actions might result in making the difference between whether the world is saved or not, without it becoming a Saviour Complex?
PeerInfinity, I don’t know you personally and can’t tell whether you have True Believer Syndrome. I’m very sorry for provoking so many painful thoughts… Still. Hoffer claims that the syndrome stems from lack of self-esteem. Judging from what you wrote, I’d advise you to value yourself more for yourself, not only for the faraway goals that you may someday help fulfill.
no need to apologise, and thanks for pointing out this potential problem.
(random trivia: I misread your comment three times, thinking it said “I know you personally can’t tell whether you have True Believer Syndrome”)
as for the painful thoughts… It was a relief to finally get them written down, and posted, and sanity-checked. I made a couple attempts before to write this stuff down, but it sounded way too angry, and I didn’t dare post it. And it turns out that the problem was mostly my fault after all.
oh, and yeah, I am already well aware that I have dangerously low self-esteem. but if I try to ignore these faraway goals, then I have trouble seeing myself as anything more valuable than “just another person”. Actually I often have trouble even recognizing that I qualify as a person...
also, an obvious question: are we sure that True Believer Syndrome is a bad thing? or that a Saviour Complex is a bad thing?
random trivia: now that I’ve been using the City of Lights technique for so long, I have trouble remembering not to use a plural first-person pronoun when I’m talking about introspective stuff… I caught myself doing that again as I checked over this comment.
Several comments above you wrote that both Christianity and Singularitarianism drained you of the resources you could’ve spent on having fun. As far as I can understand, neither ideology gave you anything back.
At first I misread what you said and was about to reply with this paragraph:
oh. that’s mostly because I was Doing It Wrong. I was pushing myself harder than I could actually sustain in the long term, and that ended up being counterproductive to singularitarianism. (and also counterproductive to fun, though I still don’t consider fun to be of any significant inherent value, compared to the value of the mission)
But then I noticed that when I read your comment, I was automatically adding the words “and this would be bad for the mission”, which probably isn’t what you meant.
and I might as well admit that as I was thinking about what else to say in reply, everything I thought of was phrased in terms of what mattered to singularitarianism. I was going to resist the suggestion that I should be paying any attention to what the ideology could give back. I was going to resist the suggestion that fun had any use other than helping me stay focused on the mission, if used in moderation.
And I’m still undecided about whether this reaction is a bad thing, because I’m still measuring good and bad according to singularitarian values, not according to selfish values. And I would still resist any attempt to change my values to anything that might conflict with singularitarianism, even in a small way.
ugh… even if everyone from SIAI told me to stop taking this so seriously, I would probably still resist. And I might even consider this as a reason to doubt how seriously they are taking the mission.
ok, so I guess it would be silly of me to claim that I don’t have a true believer’s complex, or a saviour complex, or just fanaticism in general.
though I still need to taboo the word “fanaticism”… I’m still undecided about whether I’m using it as if it means “so sincerely dedicated that the dedication is counterproductive”, or “so sincerely dedicated that anyone who hasn’t tried to hack their own mind into being completely selfless would say that I’m taking this way too far”.
By the first definition, I would of course consider my fanaticism to be counterproductive and harmful. But I would naturally treat the second definition as an example of other people not taking the mission seriously enough.
And now I’m worrying that all this stuff I’m saying is actually not true, and is really just an attempt to signal how serious and dedicated I am to the mission. Actually, yeah, I would be really surprised if there wasn’t any empty signalling going on, and if the signalling wasn’t causing my explanations to be inaccurate.
In other news, I’m really tired at the moment, but I’m pushing myself to type this anyway, because it feels really important and urgent.
I think there was more I wanted to say, but whatever it was, I forget it now, and this comment is already long, and I’m tired, so I’ll stop writing for now.
also, an obvious question: are we sure that True Believer Syndrome is a bad thing?
Say it was the case that promoting a singularity was a bad idea and that, in particular, SIAI did more harm than good. If someone had compelling evidence of this and presented it to you, would you be capable of altering your beliefs and behavior in accordance with this new data? I take it the True Believer would not, and I think we can all agree that would be a bad thing.
ah, but Singularitarianism is different: a True Singularitarian is supposed to be able to update on this evidence, even if it means abandoning SIAI entirely.
Presented with evidence of the counterproductivity of SIAI, a True Singularitarian would then try to find a better way to help the efforts to achieving a positive Singularity, even if it meant creating an entirely new group for this purpose.
Note that “Singularitarian” is not the same as “SIAI Supporter”, or “Eliezer Follower”
actually, I think the same applies to a True Christian. If a True Christian finds out that the church isn’t doing its job properly, and the church refuses to correct what’s wrong, then the True Christian is supposed to start their own church. And this has actually happened many times throughout history...
Maybe instead of imagining your actions as having some probability of ‘making the difference,’ try thinking of them as slightly boosting the probability of a positive singularity?
At any rate, the survival of someone wheeled in through the doors of a hospital might depend on the EMTs, the nurses, the surgeons, the lab techs, the pharmacists, the janitors and so on and so on. I’d say they’re all entitled to take a little credit without being accused of having a savior complex!
um… can you please explain what the difference is, between “having some probability X of making the difference between success and failure, of achieving a positive Singularity” and “boosting the probability of a positive Singularity, by some amount Y”? To me, these two statements seem logically equivalent. Though I guess they focus on different details...
oh, I just noticed one obvious difference: X is not equal to Y
Yeah, what I wrote was intended as an alternative way of thinking about the situation that might make you feel better, rather than an accusation of wrongness.
I guess I’ll still need to think about this some more...
some random observations:
if X > 0, then Y > 0
if Y > 0, then X > 0
I was about to question whether maybe X = Y after all, but further thought reveals that X isn’t clearly defined, and I really would be better off focusing on Y, because Y is more clearly defined than X, and thinking about Y seems to trigger less panic than thinking about X.
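To make this concrete, here is a minimal sketch, with made-up numbers, of why Y is the quantity the decision actually needs (this is just an illustration of the expected-value arithmetic, not a claim about the real probabilities):

```python
# Toy expected-value comparison (all numbers are illustrative assumptions).
P_SUCCESS_WITHOUT_ME = 0.0100000  # baseline probability of a positive Singularity
P_SUCCESS_WITH_ME = 0.0100001     # probability if I also contribute
VALUE_OF_SUCCESS = 1.0            # value of a positive Singularity, in arbitrary units

# Y: how much my contribution boosts the probability of success.
Y = P_SUCCESS_WITH_ME - P_SUCCESS_WITHOUT_ME

# The expected value added by contributing depends only on Y;
# no separate, harder-to-define notion of "making the difference" is needed.
expected_value_added = Y * VALUE_OF_SUCCESS
print(Y, expected_value_added)
```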
So, yeah, thanks again for your comment. It was helpful. :)
Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.
Nitpick for clarity’s sake: I’ve seen no evidence that this was deliberate in the sense implied, and I would expect to have seen such evidence if it did exist. It may have been deliberate or quasi-deliberate for some other reason, such as social anxiety (which I have seen evidence of).
er, yes, that’s what I meant. sorry for the confusion. I wasn’t deliberately trying to prevent anyone from helping, I was deliberately trying to avoid wasting their time, by having no contact with them, which prevented them from being able to help.
I’ve heard from an ex-fundamentalist that for some people, conversion is a high in itself (I don’t know if this is mostly true for Christians, or applies to movements in general). In any case, he said the high lasts for about two years, and then wears off, so that those people then convert to something else.
Huh. I knew this was true of me, but didn’t realize it was common. I went from being an extreme Christian at 11 to an extreme utilitarian by about 14 (despite not knowing people who were extreme about either thing).
PeerInfinity, I’m rather struck by a number of similarities between us:
I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won’t materialize.
As you might guess from my user name, I’m also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer’s change of values but who feels that SIAI’s values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
I share the seriousness with which you regard Pascal’s wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.
I’m not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.
And, um… I used to have some really nasty nightmares about going to the christian hell. But then, surprisingly, these nightmares somehow got replaced with nightmares of a hell caused by an Evil AI. And then these nightmares somehow got replaced with nightmares about the other hells that modal realism says must already exist in other universes.
I totally agree with you that the suffering of humans is massively outweighed by the suffering of other animals, and possibly insects. I forget by how many orders of magnitude exactly, but I think it was less than 10. But I also believe that the amount of positive utility that could be achieved through a positive Singularity is… I think it was about 35 orders of magnitude more than all of the positive or negative utility that has been experienced so far in the entire history of Earth. But I don’t remember the details of the math. For a few years now I was planning to write about that, but somehow never got around to it. Well, actually, I did make one feeble attempt to do the math, but that post didn’t actually make any attempt to estimate how many orders of magnitude were involved.
Oh, and I totally share your concerns about the possible implications of CEV. Specifically, that it might end up generating so much negative utility that it outweighs the positive utility, which would mean that a universe completely empty of life would be preferable.
Oh, and I know one other person who shares your belief that promoting good memes like concern about wild animals would be more cost effective than donating to Friendly AI research. He goes by the name MetaFire Horsley in Second Life, and by the name MetaHorse in Google Wave. I have spent lots of time discussing this exact topic with him. I agree that spreading good memes is totally a good idea, but I remain skeptical about how much leverage we could get out of this plan, and I suspect that donating to Friendly AI research would be a lot more leveraged. But it’s still totally a good idea to spread positive memes in your spare time, whenever you’re in a situation that gives you an opportunity to do some positive meme spreading. MetaHorse is currently working on some sci-fi stories that he hopes will be useful for spreading these positive memes. He writes these stories in Google Wave, which means that you can see him writing the stories in real-time, and give instant feedback. I really think it would be a good idea for you to get in contact with him. If you don’t already have a Google Wave account, please send me your gmail address in a private email, and I’ll send you a Wave invite.
Oh, and I’m still really confused about how CEV is supposed to work. It seems like it’s supposed to take into account our beliefs that the suffering of animals, or any sentient creatures, is unacceptable, and consider that as a source of decoherence if someone else advocates an action that would result in suffering. And apparently it’s not supposed to just average out everyone’s preferences, it’s supposed to… I don’t know what, exactly, but it’s supposed to have the same or better results than if we spent lots and lots of time talking with the people who would advocate suffering, and we all learned more, were smarter, and “grew up further together”, whatever that means, and other stuff. And that sounds nice in theory, but I’m still waiting for a more detailed specification. It’s been a few years since the original CEV document was published, and there haven’t been any updates at all. Well, other than Eliezer’s posts to LW.
Oh, and I read all of your essays (yes, all of them, though I only skimmed that really huge one that listed lots of numbers for the amount of suffering of animals) a few months ago, and we chatted about them briefly. Though that was long enough ago that it would probably be a good idea for me to review them.
Anyway, um… keep up the good work, I guess, and thanks for the feedback. :)
Bostrom’s estimate in “Astronomical Waste” is “10^38 human lives [...] lost every century that colonization of our local supercluster is delayed,” given various assumptions. Of course, there’s reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.
Still, I’m concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that “pristine wilderness” is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)
As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It’s not enough to “be smarter” and “more the people we wished we were”; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I’m curious to see an argument for this.
Who are you thinking of? (Eliezer is frequently accused of this, but has disclaimed it. Note the distinction between total convergence, and sufficient coherence for an FAI to act on.)
(edit: The version of utilitarianism I’m talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)
I totally agree!!!
Astronomical waste is bad! (or at least, severely suboptimal)
Wild-animal suffering is bad! (no, there is nothing “sacred” or “beautiful” about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)
Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, “This way is more fun”, or “This way would generate a wider variety of possible outcomes” are not acceptable answers, at least not according to utilitarianism.)
Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
I also agree with your concerns about CEV.
Though of course we’re talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can’t explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it’s a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn’t even agree with us anymore, even though some of his previous writing implied that he did before. (I still can’t get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, and a few other minor aesthetic preferences. Yeah, I’m so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don’t see how this could be possible, but maybe that’s just a result of my own ignorance. And then there’s the extreme difficulty of actually implementing CEV...
And no, I still don’t claim to have a better plan. And I’m not at all comfortable with advocating the creation of a purely Utilitarian AI.
Your plan of trying to spread good memes before the CEV extrapolates everyone’s volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can’t incorporate this process into CEV, then any other possible strategy must involve cheating somehow.
Oh, I had another conversation recently on the topic of whether it’s possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think the conversation was inconclusive. The other person believed that… er, wait, I think we actually agreed on the conclusion, but didn’t notice at the time. The conclusion was that if an agent’s core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There’s also the option of trading utilons with the other agent, but that’s not the same as changing the other agent’s values.
Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)
Anyway, if this is the case, then the CEV algorithm will end up resulting in the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.
Oh, and I should point out that the utilitronium shockwave doesn’t actually require the murder of everyone now living. Surely even us hardcore utilitarians should be able to afford to leave one planet’s worth of computronium for the people now living. Or one solar system’s worth. Or one galaxy’s worth. It’s a big universe, after all.
Oh, and if it turns out that some people’s value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend… then maybe we could even afford to leave their brains unmodified. Just so long as they don’t force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they’re allowed to create… is kinda complicated and controversial.
Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world’s population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it’s a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It’s a big universe, plenty of room for everyone. Just so long as they don’t force any other mind to suffer.
Oh, and maybe there should also be rules against creating a mind that’s forced to be wireheaded. There will be some complex and controversial issues involved in the design of the optimally efficient form of utilitronium that doesn’t involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, anyone who wants to retreat entirely into solipsism, let them do their own experiments with what experiences generate the most utility. There’s no need to fill the whole universe with boring, uniform bricks of utilitronium that contain minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, why not let actual people do the research? And why not let them do the research on their own minds?
I’ve been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I’m no good at writing. Actually, that story I just linked to is an example of this scenario going bad...
Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I’m still not confident enough about this scenario to advocate it too seriously.
Yes, to various extents. (I should have been more helpful in the grandparent comment.)
I think the main problem is you seem to have a “stream of consciousness” style of writing. If you add an additional step of editing after (I’m just assuming you’re not doing much of this now), then you can figure out which points are most important to make and put them succinctly.
The advantage of this, from a utilitarian point of view, is that you can spend less time editing than it will take any particular person to otherwise figure out what you’re trying to say, and thus cause a net benefit to lots of people.
(ETA: note that the great-grandparent comment seems less subject to this particular criticism than some others)
As I was writing the following points, I noticed that I was just making excuses. But instead of deleting them, I left them in, but commented on them, because they felt important and relevant.
I was already aware of the utilitarian argument that it’s worth 1 minute of effort at rewriting in order to save 60 people one second each at reading, and I am making at least some attempt to do that. (correction: no, I didn’t actually do the math. I should at least try to do the math.)
I already spend lots of time reviewing my comments before I post them. I don’t post them until I scan through them once without noticing anything wrong. (correction: no, lately I’ve been posting them before I complete a full scan without finding any new issues, and I’ve been fixing some things by editing the comments after posting them. I should be more strict about following this rule. and as I mention below, I should add new issues to the list of things to scan for.)
Normally I have the opposite problem, spending way too much time reviewing what I wrote, which ends up resulting in other important things not getting said, because I’m spending too much time reviewing and never get around to writing the next thing. (correction: this will probably become less of an issue now that I’ve finished writing all of these “about me” comments.)
It usually feels like there’s a sense of urgency, that if I take too long to write a reply, then everyone will have moved on to other topics, and no one will end up reading my comment. (correction: sometimes there is a reason to post stuff asap, other times there isn’t. I need to learn how to tell the difference.)
But these are just excuses. If I’m going to continue posting comments, then I had better learn how to improve the quality of my comments.
The stream-of-consciousness style comments were something I wanted feedback on, and now I got the feedback, thanks. The feedback says that stream-of-consciousness-style comments are not acceptable. I’ll try to stop doing that.
And that means that in addition to the issues I’m already scanning for, I’ll also scan for… the specific reasons why stream-of-consciousness-style writing is annoying to read:
I need to present the points in the order that would make the most sense to the reader, not in just whatever order I happen to think of them in.
I need to erase points that I discover make no sense, rather than leaving them in just because it feels like there may be some reason to document the mistake.
I need to cut out off-topic side-comments entirely
I need to stop using phrases like “oh, by the way”
I need to cut out any meta-comments from inside my comments, unless for some reason they really are necessary
I especially need to cut out any comments about things like “my brain’s excuse-generator”. I need to remove the offending text, rather than explaining what caused me to write it. Unless it happens to be specifically on-topic, like in this comment.
probably some more things I haven’t thought of.
But so far that just answers what to do about the stream-of-consciousness-style writing. It doesn’t answer what to do about the excessive length of the comments. This comment is also really long, but I’m posting it anyway, because it feels necessary.
Actually, I should ask what everyone else does. Or maybe I should ask just what you, in particular do, Thom. Though this is already far off the original post’s topic...
The “excuse generator” points at something I suspect is a very fast and active part of a lot of people’s minds, but it’s probably worth a post or at least an extended open thread comment of its own.
As far as I can tell, I write so as to make things clear to the state of mind I was in just before I thought of something I’m trying to get across.
Thanks for the feedback, that last sentence sounds like a good idea, I’ll go ahead and try it.
There have probably already been lots of posts about the “excuse generator”, though not specifically by that name. For example, Eliezer’s post Against Devil’s Advocacy. Though that’s not quite the same thing.
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, “Bambi Lovers versus Tree Huggers: A Critique of Rolston’s Environmental Ethics”: “Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.”
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%.
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
Just so long as they don’t force any other minds to experience pain.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
Like many others here, I subscribe to emotivism as well as utilitarianism.
That is inconsistent. Utilitarianism has to assume there’s a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is not a fact about the good. For example, for an emotivist, “You should not have stolen the bread.” expresses the exact same factual content as “You stole the bread.” (On this view, presumably, indicating “mere disapproval” doesn’t count as factual information).
Sure. Then what I meant was that I’m an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don’t think utilitarianism is “true” (I don’t know what that could possibly mean), but I want to see it carried out.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
checking out the wikipedia article… hmm… I think I agree with emotivism too, to some degree. I already have a habit of saying “but that’s just my opinion”, and being uncertain enough about the validity (validity according to what?) of my preferences, to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the “but that’s just my opinion”. That could be useful.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
good point. and yeah, that’s one of the main issues that’s causing me to doubt whether SIAI has any hope of achieving their mission.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW.
oh, and “quixotic”, that’s the word I was looking for, thanks :)
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
heh, yeah, that “significantly less than 50%” was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as “mentally healthy” is… complex and controversial.
When discussing utilitarianism it is important to indicate whether you’re talking about preference utilitarianism or hedonistic utilitarianism, especially in this context.
Right, sorry. I’m referring to total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain.
A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.
Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one’s own emotions, rather than arbitrary external events.
Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world’s population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it’s a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It’s a big universe, plenty of room for everyone. Just so long as they don’t force any other mind to suffer.
You could also almost certainly convert a considerable percentage of the planet’s mass to computronium without impacting the planet’s ability to support life. A planet isn’t a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.
If computronium is of density equal to or greater than that of iron, physics wouldn’t need to be changed. Remove the core, replace it with a roughly spherical wad of perfected brain-matter, plus whatever structural supports are necessary to keep the crust in place, and Newton’s Shell Theorem says gravity would be the same. Add some electromagnets for the poles, and channel waste heat from the mechanisms inside to simulate volcanism where appropriate.
Even if computronium turns out to have lower density than iron, and for whatever reason it’s unacceptable to reduce surface gravity or transplant the luddites to an otherwise earthlike planet of correspondingly greater diameter, some of the core’s mass could be converted and the remainder compressed into a black hole. Again, shell theorem means there’s no difference from the outside.
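A minimal back-of-the-envelope check of that shell-theorem point (standard constants only; nothing here depends on what computronium is actually made of): surface gravity is g = G * M / R^2, a function of total enclosed mass and radius alone, so any mass-preserving, roughly spherically symmetric rearrangement of the interior leaves it unchanged.

```python
# Surface gravity depends only on enclosed mass M and radius R (shell theorem):
# g = G * M / R**2. Swapping the interior for anything of equal total mass
# (computronium, or partial computronium plus a central black hole) leaves g unchanged.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

g = G * M_EARTH / R_EARTH**2
print(f"surface gravity ≈ {g:.2f} m/s^2")  # prints about 9.82 m/s^2
```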
That-guy-who-isn’t-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism.
I could not tell from your post if you understood that Pascal’s Wager is a flawed argument for believing in ANY belief system. You do understand this, don’t you (that Pascal’s Wager is horribly flawed as an argument for believing in anything)?
Also, as cousin_it seems to be implying (and I would suspect as well), you seem to be exhibiting signs of the True Believer complex.
This is what I alluded to when I discussed friends of mine who would swing back and forth between being Born-Again Christians and Satanists. Don’t make the same mistake with a belief in the Singularity. One needn’t have “Faith” in the Singularity as one would have in God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.
Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.
In chapter five of Jaynes, “Queer Uses for Probability Theory,” he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, that doesn’t tell us the probability we should assign to the hypothesis that she’s actually a telepath, because there are many simpler hypotheses that fit the data (for instance, various forms of cheating).
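To make that concrete, here is a rough numerical illustration of Jaynes’s point (the priors and likelihoods below are invented for the sake of the example, not taken from his book): even when the data are astronomically unlikely under the chance hypothesis, the posterior ends up dominated by mundane alternatives like cheating, because they explain the data about as well as telepathy does and start with far higher priors.

```python
# Toy Bayesian comparison of three hypotheses for the telepathy experiment.
# All numbers are illustrative assumptions, not measurements.
priors = {
    "chance":    0.999,   # she's just guessing randomly
    "cheating":  1e-3,    # some form of cheating or flawed protocol
    "telepathy": 1e-12,   # genuine telepathy
}
likelihoods = {           # P(observed hit rate | hypothesis)
    "chance":    1e-140,  # 25.8 sigma from chance: essentially impossible
    "cheating":  0.5,     # cheating explains the data easily
    "telepathy": 0.5,     # so would real telepathy
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: value / total for h, value in unnormalized.items()}
print(posteriors)  # "cheating" gets nearly all of the posterior mass
```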
This example is instructive when using Pascal’s Wager to maximize expected utility. Pascal’s Wager is a losing bet for a Christian, because even though expecting positive infinite utility with infinitesimal probability seems like a good bet, there are many likelier ways of getting negative infinite utility from that choice. Doing what you can to promote a friendly singularity can still be called “Pascal’s Wager” because it’s betting on a very good outcome with a low probability, but the low probability is so many orders of magnitude better than Christianity’s that it’s actually a rather good bet.
Obviously, you don’t want to let wishful thinking guide your epistemology, but I don’t think that’s what PI’s talking about.
Pascal’s wager is not such a horribly flawed argument. In fact, I wager we can’t even agree on why it’s flawed.
Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal’s wager is not really relevant in this thread. However, especially in the context of being a ‘true believer’, it is interesting to me that statements are often made that something is ‘obvious’, when there are many difficult steps in the argument, or ‘horribly flawed’, when it’s actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal’s wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.
Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don’t think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)
Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal’s wager is probably actually unclear, minor or controversial.
There aren’t many difficult steps in refuting Pascal’s wager, and I don’t think there’d be much disagreement on it here.
The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.
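A toy version of that cancellation (a sketch only; the complexity figure and the payoffs are made up, and a real length prior would range over vastly more hypotheses than two):

```python
# Sketch of the symmetry objection: for every complex hypothesis that assigns a
# huge payoff to an act of faith, there is a comparably complex mirror hypothesis
# (e.g. a deity that punishes exactly that faith) assigning the opposite payoff.
# The numbers below are illustrative stand-ins, not real estimates.

def complexity_prior(bits):
    """Crude length penalty: each extra bit of description halves the prior."""
    return 2.0 ** -bits

faith_rewarded = {"bits": 1000, "utility_of_faith": +1e15}
faith_punished = {"bits": 1000, "utility_of_faith": -1e15}  # mirror image

expected = sum(complexity_prior(h["bits"]) * h["utility_of_faith"]
               for h in (faith_rewarded, faith_punished))
print(expected)  # 0.0: equal complexity and opposite payoff, so no net push toward faith
```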
Again, is it the argument that is wrong, or Pascal’s application of it?
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal’s application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
(Can you confirm whether you down-voted me because it’s off-topic and inflammatory, or just because I’m wrong?)
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior).
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
I have read the material, but I disagreed with it, and it’s often not clear—especially when the posts are old—how I can jump in and chime in that I don’t agree. Often it’s just the subtext I disagree with, so I wait for someone to make it more explicit (or at least more immediate) and then I bring it up.
Thanks for your explanation about the down-voting.
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
No (assuming you mean the expected utility of the action given the hypothesis), just that you have to accurately weight its probability.
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
But his argument wouldn’t somehow be improved by considering all the cases (not that it would be practical to even consider all the hypotheses of lengths up to that which implies high utility from faith in God!). Considering those cases would find hypotheses that assign the opposite utility to faith, and worse, some would be more probable.
To salvage the argument, one would have to not just consider more cases, but provide a lot more epistemic labor—that is, make arguments that aren’t part of PW to begin with.
All of your objections to PW seem to be about Pascal’s application of the argument (the probabilities he inputted, the number of cases he considered), in which case we can agree that his conclusion wouldn’t be correct.
When I read that Pascal’s Wager is flawed as an argument, I interpret this as ‘the argument does not have good form’. Did people just mean, all along, that they disagreed with the conclusion of the argument because they didn’t agree with the numbers he used?
I think what they mean is, “If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
PW-type arguments, then, refer to the class of arguments in which someone tries to justify a course of action through (following the action suggested by) an improbable hypothesis by claiming high enough expected utility. That class of arguments has the flaw that when you allow yourself that much complexity, you necessarily permit hypotheses that advise just as strongly against the action.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
“If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
That is still fine, because we know how to handle the hypotheses with negative utility. You just optimize over the net utilities of each belief weighted by their probabilities. The fact that there are positive and negative terms together doesn’t invalidate the whole argument. You just do the calculation, if you can, and see what you get.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
If you have the right numbers, and a simple enough case to do the computation, would you find PW an acceptable argument?
I’m still having trouble understanding your objection.
When you decide to have faith based on PW, you’re using some epistemology that allows you to pick out the “faith causes infinite utility” hypothesis out of the universe-generating functionspace, and deem it to have some finite probability. The problem is that that epistemology—whatever it is—also allows you to pick out numerous other hypotheses, in which some assert the opposite utility from faith (and their existence is provable by inversion of the faith = utility hypothesis elements).
In order to show net positive utility from believing, you would have to find some way of counting all hypotheses this complex, and finding out which comes ahead. However, the canonical PW argument relies on such anti-faith hypotheses not existing. You would be treading new ground in finding some efficient way to count up all such hypotheses and find which action comes out ahead—keeping in mind, of course, that at this level of complexity, there is a HUGE number of hypotheses to consider.
So you would be making a new argument, only loosely related to canonical PW. If you think you can pull this off, then go ahead and write the article, though I think you’ll soon find it’s not as easy as you expect.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
As you know, I think the essence of Pascal’s wager is this:
If believing in X has positive utility, then you should believe in X.
I think there is enough to debate about in that statement alone.
But suppose that X = God exists. It seems to me that you are consistently writing that Pascal’s Wager fails because in this case the utility of X is impossible to compute due to the complexity of X. I don’t believe this makes the argument fail for two reasons:
Pascal’s Wager says, “If belief in X has positive utility, you should believe in X.” This argument doesn’t fail (in form) if the utility is negative or impossible to compute.
I disagree that the utility is impossible to compute, despite all your arguments about the complexity of X. My reason is straightforward: atheists do calculate (or at least estimate) the utility of believing in God. Usually, they come up with a value that is negative. So it’s not impossible to estimate the average utility of a complex belief.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
That’s not quite valid— there is some finite program that unfolds Permutation City-style into a universe that allows for infinite computational power, and thus (by some utility functions) infinite utility as the consequence of some actions. It would be wrong for a scientist living in such a universe to reject that hypothesis.
The reason I believe Pascal’s wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.
Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?
This is an argument for why the argument doesn’t work for theism, it doesn’t mean the argument itself is flawed. If you would be willing to multiply the utility of each belief times the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.
If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.
Right, one flaw in the idealized form is that it’s not clear that you can simply choose the belief that maximizes utility. But in some cases a person can, and does.
I think that an incorrect calculation, because one person considered 2 cases instead of N cases, is very different from being flawed as an argument.
PeerInfinity was writing about applying Pascal’s wager to atheism—so he must have been referring to the general form of the argument, not a particular application. Matthew B wrote that “Pascal’s Wager is a flawed argument for believing in ANY belief system”. Well, what about a belief system in which there are exactly two beliefs to choose from and the relative probabilities are (.4, .6) and the relative utilities of having the beliefs if they are true are (1000, 100) ? I would say the conclusion of the idealized form of Pascal’s wager is that you should pick the belief that maximizes utility, even though it is lower probability.
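Spelling out the arithmetic in that toy two-belief case (a quick sketch using just the numbers from the example above):

```python
# Expected utility of holding each belief in the toy two-belief case above,
# where the payoff only arrives if the belief happens to be true.
p = {"X": 0.4, "Y": 0.6}     # probabilities of the two beliefs
u = {"X": 1000, "Y": 100}    # utility of holding each belief if it is true

expected = {b: p[b] * u[b] for b in p}
print(expected)  # {'X': 400.0, 'Y': 60.0}: the lower-probability belief wins
```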
I would distinguish between the general form and the idealized general form. One way to generalize Pascal’s wager for belief B is to compare the expected utilities of believing B and believing one contradictory belief D in the conditions that B is true and that D is true. This is wrong no matter what belief B you apply it to.
The utility of having a belief is what is being considered in Pascal’s wager, and is quite different from the utility of the belief itself.
The utility of a belief itself wouldn’t sway you to choose one belief over another. Suppose again you have the two beliefs X and Y, and they each have a certain utility if they are true. If X is true, then you “get” that utility, independently of whether you believed it or not, by virtue of it being true. For example, if there is utility to God existing, then there is that benefit of him existing whether you believe in him or not.
In contrast, there is also utility for having a belief.
To complicate things, there is a component of the utility that is independent of whether the belief is true or not, and there is a component of the utility that depends on the belief being true. In the case of theism, there is a utility to being a theist (positive or negative, depending on who you ask) regardless of whether God exists, and there would also be an extra utility for believing in him if he does exist (possibly zero, if he doesn’t care whether you believe in him or not).
You mean the case of the argument applied to theism? I would be willing to forfeit the applicability of the argument for this case, since I’m just interested in discussing the validity of the general argument.
I don’t like discussing general cases when I don’t have some concrete examples. The only ones I can think of are boring cases of coercion involving unethical mindreaders.
Yes, I agree: the utility of having a belief only makes sense when for some reason you are rewarded for actually having the belief instead of acting as though you have the belief.
OK, since theism is unique in this aspect, in order to generalize away from the theistic, let’s use the utility for acting-as-though-you-believe instead of the utility for actually believing, because in most cases, these should be the same.
… but then, as soon as you do this, the argument becomes just about choosing actions based on average expected utility, and there’s nothing controversial about it. So I guess PW might just suffer from lack of application: there are few cases where you are actually differentially rewarded for having a belief (instead of just acting as though you do), and these cases (generalizing from theism) involve hypotheses that are too complex to parametrize (Silas’ argument).
Back to the immediate object level: PeerInfinity wrote about applying Pascal’s Wager to atheism. However, atheism doesn’t make a utility distinction between having a belief and acting as though you do. Or does it? Having beliefs motivates actions and makes them easier to compute.
When PeerInfinity said he chose to believe atheism because it seemed to maximize utility, he might have been summarizing together that acting as though atheism was true was deemed utility maximal, and believing in atheism then followed as utility maximal.
I also think Pascal’s Wager is not horribly flawed in the ways it’s most commonly claimed to be, and am aggrieved that this interesting and important discussion is taking place under a downvoted-to-invisibility comment on an unrelated post. I think I’ll write a top-level post about it today or tomorrow, but right now, I’d like to humbly ask that the above comment be upvoted until not invisible.
Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief are Ux and Uy. Then the average utility of having belief X is PxUx and the average utility of having belief Y is PyUy. You “should” choose to have the belief (or set of beliefs) that maximizes average utility, because having a belief is an action and you should choose actions that maximize utility.
What is the flaw in this argument?
For me, the flaw that you should identify is that you should choose beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative argument, rather than a logical flaw in the argument.
Normally, you should keep many competing beliefs with associated levels of belief in them. The mindset of choosing the action with estimated best expected utility doesn’t apply, as actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until moment of execution.
It is also complicated in the case of religious beliefs where other human beings will judge you by your beliefs, which is one reason why abandoning religions is so hard. But that is off-topic, particularly as you can just lie.
While we’re being off topic, I’m of the opinion that if you are someone who accepts you should one-box then you should also accept Pascal’s wager. I think both are wrong but most people here seem to accept one-boxing is correct but not accept Pascal’s wager. I don’t care enough about either to work the argument out in detail though.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well” has already made a decision based on expectation of your decision. There are numerous real-world examples of this. Newcomb’s problem only differs in that it takes the limit of the “how well they know you” variable as it approaches “perfect”. There needn’t be an actual Omega, just a decision theory that is robust for all values of the variable up to and including perfect.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well” has already made a decision based on expectation of your decision.
Which sounds a lot like Pascal’s wager to me, when your decision is whether to believe in god and god is the person who “knows you very well” and is deciding whether to let you into heaven based on whether you believe in him or not.
There are situations which I guess are what you would describe as ‘Newcomb-like’ where I would do the equivalent of one-boxing. If Omega shows up this evening though I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.
But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem—the latter is just as unphysical as Newcomb. The proper objection to unreasonable hypotheticals is to claim that they do not resemble the real-world situations one might compare them to in the relevant aspects.
I actually think that implausible hypotheticals are unhelpful and probably actively harmful which is why I usually don’t involve myself in discussions about Omega. I wish I’d stuck with that policy now.
Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they’re a lot of work for no obvious reward, but I don’t have a more complex theory.
Anyone have an example of the examination of an implausible hypothetical paying off?
I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view I automatically get suspicious. If the point of view is correct, why can’t it be illustrated with a plausible hypothetical or a real world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described and thus allow for dubious assumptions to be hidden in plain sight.
Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case and they often seem to be used in arguments that are wrong in subtle or hard to detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.
That’s interesting, and might apply to the trolley problem which implies that people can have much more knowledge of the alternatives than they are ever likely to have.
Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don’t have detailed knowledge, but I haven’t seen the trolley problem extended to the usual case of not knowing very many of the effects.
Taking a look at ethical intuitions with specifics: Sex, Drugs, and AIDS: the desire to only help when it will make a big difference and the desire to not help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn’t mean it makes sense to slack off on prevention as much as has happened.
Yes, the trolley problems are another example of harmful implausible hypotheticals in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ donor problem is I think illustrative of the pernicious influence of implausible hypotheticals on clear thought.
Well, the fact that they’re implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don’t we think clear thinking is its own reward?
I’ve found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don’t know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...
I’m all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!
Ok, let me try and nail down my true objection here. Is Pascal’s wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in god? Well, not really—it doesn’t add much in that case.
Similarly, if Omega showed up at my apartment this evening would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the set up for Newcomb’s problem) would I one-box? Well, probably yes but you’ve glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.
I guess I have a general problem with a certain kind of philosophical thought experiment that tries to sneak in a truly colossal amount of implausibility in its premises and ask you not to notice and then whenever you keep pointing to the implausibility telling you to ignore it and focus on the real question. Well I’m sorry, but the staggering implausibility over there in the corner is more significant than the question you want me to focus on in my opinion… (Forgive the casual use of ‘you’ here—I’m not intending to refer to you specifically).
I don’t understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we’re trying to analyze- like the Chinese Room which fails to convey properly the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we’re not attending to certain issues, I guess. That hardly seems grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant. No one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega like entity and as a result confuses more than it illuminates. Re-reading some of Eliezer’s posts on it I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences that they would not actually suspend their disbelief for if encountered in real life. This might be an example of Robin Hanson’s near/far distinction.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega like entity and as a result confuses more than it illuminates.
It certainly does gloss over that… I mean it has to, you’d require a lot of evidence. But the reason it does so is because the question isn’t whether Omega could exist or how we can tell when Omega shows up… the details are buried because they aren’t relevant. How does Newcomb’s problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem they weren’t previously aware of, but that’s the kind of confusion we want.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
It’s a great video and I’m grateful you linked me to it but I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
How does Newcomb’s problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical.
The danger is that you can use a hypothetical to illustrate a paradox that isn’t really a paradox, because its preconditions are impossible. A famous example: Suppose you’re driving a car at the speed of light, and you turn on the headlights. What do you see?
How does Newcomb’s problem confuse more than it illuminates? It illustrates a problem/paradox.
It confuses because it doesn’t really show a problem/paradox. That is not obvious because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box). They quite reasonably aren’t able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of difficulties we have mentally dealing with highly implausible scenarios.
I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when ‘story mode’ is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.
If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box).
No. The choice is paradoxical because no matter how much evidence you have of Omega’s omniscience, the choice you make can’t change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can’t affect the amount of money in the boxes. No matter how much money is in the boxes, you get more by two-boxing. Most educated people are causal decision makers by default. So a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing were the obvious choice, people would feel the need to posit new decision theories as a result.
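For reference, here is the usual payoff structure ($1,000 in the transparent box, $1,000,000 in the opaque one) and the calculation behind “one-boxers walk away richer”; the predictor accuracy below is an illustrative stand-in, not part of the canonical problem:

```python
# Usual Newcomb payoffs: box A (transparent) always holds $1,000; box B (opaque)
# holds $1,000,000 iff the predictor foresaw you taking only box B.
# With a predictor of accuracy `acc`, the evidential expected values are:
acc = 0.99  # illustrative accuracy; the thread is about the limit as acc -> 1

ev_one_box = acc * 1_000_000
ev_two_box = acc * 1_000 + (1 - acc) * (1_000_000 + 1_000)
print(ev_one_box, ev_two_box)  # 990000.0 vs 11000.0, so predicted one-boxers do better

# The causal intuition in the parent comment: at the moment of choice the boxes
# are already filled, and two-boxing yields exactly $1,000 more than one-boxing
# in either state of the world, which is why causal decision theory says two-box.
```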
I disagree, and I think this is what Eliezer is hinting at, now that I’ve gone back and re-read Newcomb’s Problem and Regret of Rationality. If you really had sufficient evidence to believe that Omega is either an omniscient mind reader or some kind of acausal agent such that it makes sense to one-box, then it makes sense to one-box. It only looks like a paradox because you’re failing to imagine having that much evidence. Which incidentally is not really a problem—an inability to imagine highly implausible scenarios in detail is not generally an actual handicap in real world decision making.
I’m still going to two-box if Omega appears tomorrow though because there are very many more likely explanations for the series of events depicted in the story than the one you are supposed to take as given.
Curiously, what is the average utility you would estimate for belief in God? Or do you feel that trying to estimate this forces suspended disbelief in implausible scenarios?
Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal’s wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don’t see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.
Some of the Norse gods were pretty badass though, they might be fun to believe in.
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility. The peculiarity of Pascal’s wager and religious belief in general is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own) belief in god is merely false belief and generally false beliefs are likely to cause bad decisions and thus lead to sub-optimal outcomes.
If the belief in god is completely free-floating and has no implications for actions then it may not have any direct negative effect on expected utility. Presumably given the finite computational capacity of the human brain holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal’s wager however.
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility.
I’m not sure that beliefs don’t generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There’s a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it’s generally a good idea.
Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief or whether or not you can accurately estimate the utility. If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something, without them actually believing it. There are exceptions, but for the most part, if someone bets on a belief, this is because they believe it.
If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do.
I don’t in general agree with this. Outcomes have utility, actions have expected utility, beliefs are generally just what you use to try and determine the expected utility of actions. As a rule, true beliefs will allow you to make better estimates of the expected utility of actions.
This is true for ordinary beliefs: I believe it is raining so I expect the action of taking my umbrella to have higher utility than if I did not believe it was raining. It is possible to imagine certain kinds of beliefs that have utility in themselves but these are unusual kinds of beliefs and most beliefs are not of this type. If there is a god who will reward or punish you in the afterlife partly on the basis of whether you believed in him or not then ‘believing in god’ would result in an outcome with positive utility but deciding if you live in such a universe would be a different belief that you would need to come to from other kinds of evidence than Pascal’s wager.
It is possible to imagine other beliefs that could in theory have utility in themselves for humans. For example, it is possible that believing oneself a bit more attractive and more competent than is accurate might benefit one’s happiness more than enough to compensate for lost utility due to less accurate beliefs leading to actions with sub-optimal expected utility. If this is true, however, it is a quirk of human psychology and not a property of the belief in the way that Pascal’s wager works.
It seems very strange to think of someone acting as though they believe something, without them actually believing it.
I don’t find it at all strange to think of someone acting as if they believe in god even though they don’t. This has been common throughout history.
it seems related to the idea of the intuition pump.
Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.
They both have an element of privileging the hypothesis. If I had some reason to think I lived in a universe with an Omega/God then I might agree I should one-box/believe in god but since I don’t have any reason to think I live in such a universe why am I wasting my time even considering this particular implausible scenario?
I see what you mean, but the symmetry runs into one of two problems.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
Second, the form of Pascal’s Wager concerning worship, “Believing in God, who is known to exist, has positive utility”, has moral complexities which are absent from Newcomb’s dilemma. Objections in this case usually arise from the normative argument that you should not believe things which are false.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
I disagree that it fails logically. The argument, written modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
If you don’t believe that believing in God has positive expected utility, then this is not a disagreement in the logic of Pascal’s Wager. Pascal’s Wager would equally say,
“If believing in God has negative expected utility, then you should not do so”.
I disagree that it fails logically. The argument, written modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
Okay, now I think I’m starting to see the miscommunication: PW does not simply say what you’ve quoted there. It’s typically associated with an argument about how the possibility of infinite utility from believing (and perhaps infinite disutility from not believing) outweighs the small probability of it being true, and the utility of other courses of action, on account of its infinite size.
You’re taking “Pascal’s Wager” to refer only to certain premises the argument uses, not the full argument itself.
It occurred to me that you might not agree that my distillation of PW contained all the salient features. (For example, there are no infinitesimals and no infinities written in). However, I think it must have been my more general argument that PeerInfinity was referring to, because he was applying it to atheism.
Good point, I edited my form of the argument to include ‘sets of beliefs’. If having a set of beliefs maximizes your utility, then having the set is what you “should” do, I think, in the spirit of the argument.
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one thing; everything is connected, and one thing being true implies other things being true and other things being false. You won’t be seeing the world as you currently believe it to be after accepting such a change; you will be seeing a strange magical version of it, a version you are certain doesn’t correspond to reality. Mutilating your mind like this has enormous destructive consequences for your ability to understand the real world, and hence for your ability to make the right choices, even if you forget about the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal’s wager.
(Belief in belief keeps the human believers out of most of the trouble, but that’s not what Pascal’s wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion.)
Just a note—don’t take Jack’s advice to not self-censor too literally. There is much weirdness in you, and even the borders of this place would groan under its weight.
The above (below? Depends on your settings, I guess) comment, which is now hidden, involves a poll, and would not (I predict) have otherwise become hidden.
It’s also hidden depending on your settings: you can change the threshold for hiding comments as well. I don’t hide any comments, because seeing a hidden comment makes me so curious I have to click it, and just draws more attention to it for me.
lol, it was sooo tempting to edit that comment and replace the word “penises” with “ice cream”. I guess that would be the reverse of one of the standard internet pranks.
But I didn’t do that, because the negative value from the confusion caused to anyone else reading this thread would probably have outweighed any positive value from the prank being funny.
though, um… please upvote this comment if you want me to go ahead and swap the words “penises” and “ice cream”, in both the previous comment and this comment...
or please downvote this comment if you think that’s a bad idea.
I didn’t upvote or downvote any of these. But I think the result would be the same if you had said “ice cream”: the point is that it’s a completely random comment that has nothing to do with the rationality discussion and distracts from the flow. I don’t think that there’s anything wrong with randomness or silliness but interrupting a rationality conversation with completely unrelated comments could get annoying.
Doing polls for this kind of thing, while somewhat interesting in a meta sense (I definitely like to see discussion about the social norms here), is rather off-topic. It would be less disruptive if you were to start by sticking to the already established norms (which can be learned by observing, and which you can ask me or Alicorn or Blueberry about via IM if you have questions), and occasionally break from the norms to test how other habits of yours are received.
ow. my karma is taking a hit. I should have expected that. And I should have set up another karma-balancing comment. I guess you can use this comment for karma balancing. That means if you voted another comment down, vote this one up, and vice versa.
So far I lost about 10 karma as a result of all these polls. Hopefully that will help reduce the emotional impact of future losses of karma, which would help me get over the paranoia about my comments causing more harm than good. And yes, I do plan to occasionally post comments that I suspect will be downvoted. But not too often, I’m not quite that reckless.
If you are going to make polls like that, the Open Thread is probably a better place to do it. There they won’t distract from the main topic of conversation.
And I’m glad that you’re not scared to post or get downvoted! :)
I agree. But at least that first poll got enough downvotes to block off all the others, for anyone who didn’t disable the feature that auto-hides comments with less than −3 karma.
I haven’t yet seen an answer to Pascal’s Wager on LW that wasn’t just wishful thinking. In order to validly answer the Wager, you would also have to answer Eliezer’s Lifespan Dilemma, and no one has done that.
I’m pretty sure Peer meant the original version of Pascal’s Wager, the argument for Christianity, which has the obvious answer, “What if the Muslims are right? or “What if God punishes us for believing?”
“God punishes us for believing” has a much lower probability, because no one believes it, while many people believe in Christianity.
Why does the probability have anything to do with the number of people who believe it?
“Muslims are right” could easily be more probable, but then there is a new Wager for becoming Muslim.
There’s then the problem that the expected value involves adding multiples of positive infinity (if you choose the right religion) to multiples of negative infinity (if you choose the wrong one), which gives you an undefined result.
The probabilities simply do not balance perfectly. That is basically impossible.
The probability of any kind of God existing is extremely low, and it’s not clear we have any information on what kind of God would exist conditioned on some God existing.
There’s also the problem that if you know the probability that God exists is very small, you can’t believe, you can only believe in belief, which may not be enough for the wager.
The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.
That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another. And it is wishful thinking to say that it is just as good to choose the less probable way as the more probable way. For example, there are two doors. One has a 99% chance of giving negative infinite utility, and a 1% chance of positive infinite. The second door has a 1% chance of negative infinite utility, and a 99% chance of positive infinite utility. Defined or not, it is perfectly obvious that you should choose the second door.
We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist. Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.
It may be true that some people couldn’t make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.
The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.
This can’t be right. The number of people who follow any one religion is affected by how people were raised, by cultural and historical trends, by birth rates, and by the geographic and social isolation of the people involved. None of these things have anything to do with truth. Currently Christianity has twice as many people as any other religion because of historical and political facts; you think this makes it more likely than Islam to be true?
Suppose that in 50 years, because of predicted demographic trends, there are twice as many Muslims as Christians. You then seem to be in the strange position of thinking (a) Christianity is more likely to be true now, but (b) because of changing demographics, you will be likely to think Islam is more likely to be true in 50 years.
We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist.
How do people’s claims give you that information? Religions are human cultural inventions. At most one could be true, which means the others have to be made up anyway. If a God did exist, why is it more likely that one of them is true than that they were all made up and humanity never came close to guessing the nature of the God that did exist?
Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.
My intuition tells me that if a God of some sort does exist, the probabilities end up favoring a God that rewards looking at the evidence and believing only what you have reason to be true, but that may just be my bias showing.
Intuition about what religion is true is likely to reflect your upbringing and your culture more than the actual truth. Given that there’s currently no evidence of any kind of God or afterlife, I can’t see how there is any evidence that God X is more likely to exist than God Y.
It may be true that some people couldn’t make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.
It’s also worth noticing that Pascal’s Wager uses a spherical cow version of religion. Some religious traditions might require actual belief for infinite utility, others just belief in belief, others just certain behavior or words independent of belief.
I’ll answer this later. For now I’ll just point out that you aren’t addressing my position at all, but other things which I never said. For example, I said that if people believe something, this increases its probability. You respond by asking things like “Currently Christianity has twice as many people… you think this makes it more likely than Islam to be true?” I definitely did not say that the probability of a religion is proportional to the number of people who believe it, just that religions that some people believe are more likely than ones that no one believes.
That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another.
Right; or if you don’t decide exactly, at least you have to do (believe or not believe) one or the other.
I would say that the model breaks down. Mathematics (or at least the particular mathematical model being used) is not capable of describing this situation, but that doesn’t make the situation itself meaningless. (That would be a version of the map/territory fallacy.)
Defined or not, it is perfectly obvious that you should choose the second door.
Here I disagree with you. I would say that you have not given enough information. It is as if you gave the same problem statement but with the word ‘infinite’ removed (so that we only know whether the utilities are positive or negative). It may seem as if you have given all of the information: the probabilities and the utilities. But the mathematics which we use to calculate everything else out of those values breaks down, so in fact you have not given all of the information.
One important missing piece of information is the ratio of the first positive utility to the second. That and two other independent ratios would be enough information, if they’re all finite. (If not, then we might need more information.)
And don’t tell me that these ratios are undefined; the mathematical model that calculates the ratios from the information given breaks down, that’s all. In fact, there is an alternative mathematical model of decision which deals only in ratios between utilities; if you’d followed that model from the beginning, then you would never have tried to state the actual utilities themselves at all. (For mathematicians: instead of trying to plot these 4 utilities in a 4-dimensional affine space, plot them in a 3-dimensional projective space.)
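A finite stand-in for the two-door example that shows why the ratio matters (the payoffs below are made up purely to illustrate the point, with large finite numbers standing in for the infinities):

```python
# Finite stand-in for the two-door problem: same probabilities as before, but
# the "very good" and "very bad" payoffs may differ in size. Which door wins
# depends on ratios that the original statement ("infinite utility") left out.

def expected(p_good, u_good, u_bad):
    return p_good * u_good + (1 - p_good) * u_bad

# Case 1: both doors' payoffs have the same magnitude, so door 2 wins.
print(expected(0.01, 1e6, -1e6), expected(0.99, 1e6, -1e6))   # -980000.0 vs 980000.0

# Case 2: door 1's prize is 10,000x larger, so door 1 wins despite the 1% chance.
print(expected(0.01, 1e10, -1e6), expected(0.99, 1e6, -1e6))  # 99010000.0 vs 980000.0
```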
It may be true that some people couldn’t make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.
Right; the proper conclusion of the argument is not to believe, but to try to believe. And if you buy the argument, then you should try very hard!
I agree with everything you’ve said here, including that in the two door situation the decision could go the other way if you had more information about the ratio of the utilities. Still, it seems to me that what I said is right in this way: if you are given no other information except as stated, you should choose the second door, because your best estimate of the ratios in question will be 1-1. But if you have some other evidence regarding the ratios, or if they are otherwise specified in the problem, your argument is correct.
If you read the article and the comments, you will see that no one really gave an answer.
As far as I can see, it absolutely requires either a bounded utility function (which Eliezer would consider scope insensitivity), or it requires accepting an indefinitely small probability of something extremely good (e.g. Pascal’s Wager).
If you believe that there is something with arbitrarily high utility, then by definition, you will accept an indefinitely small probability of it.
Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there’s nothing with a utility of 1000 or more. The problem comes from assuming that there is such a thing when there isn’t. I don’t see how this is scope insensitivity; it’s just how my preferences are.
Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That’s really what they’d prefer. Most of us just don’t have a utility function like that.
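The arithmetic behind that bound, assuming (as the comment seems to) that death is the zero point of the scale:

```python
# If the current situation is worth 10 and death is worth 0 (an assumed baseline),
# a gamble with a 99% chance of death is only worth taking when
#     0.01 * prize_utility > 10,  i.e.  prize_utility > 1000.
# Refusing every such gamble therefore means nothing is valued at 1000 or more.
current_utility = 10
p_survive = 0.01

threshold = current_utility / p_survive
print(threshold)  # 1000.0
```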
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
I agree with the basic argument, it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
If the world is doomed immediately unless I die for it, I have a 100% chance of dying immediately, so I might as well die to save the world. But if it’s a choice between living another 50 years and then the world ending, or dying right now and saving the world, and no one would know, I wouldn’t die to save the world. I’m too selfish for that.
But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
Then he should keep taking Omega’s offers, and any discomfort he has with that is faulty intuition, like the discomfort from choosing TORTURE over SPECKS.
I would die right now to prevent the world from ending 50 years from now. It’s actually even hard for me to imagine that you’re actually as selfish as you say. If the situation actually came up you might find out differently. But I guess it’s possible.
You might be right that Eliezer should simply accept the Lifespan dilemma as the necessary consequence of his utility function (at least as he defines it.)
It’s actually even hard for me to imagine that you’re actually as selfish as you say.
Really? Why? I can’t imagine myself dying to save the world; it’s completely implausible to me and I have a hard time understanding what it would feel like to be willing to do so. But people often die for much less.
It’s simple. The ‘selfish’ terminology is just obscuring matters. Just keep your feelings about one thing (your life) and substitute it with something else (someones life).
Unknowns’ utility function is of a type that assigns infinitely high utility to saving the world. Not saving the world is simply not an option. That’s what Unknowns wants.
Edit: Forget what I said about Unknowns previously.
The standard answer is “But what if the Muslims are right?” You can’t be both a Christian and a Muslim, and you lose by guessing wrong. We have no more reason to believe we’ll be rewarded for believing in God X than we have to believe we’ll be punished for believing in God X, as we would be if God Y were the correct one.
All this does is show that the dilemma must have a flaw somewhere, but it doesn’t explicitly show that flaw. The same problem occurs with finding the flaws in proposed perpetual motion machines: you know there must be a flaw somewhere, but it’s often tricky to find it.
I think the flaw in Pascal’s wager is allowing “Heaven” to have infinite utility. Unbounded utilities, fine; infinite utilities, no.
“The original problem with Pascal’s Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal’s original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God). ”
This is just wishful thinking, as I said in another reply. The probabilities do not balance.
What about “living forever”? According to Eliezer, this has infinite utility. I agree that if you assign it a finite utility, then the lifespan dilemma fails (at some point), and similarly, if you assign “heaven” a finite utility, then Pascal’s Wager will fail, if you make the utility of heaven low enough.
I used to know one, and have done a bit of reading about it. It struck me as a reversed-stupidity version of Christianity, though there were a few interesting memes in the literature.
Depending upon the type of Satanist, yes, they are often just people looking for a high “Boo Factor” (a term made up by many of the early followers of a musical genre called “Deathrock”; its more public name is now Goth, although that is like comparing a chainsaw to a kitchen paring knife: the “Goths” are the kitchen knife).
Many Satanists, especially those who hadn’t really read much of the published Satanic literature, would just make something up themselves, and it was almost always based in Christian motifs and archetypes. The two institutions who have publicly claimed the title of “Satanist” (the Church of Satan and the Temple of Set) both reject any and all Christian theology, motifs, archetypes, symbolism and characters as being disingenuous and twisted versions of older, healthier god archetypes. (If you read Jung and Joseph Campbell, this is not uncommon: a rising religious paradigm often hijacks an older competing paradigm as its bad guys.)
As Phil has suggested, maybe a front page post will come in handy. It should be recognized that some Satanists happen to be very rational people. They are just using the symbolism to manipulate their environment (although most of the more mature ones have found more mature symbols with which to manipulate the environment and their peers and subordinates).
The types to which I was referring in my post were the Christian Satanists (people who are worshiping the Christian version of Satan), which is just as bad as worshiping the Christian God. Both the Christian God and the Christian Satan are required for that mythology to be complete.
Well, they both (according to Christian Myth) are truly bad characters.
It is unfortunate for God that Satan (Lucifer) had such a reasonable request “Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while.”
Basically, Lucifer’s crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)
It was this act upon which Modern Satanists seized to create a new mythology for Satanism, where it was reason rebelling against an order that was corrupt and tyrannical.
It is unfortunate for God that Satan (Lucifer) had such a reasonable request “Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while.” Basically, Lucifer’s crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)
To be fair this stuff isn’t Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It’s just religious fiction.
...
Unless someone has declared John Milton a prophet and possessor of divine revelation. Which would be hilarious.
It isn’t stuff that made it into the modern canon, but in the early Christian Church, myths of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate them into various Christian sects.
To be fair this stuff isn’t Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It’s just religious fiction.
Well, they both (according to Christian Myth) are truly bad characters.
The Christian Myth includes a quite specific definition of bad, so according to the Christian Myth only one of them is bad. Do you mean that, according to you, the characters as described in the Christian Myth were both truly bad?
Basically, Lucifer’s crime was one that is only a crime in a state where the King is seen as having divine authority to rule
That description loses something when the ruler is, in fact, God. One of the bad things about claiming that the king is king because God says so is that it is not the case that any god said any such thing. When the ruler is God, then yes, God does say so. The objection that remains is “Who gives a @$@# what God says?” I agree with what (I think) you are saying about the implications of claims of authority, but I don’t like the loaded language. It confuses the issue, and, well, I would say that technically that (counterfactual) God does have the divine authority to rule. It’s just that divine authority doesn’t count for squat in my book.
There are Christian Satanists? Correct me if I’m wrong, but I thought Satanism was a religion founded around Rand-like rational selfishness, and explicitly denied any supernatural entities.
Yes, they are “Christian” in the sense that all of the mythology and practices for their worship of Satan are derived from Christianity, and they still believe in a Christian God.
It is just that these people believe that they are defying and opposing the Christian God (fighting for the other team). They still believe in this God; they just no longer have it as the object of their worship and devotion.
This is also the more traditional form of Satanist in our society, and one which the more modern Satanist tends to oppose. The Modern Satanist is a self-worshiping atheist and, as has been pointed out, tends to place everything in the context of self-interest. It is a highly utilitarian philosophy, but it is often marred in actual practice by ignorant fools who don’t seem to understand the difference between just acting like a selfish dick and acting out of self-interest (doing things which improve one’s condition in life, not things which worsen it).
There’s an Ayn Rand quote I don’t have handy to the effect that if the virtues needed for life are considered evil, people are apt to embrace actual evils in response.
You would be surprised at how rational the real Satanists (and their various offshoots and schisms) are (as the non-Christian-based Satanist is an atheist).
In fact, the very first Schism of the Church of Satan gave birth to the Temple of Set (Founded by the then head of the Army’s Psychological Warfare Division), which was described as a “Hyper-Rational Belief System” (Although in reality it still had some rather unfortunately insane beliefs among its constituents). The Founder was very rational though. He even had quite a bit of science behind his position… It’s just that his job caused him to be a rather creepy and scary guy.
It may just be me, but why do you need to find someone to follow?
I have always found that forging my own path through the wilderness to be far more enjoyable and yield far greater rewards that following a path, no matter how small or large that path may be.
Well, one reason why I feel that I need someone to follow is… severe underconfidence in my ability to make decisions on my own. I’m still working on that. Choosing a person to follow, and then following them, feels a whole lot easier than forging my own path.
I should mention again that I’m not actually “following” Eliezer in the traditional sense. I used his value system to bootstrap my own value system, greatly simplifying the process of recovering from christianity. But now that I’ve mostly finished with that (or maybe I’m still far from finished?), I am, in fact, starting to think independently. It’s taking a long time for me to do this, but I am constantly looking for things that I’m doing or believing just because someone else told me to, and then reconsidering whether these things are a good idea, according to my current values and beliefs. And yes, there are some things I disagree with Eliezer about (the “true ending” to TWC, for example), and things that I disagree with SIAI about (“we’re the only place worth donating to”, for example). I’ll probably start writing more about this, now that I’m starting to get over my irrational fear of posting comments here.
Though part of me is still worried about making SIAI look bad. And I’m still worried that the stuff I’ve already posted may end up harming SIAI’s mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn’t actually something I was told by anyone from SIAI, I think it was a bad habit I brought with me from christianity.
If by ‘dark arts’ you mean ‘non-rational methods of persuasion’, such things may be ethically questionable (in general; not volunteering information you aren’t obligated to provide almost certainly isn’t) but are not (categorically) wrong. Rational agents win.
I like the way steven0461 put it:
I think I agree with both khafra and Nick.
I like this quote, and I’ve used it before in conversations with other people.
I think it’s worth distinguishing between “underconfidence” and “lack of confidence”—the former implies the latter (although not absolutely), but under some circumstances you are justified in questioning your competence. Either way, it sounds like you’re working on both ends of that balance, which is good.
I think this is good thinking.
good point about underconfidence versus lack of confidence, thanks
That puts it into an understandable context… I can’t quite relate to having had to shake off Christian beliefs. I was raised by a tremendously religious mother, but around the age of 6 I began to question her beliefs, and by 14 I was sure that she was stark raving mad to believe what she did. So, I managed to keep from being brainwashed to begin with.
I’ve seen the results of people who have been brainwashed and who have not managed to break completely free from their old beliefs. Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)… So, what you are doing is probably best for the time being, until you learn the tools needed to step off into the wilderness by yourself.
In my case, I knew pretty much from the beginning that something was seriously wrong. But since every single person I had ever met was a christian (with a couple of exceptions I didn’t realize until later), I assumed that the problem was with me. The most obvious problem, at least for me, was that none of the so-called christians was able to clearly explain what a christian is, and what it is that I need to do in order to not go to hell. And the people who came closest to being able to give a clear explanation all differed from each other, and the answer changed if I asked different questions. So I guess I was… partly brainwashed. I knew that there was something really important I was supposed to do, and that people’s souls were at stake (a matter of infinite utility/anti-utility!), but no one was able to clearly explain what it was that I was supposed to do. But they expected me to do it anyway, and made it sound like there was something wrong with me for not instinctively knowing what it was that I was supposed to do. There’s lots more I could complain about, but I guess I had better stop now.
So it was pretty obvious that I wasn’t going to be able to save anyone’s soul by converting them to christianity by talking to them. And I was also similarly unqualified for most of the other things that christians are supposed to do. But there was still one thing I saw that I could do: live as cheaply as possible, and donate as much money as possible to the church so that the people who claim to actually know what they’re doing can just get on with doing it. And just be generally helpful when there was some simple everyday thing I could be helpful with.
Anyway, it wasn’t until I went to university that I actually met any atheists who openly admitted to being atheists. Before then, I had heard that there was such a thing as an atheist, and that these were the people whose souls we were supposed to save by converting them to christianity, but Pascal’s Wager prevented me from seriously considering becoming an atheist myself. Even if you assigned a really tiny probability to christianity being true, converting to atheism seemed like an action with an expected utility of negative infinity. But then I overheard a conversation in the Computer Science students’ lounge. That-guy-who-isn’t-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism. An atheistic belief system, based entirely on a rational, scientific worldview, to which Pascal’s Wager could be applied. (there is an unknown probability that this universe can support an infinite amount of computation, therefore there is an unknown probability that actions can have infinite positive or negative utility.) I immediately realized that I wanted to convert to this belief system. But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism. And since then I haven’t had any desire at all to switch back to christianity. Though I was afraid that, because of my inability to stand up to authority figures, someone might end up convincing me to convert back to christianity against my will. Even now, years later, in scary situations when dealing with an authority figure who is a christian, part of me still sometimes thinks “OMG maybe I really was wrong about all this!”
Anyway, I keep noticing bad habits from christianity that I’m still doing, and I’m working on fixing them. Also, I might be oversensitive to noticing things that are similar between christianity and Singularitarianism. For example, the expected utility of “converting” someone to Singularitarianism. Though in this case you’re not guaranteeing that one soul is saved, you’re slightly increasing the probability that everyone gets “saved”, because there is now one more person helping the effort to achieve a positive Singularity.
Oh, and now, after reading LW, I realize what’s wrong with Pascal’s Wager, and even if I found out for certain that this universe isn’t capable of supporting an infinite amount of computation, I still wouldn’t be tempted to convert back to christianity.
Random trivia: I sometimes have dreams where a demon, or some entirely natural thing that for some reason is trying to look like a demon, is trying to trick or scare me into converting back to christianity. And then I discover that the “demon” was somehow sent by someone I know, and end up not falling for it. I find this amusingly ironic.
As usual, there’s lots more I could write about, but I guess I had better stop writing for now.
Here’s a quote from an old revision of Wikipedia’s entry on The True Believer that may be relevant here:
A core principle in the book is Hoffer’s insight that mass movements are interchangeable; he notes fanatical Nazis later becoming fanatical Communists, fanatical Communists later becoming fanatical anti-Communists, and Saul, persecutor of Christians, becoming Paul, a fanatical Christian. For the true believer the substance of the mass movement isn’t so important as that he or she is part of that movement.
And from the current revision of the same article:
Hoffer quotes extensively from leaders of the Nazi and communist parties in the early part of the 20th Century, to demonstrate, among other things, that they were competing for adherents from the same pool of people predisposed to support mass movements. Despite the two parties’ fierce antagonism, they were more likely to gain recruits from their opposing party than from moderates with no affiliation to either.
Can’t recommend this book enough, by the way.
Thanks for the link, and the summary. Somehow I don’t find that at all surprising… but I still haven’t found any other cause that I consider worth converting to.
At the time I converted, Singularitarianism was nowhere near a mass movement. It consisted almost entirely of the few of us in the SL4 mailing list. But maybe the size of the movement doesn’t actually matter.
And it’s not “being part of a movement” that I value, it’s actually accomplishing something important. There is a difference between a general pool of people who want to be fanatical about a cause, just for the emotional high, and the people who are seriously dedicated to the cause itself, even if the emotions they get from their involvement are mostly negative. This second group is capable of seriously examining their own beliefs, and if they realize that they were wrong, they will change their beliefs. Though as you just explained, the first group is also capable of changing their minds, but only if they have another group to switch to, and they do this mostly for social reasons.
Seriously though, the emotions I had towards christianity were mostly negative. I just didn’t fit in with the other christians. Or with anyone else, for that matter. And when I converted to Singularitarianism, I didn’t exactly get a warm welcome. And when I converted, I earned the disapproval of all the christians I know. Which is pretty much everyone I have ever met in person. I still have not met any Singularitarian, or even any transhumanist, in person. And I’ve only met a few atheists. I didn’t even have much online interaction with other transhumanists or Singularitarians until very recently. I tried to hang out in the SL4 chatroom a few years ago, but they were openly hostile to the way I treated Singularitarianism as another belief system to convert to, another group to be part of, rather than… whatever it is that they thought they were doing instead. And they didn’t seem to have a high opinion of social interaction in general. Or maybe I’m misremembering this.
Anyway, I spent my first approximately 7 years as a Singularitarian in almost complete isolation. I was afraid to request social interaction for the sake of social interaction, because somehow I got the idea that every other Singularitarian was so totally focused on the mission that they didn’t have any time at all to spare to help me feel less lonely, and so I should either just put up with the loneliness or deal with it on my own, without bothering any of the other Singularitarians for help. The occasional attempt I made to contact some of the other Singularitarians only further confirmed this theory. I chose the option of just putting up with the loneliness. That may have been a bad decision.
And just a few weeks ago, I found out that I’m “a valued donor” to SIAI. Though I’m still not sure what this means. And I found out that other Singularitarians do, in fact, socialize just for the sake of socializing. And I found out that most of them spend several hours a day “goofing off”. And that they spend a significant percentage of their budget on luxuries that technically they could do without, without having a significant effect on their productivity. And that most of them live generally happy, productive, and satisfying lives. And that it was silly of me to feel guilty for every second and every penny that I wasted on anything that wasn’t optimally useful for the mission. In addition to the usual reasons why feeling guilty is counterproductive.
Anyway, things are finally starting to get better now, and I don’t think I’ll accomplish anything by complaining more.
Also, most of this was probably my own fault. It turns out that everyone living at the SIAI house was totally unaware of my situation. And this is mostly my fault, because I was deliberately avoiding contacting them, because I was afraid to waste their time. And wasting the time of someone who’s trying to save the universe is a big no-no. I was also afraid that if I tried to contact them, then they would ask me to do things that I wasn’t actually able to do, but wouldn’t know for sure that I wasn’t able to do, and would try anyway because I felt like giving up wasn’t an option. And it turns out this is exactly what happened. A few months ago I contacted Michael Vassar, and he started giving me things to help with. I made a terrible mess out of trying to arrange the flights for the speakers at the 2009 Singularity Summit. And then I went back to avoiding any contact with SIAI. Until Adelene Dawner talked to them for me, without me asking her to. Thanks Ade :)
Um… one other thing I just realized… well, actually Adelene Dawner just mentioned it in Wave, where I was writing a draft of this post… the reason why I haven’t been trying to socialize with people other than Singularitarians is… I was afraid that anyone who isn’t a Singularitarian would just write off my fanaticism as general insanity, and therefore any attempt to socialize with non-Singularitarians would just end up making the Singularitarian movement look bad… I already wrote about how this is a bad habit I carried with me from christianity. It’s strange that I hadn’t actually spent much time thinking about this, I just somehow wrote it off as not an option, to try to socialize with non-Singularitarians, and ended up just not thinking about it after that. I still made a few careful attempts at socializing with non-Singularitarians, but the results of these experiments only confirmed my suspicions.
Oh, and another thing I just realized: Confirmation Bias. These experiments were mostly invalid, because they were set up to detect confirming evidence of my suspicions, but not set up to be able to falsify them. oops. I made the same mistake with my suspicions that normal people wouldn’t be able to accept my fanatical Singularitarianism, my suspicions that the other Singularitarians are all so totally focused on the mission that they don’t have any time at all for socializing, and also my suspicions that my parents wouldn’t be able to accept my atheism. yeah, um, oops. So I guess it would be really silly of me to continue blaming this situation on other people. Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.
There’s probably more I could say, but I’ll stop writing now.
um… after reviewing this comment, I realize that the stuff I wrote here doesn’t actually count as evidence that I don’t have True Believer Syndrome. Or at least not conclusive evidence.
oh, and did I mention yet that I also seem to have some form of Saviour Complex? Of course I don’t actually believe that I’m saving the world through my own actions, but I seem to be assigning at least some probability that my actions may end up making the difference between whether our efforts to achieve a positive Singularity succeed or fail.
but… if I didn’t believe this, then I wouldn’t bother donating, would I?
Do other people manage to believe that their actions might result in making the difference between whether the world is saved or not, without it becoming a Saviour Complex?
PeerInfinity, I don’t know you personally and can’t tell whether you have True Believer Syndrome. I’m very sorry for provoking so many painful thoughts… Still. Hoffer claims that the syndrome stems from lack of self-esteem. Judging from what you wrote, I’d advise you to value yourself more for yourself, not only for the faraway goals that you may someday help fulfill.
no need to apologise, and thanks for pointing out this potential problem.
(random trivia: I misread your comment three times, thinking it said “I know you personally can’t tell whether you have True Believe Syndrome”)
as for the painful thoughts… It was a relief to finally get them written down, and posted, and sanity-checked. I made a couple attempts before to write this stuff down, but it sounded way too angry, and I didn’t dare post it. And it turns out that the problem was mostly my fault after all.
oh, and yeah, I am already well aware that I have dangerously low self-esteem. but if I try to ignore these faraway goals, then I have trouble seeing myself as anything more valuable than “just another person”. Actually I often have trouble even recognizing that I qualify as a person...
also, an obvious question: are we sure that True Believer Syndrome is a bad thing? or that a Saviour Complex is a bad thing?
random trivia: now that I’ve been using the City of Lights technique for so long, I have trouble remembering not to use a plural first-person pronoun when I’m talking about introspective stuff… I caught myself doing that again as I checked over this comment.
I’m pretty sure of that. Not because of what it does to your goals, but because of what it does to you.
Please forgive my ignorance, or possibly my deliberate forgetfulness, but… can you please remind me what you think it does to me?
Several comments above you wrote that both Christianity and Singularitarianism drained you of the resources you could’ve spent on having fun. As far as I can understand, neither ideology gave you anything back.
At first I misread what you said and was about to reply with this paragraph:
oh. that’s mostly because I was Doing It Wrong. I was pushing myself harder than I could actually sustain in the long term, and that ended up being counterproductive to singularitarianism. (and also counterproductive to fun, though I still don’t consider fun to be of any significant inherent value, compared to the value of the mission)
But then I noticed that when I read your comment, I was automatically adding the words “and this would be bad for the mission”, which probably isn’t what you meant.
and I might as well admit that as I was thinking about what else to say in reply, everything I thought of was phrased in terms of what mattered to singularitarianism. I was going to resist the suggestion that I should be paying any attention to what the ideology could give back. I was going to resist the suggestion that fun had any use other than helping me stay focused on the mission, if used in moderation.
And I’m still undecided about whether this reaction is a bad thing, because I’m still measuring good and bad according to singularitarian values, not according to selfish values. And I would still resist any attempt to change my values to anything that might conflict with singularitarianism, even in a small way.
ugh… even if everyone from SIAI told me to stop taking this so seriously, I would probably still resist. And I might even consider this as a reason to doubt how seriously they are taking the mission.
ok, so I guess it would be silly of me to claim that I don’t have a true believer’s complex, or a saviour complex, or just fanaticism in general.
though I still need to taboo the word “fanaticism”… I’m still undecided about whether I’m using it as if it means “so sincerely dedicated that the dedication is counterproductive”, or “so sincerely dedicated that anyone who hasn’t tried to hack their own mind into being completely selfless would say that I’m taking this way too far”.
By the first definition, I would of course consider my fanaticism to be counterproductive and harmful. But I would naturally treat the second definition as an example of other people not taking the mission seriously enough.
And now I’m worrying that all this stuff I’m saying is actually not true, and is really just an attempt to signal how serious and dedicated I am to the mission. Actually, yeah, I would be really surprised if there wasn’t any empty signalling going on, and if the signalling wasn’t causing my explanations to be inaccurate.
In other news, I’m really tired at the moment, but I’m pushing myself to type this anyway, because it feels really important and urgent.
I think there was more I wanted to say, but whatever it was, I forget it now, and this comment is already long, and I’m tired, so I’ll stop writing for now.
Say it was the case that promoting a singularity was a bad idea and that, in particular, SIAI did more harm than good. If someone had compelling evidence of this and presented it to you, would you be capable of altering your beliefs and behavior in accordance with this new data? I take it the True Believer would not, and I think we can all agree that would be a bad thing.
ah, but Singularitarianism is different: a True Singularitarian is supposed to be able to update on this evidence, even if it means abandoning SIAI entirely.
Presented with evidence of the counterproductivity of SIAI, a True Singularitarian would then try to find a better way to help the effort to achieve a positive Singularity, even if it meant creating an entirely new group for this purpose.
Note that “Singularitarian” is not the same as “SIAI Supporter” or “Eliezer Follower”.
actually, I think the same applies to a True Christian. If a True Christian finds out that the church isn’t doing its job properly, and the church refuses to correct what’s wrong, then the True Christian is supposed to start their own church. And this actually happened many times through history...
Maybe instead of imagining your actions as having some probability of ‘making the difference,’ try thinking of them as slightly boosting the probability of a positive singularity?
At any rate, the survival of someone wheeled in through the doors of a hospital might depend on the EMTs, the nurses, the surgeons, the lab techs, the pharmacists, the janitors and so on and so on. I’d say they’re all entitled to take a little credit without being accused of having a savior complex!
um… can you please explain what the difference is, between “having some probability X of making the difference between success and failure, of achieving a positive Singularity” and “boosting the probability of a positive Singularity, by some amount Y”? To me, these two statements seem logically equivalent. Though I guess they focus on different details...
oh, I just noticed one obvious difference: X is not equal to Y
Yeah, what I wrote was intended as an alternative way of thinking about the situation that might make you feel better, rather than an accusation of wrongness.
I guess I’ll still need to think about this some more...
some random observations:
if X > 0, then Y > 0
if Y > 0, then X > 0
I was about to question whether maybe X = Y after all, but further thought reveals that X isn’t clearly defined, and I really would be better off focusing on Y, because Y is more clearly defined than X, and thinking about Y seems to trigger less panic than thinking about X.
So, yeah, thanks again for your comment. It was helpful. :)
No problem!
Nitpick for clarity’s sake: I’ve seen no evidence that this was deliberate in the sense implied, and I would expect to have seen such evidence if it did exist. It may have been deliberate or quasi-deliberate for some other reason, such as social anxiety (which I have seen evidence of).
er, yes, that’s what I meant. sorry for the confusion. I wasn’t deliberately trying to prevent anyone from helping, I was deliberately trying to avoid wasting their time, by having no contact with them, which prevented them from being able to help.
I’ve heard from an ex-fundamentalist that for some people, conversion is a high in itself (I don’t know if this is mostly true for Christians, or applies to movements in general). In any case, he said the high lasts for about two years and then wears off, so that those people then convert to something else.
Huh. I knew this was true of me, but didn’t realize it was common. I went from being an extreme Christian at 11 to an extreme utilitarian by about 14 (despite not knowing people who were extreme about either thing).
PeerInfinity, I’m rather struck by a number of similarities between us:
I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won’t materialize.
As you might guess from my user name, I’m also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer’s change of values but who feels that SIAI’s values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
I share the seriousness with which you regard Pascal’s wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.
I’m not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.
Hi Utilitarian!
um… are you the same guy who wrote those essays at utilitarian-essays.com? If you are, we have already talked about these topics before. I’m the same Peer Infinity who wrote that “interesting contribution” on Singularitarianism in that essay about Pascal’s Wager, the one that tried to compare the different religions to examine which of them would be the best to Wager on.
And, um… I used to have some really nasty nightmares about going to the christian hell. But then, surprisingly, these nightmares somehow got replaced with nightmares of a hell caused by an Evil AI. And then these nightmares somehow got replaced with nightmares about the other hells that modal realism says must already exist in other universes.
I totally agree with you that the suffering of humans is massively outweighed by the suffering of other animals, and possibly insects, by a few orders of magnitude; I forget how many exactly, but I think it was less than 10 orders of magnitude. But I also believe that the amount of positive utility that could be achieved through a positive Singularity is… I think it was about 35 orders of magnitude more than all of the positive or negative utility that has been experienced so far in the entire history of Earth. But I don’t remember the details of the math. For a few years now I’ve been planning to write about that, but somehow never got around to it. Well, actually, I did make one feeble attempt to do the math, but that post didn’t actually make any attempt to estimate how many orders of magnitude were involved.
Oh, and I totally share your concerns about the possible implications of CEV. Specifically, that it might end up generating so much negative utility that it outweighs the positive utility, which would mean that a universe completely empty of life would be preferable.
Oh, and I know one other person who shares your belief that promoting good memes like concern about wild animals would be more cost effective than donating to Friendly AI research. He goes by the name MetaFire Horsley in Second Life, and by the name MetaHorse in Google Wave. I have spent lots of time discussing this exact topic with him. I agree that spreading good memes is totally a good idea, but I remain skeptical about how much leverage we could get out of this plan, and I suspect that donating to Friendly AI research would be a lot more leveraged. But it’s still totally a good idea to spread positive memes in your spare time, whenever you’re in a situation that gives you an opportunity to do some positive meme spreading. MetaHorse is currently working on some sci-fi stories that he hopes will be useful for spreading these positive memes. He writes these stories in Google Wave, which means that you can see him writing the stories in real-time, and give instant feedback. I really think it would be a good idea for you to get in contact with him. If you don’t already have a Google Wave account, please send me your gmail address in a private email, and I’ll send you a Wave invite.
Oh, and I’m still really confused about how CEV is supposed to work. It seems like it’s supposed to take into account our beliefs that the suffering of animals, or any sentient creatures, is unacceptable, and consider that as a source of decoherence if someone else advocates an action that would result in suffering. And apparently it’s not supposed to just average out everyone’s preferences, it’s supposed to… I don’t know what, exactly, but it’s supposed to have the same or better results than if we spent lots and lots of time talking with the people who would advocate suffering, and we all learned more, were smarter, and “grew up further together”, whatever that means, and other stuff. And that sounds nice in theory, but I’m still waiting for a more detailed specification. It’s been a few years since the original CEV document was published, and there haven’t been any updates at all. Well, other than Eliezer’s posts to LW.
Oh, and I read all of your essays (yes, all of them, though I only skimmed that really huge one that listed lots of numbers for the amount of suffering of animals) a few months ago, and we chatted about them briefly. Though that was long enough ago that it would probably be a good idea for me to review them.
Anyway, um… keep up the good work, I guess, and thanks for the feedback. :)
Bostrom’s estimate in “Astronomical Waste” is “10^38 human lives [...] lost every century that colonization of our local supercluster is delayed,” given various assumptions. Of course, there’s reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.
Still, I’m concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that “pristine wilderness” is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)
As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It’s not enough to “be smarter” and “more the people we wished we were”; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I’m curious to see an argument for this.
Who are you thinking of? (Eliezer is frequently accused of this, but has disclaimed it. Note the distinction between total convergence, and sufficient coherence for an FAI to act on.)
(edit: The version of utilitarianism I’m talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)
I totally agree!!!
Astronomical waste is bad! (or at least, severely suboptimal)
Wild-animal suffering is bad! (no, there is nothing “sacred” or “beautiful” about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)
Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, “This way is more fun”, or “This way would generate a wider variety of possible outcomes” are not acceptable answers, at least not according to utilitarianism.)
Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
I also agree with your concerns about CEV.
Though of course we’re talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can’t explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it’s a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn’t even agree with us anymore, even though some of his previous writing implied that he did before. (I still can’t get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, and a few other minor aesthetic preferences. Yeah, I’m so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don’t see how this could be possible, but maybe that’s just a result of my own ignorance. And then there’s the extreme difficulty of actually implementing CEV...
And no, I still don’t claim to have a better plan. And I’m not at all comfortable with advocating the creation of a purely Utilitarian AI.
Your plan of trying to spread good memes before the CEV extrapolates everyone’s volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can’t incorporate this process into CEV, then any other possible strategy must involve cheating somehow.
Oh, I had another conversation recently on the topic of whether it’s possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think the conversation was inconclusive. The other person believed that… er, wait, I think we actually agreed on the conclusion, but didn’t notice at the time. The conclusion was that if an agent’s core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There’s also the option of trading utilons with the other agent, but that’s not the same as changing the other agent’s values.
Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)
Anyway, if this is the case, then the CEV algorithm will end up resulting in the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.
Oh, and I should point out that the utilitronium shockwave doesn’t actually require the murder of everyone now living. Surely even us hardcore utilitarians should be able to afford to leave one planet’s worth of computronium for the people now living. Or one solar system’s worth. Or one galaxy’s worth. It’s a big universe, after all.
Oh, and if it turns out that some people’s value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend… then maybe we could even afford to leave their brains unmodified. Just so long as they don’t force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they’re allowed to create… is kinda complicated and controversial.
Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world’s population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it’s a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It’s a big universe, plenty of room for everyone. Just so long as they don’t force any other mind to suffer.
Oh, and maybe there should also be rules against creating a mind that’s forced to be wireheaded. There will be some complex and controversial issues involved in the design of the optimally efficient form of utilitronium that doesn’t involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, anyone who wants to retreat entirely into solipsism, let them do their own experiments with what experiences generate the most utility. There’s no need to fill the whole universe with boring, uniform bricks of utilitronium that contain minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, why not let actual people do the research? And why not let them do the research on their own minds?
I’ve been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I’m no good at writing. Actually, that story I just linked to is an example of this scenario going bad...
Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I’m still not confident enough about this scenario to advocate it too seriously.
Your comments are tending to be a bit too long.
Thanks for the feedback. I kinda suspected that my comments were too long.
So, um… what would you prefer for me to do instead?
split them into multiple comments?
post them somewhere else (the Transhumanist Wiki?) and link to them from here?
refrain from posting the long comments entirely?
find some way to cut them down?
stick to a single topic per comment, and create multiple comments if I want to discuss multiple topics?
wait longer between posting these comments?
something else I haven’t thought of?
Yes, to various extents. (I should have been more helpful in the grandparent comment.)
I think the main problem is you seem to have a “stream of consciousness” style of writing. If you add an additional step of editing after (I’m just assuming you’re not doing much of this now), then you can figure out which points are most important to make and put them succinctly.
The advantage of this, from a utilitarian point of view, is that you can spend less time editing than it will take any particular person to otherwise figure out what you’re trying to say, and thus cause a net benefit to lots of people.
(ETA: note that the great-grandparent comment seems less subject to this particular criticism than some others)
Thanks again for the feedback.
As I was writing the following points, I noticed that I was just making excuses. But instead of deleting them, I left them in, but commented on them, because they felt important and relevant.
I was already aware of the utilitarian argument that it’s worth 1 minute of effort at rewriting in order to save 60 people one second each at reading, and I am making at least some attempt to do that. (correction: no, I didn’t actually do the math. I should at least try to do the math; a rough break-even sketch follows after these points.)
I already spend lots of time reviewing my comments before I post them. I don’t post them until I scan through them once without noticing anything wrong. (correction: no, lately I’ve been posting them before I complete a full scan without finding any new issues, and I’ve been fixing some things by editing the comments after posting them. I should be more strict about following this rule. and as I mention below, I should add new issues to the list of things to scan for.)
Normally I have the opposite problem, spending way too much time reviewing what I wrote, which ends up resulting in other important things not getting said, because I’m spending too much time reviewing and never get around to writing the next thing. (correction: this will probably become less of an issue now that I’ve finished writing all of these “about me” comments.)
It usually feels like there’s a sense of urgency, that if I take too long to write a reply, then everyone will have moved on to other topics, and no one will end up reading my comment. (correction: sometimes there is a reason to post stuff asap, other times there isn’t. I need to learn how to tell the difference.)
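The break-even sketch mentioned in the first point above, with the numbers from that argument treated as assumptions rather than measurements:

```python
# Break-even check for "1 minute of editing vs. 60 readers saving 1 second each".
# These numbers are the ones assumed in the argument, not measurements.

editing_cost_seconds = 60        # one minute of the writer's time
readers = 60                     # assumed number of readers
seconds_saved_per_reader = 1     # assumed time saved per reader

reader_seconds_saved = readers * seconds_saved_per_reader
print(reader_seconds_saved)                          # 60: exactly break-even
print(reader_seconds_saved > editing_cost_seconds)   # False: editing only pays off
                                                     # if more readers, or more time
                                                     # saved per reader, is assumed
```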
But these are just excuses. If I’m going to continue posting comments, then I had better learn how to improve the quality of my comments.
The stream-of-consciousness style comments were something I wanted feedback on, and now I got the feedback, thanks. The feedback says that stream-of-consciousness-style comments are not acceptable. I’ll try to stop doing that.
And that means that in addition to the issues I’m already scanning for, I’ll also scan for… the specific reasons why stream-of-consciousness-style writing is annoying to read:
I need to present the points in the order that would make the most sense to the reader, not in just whatever order I happen to think of them in.
I need to erase points that I discover make no sense, rather than leaving them in just because it feels like there may be some reason to document the mistake.
I need to cut out off-topic side-comments entirely
I need to stop using phrases like “oh, by the way”
I need to cut out any meta-comments from inside my comments, unless for some reason they really are necessary
I especially need to cut out any comments about things like “my brain’s excuse-generator”. I need to remove the offending text, rather than explaining what caused me to write it. Unless it happens to be specifically on-topic, like in this comment.
probably some more things I haven’t thought of.
But so far that just answers what to do about the stream-of-consciousness-style writing. It doesn’t answer what to do about the excessive length of the comments. This comment is also really long, but I’m posting it anyway, because it feels necessary.
Actually, I should ask what everyone else does. Or maybe I should ask just what you, in particular do, Thom. Though this is already far off the original post’s topic...
This is probably too late, but I really love your writing style, especially your stream of consciousness.
The “excuse generator” points at something I suspect is a very fast and active part of a lot of people’s minds, but it’s probably worth a post or at least an extended open thread comment of its own.
As far as I can tell, I write so as to make things clear to the state of mind I was in just before I thought of something I’m trying to get across.
Thanks for the feedback, that last sentence sounds like a good idea, I’ll go ahead and try it.
There have probably already been lots of posts about the “excuse generator”, though not specifically by that name. For example, Eliezer’s post Against Devil’s Advocacy. Though that’s not quite the same thing.
And then there’s all the posts on rationalization.
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, “Bambi Lovers versus Tree Huggers: A Critique of Rolston’s Environmental Ethics”: “Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.”
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
That is inconsistent. Utilitarianism has to assume there’s a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is not a fact about the good. For example, for an emotivist, “You should not have stolen the bread.” expresses the exact same factual content as “You stole the bread.” (On this view, presumably, indicating “mere disapproval” doesn’t count as factual information).
Sure. Then what I meant was that I’m an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don’t think utilitarianism is “true” (I don’t know what that could possibly mean), but I want to see it carried out.
checking out the wikipedia article… hmm… I think I agree with emotivism too, to some degree. I already have a habit of saying “but that’s just my opinion”, and being uncertain enough about the validity (validity according to what?) of my preferences, to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the “but that’s just my opinion”. That could be useful.
good point. and yeah, that’s one of the main issues that’s causing me to doubt whether SIAI has any hope of achieving their mission.
good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW.
oh, and “quixotic”, that’s the word I was looking for, thanks :)
heh, yeah, that “significantly less than 50%” was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly.
good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as “mentally healthy” is… complex and controversial.
When discussing utilitarianism it is important to indicate whether you’re talking about preference utilitarianism or hedonistic utilitarianism, especially in this context.
Right, sorry. I’m referring to total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain.
A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.
Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one’s own emotions, rather than arbitrary external events.
You could also almost certainly convert a considerable percentage of the planet’s mass to computronium without impacting the planet’s ability to support life. A planet isn’t a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.
You need the mass of the core to maintain the gravity. What sort of physics do you have in mind?
If computronium is of density equal to or greater than iron, physics wouldn’t need to be changed. Remove the core, replace it with a roughly spherical wad of perfected brain-matter, plus whatever structural supports are necessary to keep the crust in place, and Newton’s Shell Theorem says gravity would be the same. Add some electromagnets for the poles, and channel waste heat from the mechanisms inside to simulate volcanism where appropriate.
Even if computronium turns out to have lower density than iron, and for whatever reason it’s unacceptable to reduce surface gravity or transplant the luddites to an otherwise earthlike planet of correspondingly greater diameter, some of the core’s mass could be converted and the remainder compressed into a black hole. Again, shell theorem means there’s no difference from the outside.
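Here’s a quick back-of-the-envelope check that surface gravity only depends on the total enclosed mass; the mass and radius figures are rough textbook values I’m assuming for illustration, nothing precise:

```python
# Rough check of the shell-theorem point: surface gravity depends only on the
# total mass enclosed within the surface, not on how that mass is arranged,
# as long as the arrangement stays spherically symmetric.
# Figures below are approximate textbook values, assumed for illustration.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24      # total mass of Earth, kg
M_core = 1.9e24        # approximate mass of the core, kg
R_earth = 6.371e6      # Earth's radius, m

def surface_gravity(enclosed_mass, radius):
    """Gravitational acceleration at the surface of a spherically symmetric body."""
    return G * enclosed_mass / radius**2

g_before = surface_gravity(M_earth, R_earth)

# Swap the core for an equal mass of computronium (or converted matter plus a
# compensating black hole), still spherically symmetric, still inside the crust:
g_after = surface_gravity(M_earth - M_core + M_core, R_earth)

print(g_before, g_after)   # both ~9.8 m/s^2: identical while the enclosed mass is unchanged
```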
good point, thanks for mentioning that.
heh, that’s actually what I meant by leaving the planet “mostly intact”, but I should have made that clearer.
Guess there’s a use for that-guy after all!
A couple of points:
I could not tell from your post whether you understood that Pascal’s Wager is a flawed argument for believing in ANY belief system. You do understand, don’t you, that Pascal’s Wager is horribly flawed as an argument for believing in anything?
Also, as cousin_it seems to be implying (and I would suspect as well), you seem to be exhibiting signs of the True Believer complex.
This is what I alluded to when I discussed friends of mine who would swing back and forth between being Born-Again Christians and Satanists. Don’t make the same mistake with a belief in the Singularity. One needn’t have “Faith” in the Singularity as one would in God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.
Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.
In chapter five of Jaynes, “Queer Uses for Probability Theory,” he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, the corresponding tail probability isn’t the probability we should assign to the hypothesis that she’s actually a telepath, because there are many simpler hypotheses that fit the data (for instance, various forms of cheating).
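Here’s a rough sketch of the comparison Jaynes is making; the 25.8-sigma figure is his, while the priors are made-up numbers purely for illustration:

```python
from math import erfc, sqrt

# P(data | pure chance): probability of a deviation of at least 25.8 sigma by luck alone.
p_data_given_chance = erfc(25.8 / sqrt(2)) / 2   # astronomically small, on the order of 1e-146

# Made-up priors for the competing explanations of the same data:
prior_telepathy = 1e-20   # genuine telepathy
prior_cheating = 1e-4     # some form of cheating or experimental error

# Both "telepathy" and "cheating" predict the observed hit rate about equally well,
# so the posterior odds are roughly the ratio of the priors,
# not anything like 1 - p_data_given_chance:
posterior_odds_telepathy_vs_cheating = prior_telepathy / prior_cheating
print(p_data_given_chance, posterior_odds_telepathy_vs_cheating)
```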
This example is instructive when using Pascal’s Wager to maximize expected utility. Pascal’s Wager is a losing bet for a Christian, because even though expecting positive infinite utility with infinitesimal probability seems like a good bet, there are many likelier ways of getting negative infinite utility from that choice. Doing what you can to promote a friendly singularity can still be called “Pascal’s Wager” because it’s betting on a very good outcome with a low probability, but the low probability is so many orders of magnitude higher than Christianity’s that it’s actually a rather good bet.
Obviously, you don’t want to let wishful thinking guide your epistemology, but I don’t think that’s what PI’s talking about.
Pascal’s wager is not such a horribly flawed argument. In fact, I wager we can’t even agree on why it’s flawed.
Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal’s wager is not really relevant in this thread. However, especially in the context of being a ‘true believer’, it is interesting to me that statements are often made that something is ‘obvious’, when there are many difficult steps in the argument, or ‘horribly flawed’, when it’s actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal’s wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.
Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don’t think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)
I’ll write a top-level post about this today or tomorrow. (In the meantime, see Where Does Pascal’s Wager Fail? and Carl Shulman’s comments on The Pascal’s Wager Fallacy Fallacy.)
Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal’s wager is probably actually unclear, minor or controversial.
There aren’t many difficult steps in refuting Pascal’s wager, and I don’t think there’d be much disagreement on it here.
The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.
(Btw, I was one of those who voted you down.)
Again, is it the argument that is wrong, or Pascal’s application of it?
(Can you confirm whether you down-voted me because it’s off-topic and inflammatory, or because I’m wrong?)
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal’s application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
I have read the material, but I disagreed with it, and it’s often not clear—especially when the posts are old—how I can jump in and chime in that I don’t agree. Often it’s just the subtext I disagree with, so I wait for someone to make it more explicit (or at least more immediate) and then I bring it up.
Thanks for your explanation about the down-voting.
No (assuming you mean the expected utility of the action given the hypothesis), just that you have to accurately weight its probability.
But his argument wouldn’t somehow be improved by considering all the cases (not that it would be practical to even consider all the hypotheses with lengths up to that of the one implying high utility from faith in God!). Considering those cases would find hypotheses that assign the opposite utility to faith, and worse, some would be more probable.
To salvage the argument, one would have to not just consider more cases, but provide a lot more epistemic labor—that is, make arguments that aren’t part of PW to begin with.
All of your objections to PW seem to be about Pascal’s application of the argument (the probabilities he inputted, the number of cases he considered), in which case we can agree that his conclusion wouldn’t be correct.
When I read that Pascal’s Wager is flawed as an argument, I interpret this as ‘the argument does not have good form’. Did people just mean, all along, that they disagreed with the conclusion of the argument because they didn’t agree with the numbers he used?
I think what they mean is, “If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
PW-type arguments, then, refer to the class of arguments in which someone tries to justify a course of action through (following the action suggested by) an improbable hypothesis by claiming high enough expected utility. That class of arguments has the flaw that when you allow yourself that much complexity, you necessarily permit hypotheses that advise just as strongly against the action.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
That is still fine, because we know how to handle the hypotheses with negative utility. You just optimize over the net utilities of each belief weighted by their probabilities. The fact that there are positive and negative terms together doesn’t invalidate the whole argument. You just do the calculation, if you can, and see what you get.
If you have the right numbers, and a simple enough case to do the computation, would you find PW an acceptable argument?
I’m still having trouble understanding your objection.
When you decide to have faith based on PW, you’re using some epistemology that allows you to pick out the “faith causes infinite utility” hypothesis out of the universe-generating functionspace, and deem it to have some finite probability. The problem is that that epistemology—whatever it is—also allows you to pick out numerous other hypotheses, in which some assert the opposite utility from faith (and their existence is provable by inversion of the faith = utility hypothesis elements).
In order to show net positive utility from believing, you would have to find some way of counting all hypotheses this complex, and finding out which comes ahead. However, the canonical PW argument relies on such anti-faith hypotheses not existing. You would be treading new ground in finding some efficient way to count up all such hypotheses and find which action comes out ahead—keeping in mind, of course, that at this level of complexity, there is a HUGE number of hypotheses to consider.
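As a toy illustration of the cancellation (using a made-up complexity-penalty prior of roughly 2^-K for a hypothesis of K bits; none of these numbers are meant to be real):

```python
# Toy illustration: any epistemology loose enough to admit a hypothesis
# "faith yields utility +U" at complexity K also admits its mirror image
# "faith yields utility -U" at (roughly) the same complexity K.
K = 1000                      # assumed complexity of the faith-rewards-you hypothesis, in bits
prior = 2.0 ** -K             # complexity-penalized prior for either hypothesis
U = 1e12                      # assumed (finite) payoff claimed for faith

# Contributions of the mirrored pair to the expected utility of believing:
expected_gain = prior * U + prior * (-U)
print(expected_gain)          # 0.0: the pair cancels, so the wager needs more than this to go through
```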
So you would be making a new argument, only loosely related to canonical PW. If you think you can pull this off, then go ahead and write the article, though I think you’ll soon find it’s not as easy as you expect.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
As you know, I think the essence of Pascal’s wager is this:
I think there is enough to debate about in that statement alone.
But suppose that X = God exists. It seems to me that you are consistently writing that Pascal’s Wager fails because in this case the utility of X is impossible to compute due to the complexity of X. I don’t believe this makes the argument fail for two reasons:
Pascal’s Wager says, “If belief in X has positive utility, you should believe in X.” This argument doesn’t fail (in form) if the utility is negative or impossible to compute.
I disagree that the utility is impossible to compute, despite all your arguments about the complexity of X. My reason is straightforward: atheists do calculate (or at least estimate) the utility of believing in God. Usually, they come up with a value that is negative. So it’s not impossible to estimate the average utility of a complex belief.
That’s not quite valid— there is some finite program that unfolds Permutation City-style into a universe that allows for infinite computational power, and thus (by some utility functions) infinite utility as the consequence of some actions. It would be wrong for a scientist living in such a universe to reject that hypothesis.
The reason I believe Pascal’s wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.
Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?
This is an argument for why the argument doesn’t work for theism, it doesn’t mean the argument itself is flawed. If you would be willing to multiply the utility of each belief times the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.
If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.
Right, one flaw in the idealized form is that it’s not clear that you can simply choose the belief that maximizes utility. But in some cases a person can, and does.
I think that an incorrect calculation, because one person considered 2 cases instead of N cases, is very different from being flawed as an argument.
PeerInfinity was writing about applying Pascal’s wager to atheism—so he must have been referring to the general form of the argument, not a particular application. Matthew B wrote that “Pascal’s Wager is a flawed argument for believing in ANY belief system”. Well, what about a belief system in which there are exactly two beliefs to choose from and the relative probabilities are (.4, .6) and the relative utilities of having the beliefs if they are true are (1000, 100) ? I would say the conclusion of the idealized form of Pascal’s wager is that you should pick the belief that maximizes utility, even though it is lower probability.
I would distinguish between the general form and the idealized general form. One way to generalize Pascal’s wager for belief B, is to compare the expected utilities of believing B and believing one contradictory Belief D in the conditions that B is true and that D is true. This is wrong no matter what belief B you apply it to.
Why would having the beliefs have utility? Isn’t utility a function of actions, as a rule?
There’s no contradiction in thinking “A is unlikely” and yet acting as if A is true—otherwise no-one would wear seat belts.
The utility of having a belief is what is being considered in Pascal’s wager, and is quite different from the utility of the belief itself.
The utility of a belief itself wouldn’t sway you to choose one belief over another. Suppose again you have the two beliefs X and Y, and they each have a certain utility if they are true. If X is true, then you “get” that utility, independently of whether you believed it or not, by virtue of it being true. For example, if there is utility to God existing, then there is that benefit of him existing whether you believe in him or not.
In contrast, there is also utility for having a belief.
To complicate things, there is a component of the utility that is independent of whether the belief is true or not, and there is a component of the utility that depends on the belief being true. In the case of theism, there is a utility to being a theist (positive or negative, depending on who you ask) regardless of whether God exists, and there would also be an extra utility for believing in him if he does exist (possibly zero, if he doesn’t care whether you believe in him or not).
SilasBarta has pointed out a relevant argument regarding that case.
You mean the case of the argument applied to theism? I would be willing to forfeit the applicability of the argument for this case, since I’m just interested in discussing the validity of the general argument.
I don’t like discussing general cases when I don’t have some concrete examples. The only ones I can think of are boring cases of coercion involving unethical mindreaders.
Yes, I agree: the utility of having a belief only makes sense when for some reason you are rewarded for actually having the belief instead of acting as though you have the belief.
OK, since theism is unique in this aspect, in order to generalize away from the theistic, let’s use the utility for acting-as-though-you-believe instead of the utility for actually believing, because in most cases, these should be the same.
… but then, as soon as you do this, the argument becomes just about choosing actions based on average expected utility and there’s nothing controversial about it. So I guess PW might just suffer from lack of application: there are few cases where you are actually differentially rewarded for having a belief (instead of just acting as though you do), and these cases (generalizing from theism) involve hypotheses that are too complex to parametrize (Silas’ argument).
Back to the immediate object level: PeerInfinity wrote about applying Pascal’s Wager to atheism. However, atheism doesn’t make a utility distinction between having a belief and acting as though you do. Or does it? Having beliefs motivates actions and makes them easier to compute.
When PeerInfinity said he chose to believe atheism because it seemed to maximize utility, he might have been summarizing together that acting as though atheism was true was deemed utility maximal, and believing in atheism then followed as utility maximal.
I also think Pascal’s Wager is not horribly flawed in the ways it’s most commonly claimed to be, and am aggrieved that this interesting and important discussion is taking place under a downvoted-to-invisibility comment on an unrelated post. I think I’ll write a top-level post about it today or tomorrow, but right now, I’d like to humbly ask that the above comment be upvoted until not invisible.
Taboo “Pascal’s wager”, please.
Sure.
Here’s an argument:
Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief are Ux and Uy. Then, the average utility of having belief X is Px*Ux and the average utility of having belief Y is Py*Uy. You “should” choose the belief (or set of beliefs) that maximizes average utility, because having a belief is an action and you should choose actions that maximize utility.
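As a minimal sketch, using the (.4, .6) / (1000, 100) numbers from the earlier comment (and treating belief-adoption as just another action to optimize over, which is of course the contested step):

```python
# Pick the belief whose probability-weighted utility is highest, treating
# "adopting belief B" as an action. Numbers are the ones from the earlier
# (.4, .6) / (1000, 100) example, not anything canonical.
beliefs = {
    "X": {"probability": 0.4, "utility_if_true": 1000},
    "Y": {"probability": 0.6, "utility_if_true": 100},
}

def average_utility(b):
    return b["probability"] * b["utility_if_true"]

best = max(beliefs, key=lambda name: average_utility(beliefs[name]))
for name, b in beliefs.items():
    print(name, average_utility(b))   # prints 400.0 for X and 60.0 for Y
print("choose:", best)                # X, despite being the less probable belief
```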
What is the flaw in this argument?
For me, the flaw that you should identify is that you should choose beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative argument, rather than a logical flaw in the argument.
Normally, you should keep many competing beliefs with associated levels of belief in them. The mindset of choosing the action with estimated best expected utility doesn’t apply, as actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until moment of execution.
This is complicated in the case of religious beliefs where the deity will judge you by your beliefs and not just your actions.
It is also complicated in the case of religious beliefs where other human beings will judge you by your beliefs, which is one reason why abandoning religions is so hard. But that is off-topic, particularly as you can just lie.
While we’re being off topic, I’m of the opinion that if you are someone who accepts you should one-box then you should also accept Pascal’s wager. I think both are wrong but most people here seem to accept one-boxing is correct but not accept Pascal’s wager. I don’t care enough about either to work the argument out in detail though.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well” has already made a decision based on expectation of your decision. There are numerous real-world examples of this. Newcomb’s problem only differs in that it takes the limit of the “how well they know you” variable as it approaches “perfect”. There needn’t be an actual Omega, just a decision theory that is robust for all values of the variable up to and including perfect.
Which sounds a lot like Pascal’s wager to me, when your decision is whether to believe in god and god is the person who “knows you very well” and is deciding whether to let you into heaven based on whether you believe in him or not.
There are situations which I guess are what you would describe as ‘Newcomb-like’ where I would do the equivalent of one-boxing. If Omega shows up this evening though I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.
But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem—the latter is just as unphysical as Newcomb. The proper objection to unreasonable hypotheticals is to claim that they do not resemble the real-world situations one might compare them to in the relevant aspects.
I actually think that implausible hypotheticals are unhelpful and probably actively harmful which is why I usually don’t involve myself in discussions about Omega. I wish I’d stuck with that policy now.
Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they’re a lot of work for no obvious reward, but I don’t have a more complex theory.
Anyone have an example of the examination of an implausible hypothetical paying off?
I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view I automatically get suspicious. If the point of view is correct, why can’t it be illustrated with a plausible hypothetical or a real world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described and thus allow for dubious assumptions to be hidden in plain sight.
Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case and they often seem to be used in arguments that are wrong in subtle or hard to detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.
That’s interesting, and might apply to the trolley problem which implies that people can have much more knowledge of the alternatives than they are ever likely to have.
Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don’t have detailed knowledge, but I haven’t seen the trolley problem extended to the usual case of not knowing very many of the effects.
It might be worth crossing the trolley problem with Protected from Myself.
Taking a look at ethical intuitions with specifics: Sex, Drugs, and AIDS: the desire to only help when it will make a big difference and the desire to not help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn’t mean it makes sense to slack off on prevention as much as has happened.
Yes, the trolley problems are another example of harmful implausible hypotheticals in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ donor problem is I think illustrative of the pernicious influence of implausible hypotheticals on clear thought.
Well, the fact that they’re implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don’t we think clear thinking is its own reward?
I’ve found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don’t know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...
I’m all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!
I would be dead chuffed to talk about the wisdom of considering implausible hypotheticals instead, if that’s what you’d prefer to do. (:
Edit: I would be equally happy to drop the thread entirely, if that’s what you prefer.
Ok, let me try and nail down my true objection here. Is Pascal’s wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in god? Well, not really—it doesn’t add much in that case.
Similarly, if Omega showed up at my apartment this evening would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the set up for Newcomb’s problem) would I one-box? Well, probably yes but you’ve glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.
I guess I have a general problem with a certain kind of philosophical thought experiment that tries to sneak in a truly colossal amount of implausibility in its premises and ask you not to notice and then whenever you keep pointing to the implausibility telling you to ignore it and focus on the real question. Well I’m sorry, but the staggering implausibility over there in the corner is more significant than the question you want me to focus on in my opinion… (Forgive the casual use of ‘you’ here—I’m not intending to refer to you specifically).
I don’t understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we’re trying to analyze, like the Chinese Room, which fails to convey properly the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we’re not attending to certain issues, I guess. That hardly seems grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant. No one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega like entity and as a result confuses more than it illuminates. Re-reading some of Eliezer’s posts on it I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences that they would not actually suspend their disbelief for if encountered in real life. This might be an example of Robin Hanson’s near/far distinction.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
It certainly does gloss over that… I mean, it has to; you’d require a lot of evidence. But the reason it does so is because the question isn’t whether Omega could exist or how we can tell when Omega shows up… the details are buried because they aren’t relevant. How does Newcomb’s problem confuse more than illuminate? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem they weren’t previously aware of, but that’s the kind of confusion we want.
It’s a great video and I’m grateful you linked me to it but I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
The danger is that you can use a hypothetical to illustrate a paradox that isn’t really a paradox, because its preconditions are impossible. A famous example: Suppose you’re driving a car at the speed of light, and you turn on the headlights. What do you see?
This is a danger. Good point.
It confuses because it doesn’t really show a problem/paradox. That is not obvious because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box). They quite reasonably aren’t able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of difficulties we have mentally dealing with highly implausible scenarios.
Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when ‘story mode’ is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.
No. The choice is paradoxical because no matter how much evidence you have of Omega’s omniscience, the choice you make can’t change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can’t affect the amount of money in the boxes. No matter how much money is in the boxes, you get more by two-boxing. Most educated people are causal decision makers by default, so a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing were the obvious choice, people would feel the need to posit new decision theories as a result.
I disagree, and I think this is what Eliezer is hinting towards, now I’ve gone back and re-read Newcomb’s Problem and Regret of Rationality. If you really have had sufficient evidence to believe that Omega is either an omniscient mind reader or some kind of acausal agent, such that it makes sense to one-box, then it makes sense to one-box. It only looks like a paradox because you’re failing to imagine having that much evidence. Which incidentally is not really a problem—an inability to imagine highly implausible scenarios in detail is not generally an actual handicap in real-world decision making.
I’m still going to two-box if Omega appears tomorrow though because there are very many more likely explanations for the series of events depicted in the story than the one you are supposed to take as given.
Curiously, what is the average utility you would estimate for belief in God? Or do you feel that trying to estimate this forces suspended disbelief in implausible scenarios?
Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal’s wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don’t see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.
Some of the Norse gods were pretty badass though, they might be fun to believe in.
… if I may put the question differently: what average utility do you estimate for not believing in any God?
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility. The peculiarity of Pascal’s wager and religious belief in general is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own) belief in god is merely false belief and generally false beliefs are likely to cause bad decisions and thus lead to sub-optimal outcomes.
If the belief in god is completely free-floating and has no implications for actions then it may not have any direct negative effect on expected utility. Presumably given the finite computational capacity of the human brain holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal’s wager however.
I’m not sure that beliefs don’t generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There’s a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it’s generally a good idea.
Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief or whether or not you can accurately estimate the utility. If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something, without them actually believing it. There are exceptions, but for the most part, if someone bets on a belief, this is because they believe it.
I don’t in general agree with this. Outcomes have utility, actions have expected utility, beliefs are generally just what you use to try and determine the expected utility of actions. As a rule, true beliefs will allow you to make better estimates of the expected utility of actions.
This is true for ordinary beliefs: I believe it is raining so I expect the action of taking my umbrella to have higher utility than if I did not believe it was raining. It is possible to imagine certain kinds of beliefs that have utility in themselves but these are unusual kinds of beliefs and most beliefs are not of this type. If there is a god who will reward or punish you in the afterlife partly on the basis of whether you believed in him or not then ‘believing in god’ would result in an outcome with positive utility but deciding if you live in such a universe would be a different belief that you would need to come to from other kinds of evidence than Pascal’s wager.
It is possible to imagine other beliefs that could in theory have utility in themselves for humans. For example, it is possible that believing oneself a bit more attractive and more competent than is accurate might benefit one’s happiness more than enough to compensate for lost utility due to less accurate beliefs leading to actions with sub-optimal expected utility. If this is true, however, it is a quirk of human psychology and not a property of the belief in the way that Pascal’s wager works.
I don’t find it at all strange to think of someone acting as if they believe in god even though they don’t. This has been common throughout history.
That looks like a good heuristic you are using—it seems related to the idea of the intuition pump.
...wow, that was a short time-to-agreement. :D
Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.
I think you’re mistaken, therefore I would like to see your proof. It would be a shame if I missed an opportunity to be more correct. ;)
They both have an element of privileging the hypothesis. If I had some reason to think I lived in a universe with an Omega/God then I might agree I should one-box/believe in god but since I don’t have any reason to think I live in such a universe why am I wasting my time even considering this particular implausible scenario?
I see what you mean, but one of two problems breaks the symmetry, depending on which form of Pascal’s Wager you mean.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
Second, the form of Pascal’s Wager concerning worship, “Believing in God, who is known to exist, has positive utility”, has moral complexities which are absent from Newcomb’s dilemma. Objections in this case usually arise from the normative argument that you should not believe things which are false.
I disagree that it fails logically. The argument, written modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
If you don’t believe that believing in God has positive expected utility, then this is not a disagreement in the logic of Pascal’s Wager. Pascal’s Wager would equally say, “If believing in God has negative expected utility, then you should not do so”.
Okay, now I think I’m starting to see the miscommunication: PW does not simply say what you’ve quoted there. It’s typically associated with an argument about how the possibility of infinite utility from believing (and perhaps infinite disutility from not believing) outweighs the small probability of it being true, and the utility of other courses of action, on account of its infinite size.
You’re taking “Pascal’s Wager” to refer only to certain premises the argument uses, not the full argument itself.
It occurred to me that you might not agree that my distillation of PW contained all the salient features. (For example, there are no infinitesimals and no infinities written in). However, I think it must have been my more general argument that PeerInfinity was referring to, because he was applying it to atheism.
Good point, I edited my form of the argument to include ‘sets of beliefs’. If having a set of beliefs maximizes your utility, then having the set is what you “should” do, I think, in the spirit of the argument.
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one thing, everything is connected, one thing being true implies other things being true, other things being false. You won’t be seeing the world as you currently believe it to be, after accepting such a change, you will be seeing a strange magical version of it, a version you are certain doesn’t correspond to reality. Mutilating your mind like this has enormous destructive consequences on your ability to understand the real world, and hence on your ability to make the right choices, even if you forget about the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal’s wager.
(Belief in belief keeps the human believers out of most of the trouble, but that’s not what Pascal’s wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion.)
Thank you. My response appears in another thread.
Your story and perspective are very interesting. You don’t need to self-censor.
Thanks. Actually, the reason why I said “I guess I had better stop writing now” is because this comment was already getting too long.
Just a note—don’t take Jack’s advice to not self-censor too literally. There is much weirdness in you, and even the borders of this place would groan under its weight.
Not that there’s anything wrong with that.
The above (below? Depends on your settings, I guess) comment, which is now hidden, involves a poll, and would not (I predict) have otherwise become hidden.
It’s also hidden depending on your settings: you can change the threshold for hiding comments as well. I don’t hide any comments, because seeing a hidden comment makes me so curious I have to click it, and just draws more attention to it for me.
hehe, thanks, yeah, I am well aware that I have lots of weirdness, and that letting all of it out freely is usually not a good idea.
Though as a random experiment...
please downvote this comment if you think that randomly saying stuff like “I like penises” is inappropriate for LW.
ok, thought so.
thanks.
please upvote this comment if you want to balance out the effects of this experiment on my karma.
lol, it was sooo tempting to edit that comment and replace the word “penises” with “ice cream”. I guess that would be the reverse of one of the standard internet pranks.
But I didn’t do that, because the negative value from the confusion caused to anyone else reading this thread would probably have outweighed any positive value from the prank being funny.
though, um… please upvote this comment if you want me to go ahead and swap the words “penises” and “ice cream”, in both the previous comment and this comment...
or please downvote this comment if you think that’s a bad idea.
I didn’t upvote or downvote any of these. But I think the result would be the same if you had said “ice cream”: the point is that it’s a completely random comment that has nothing to do with the rationality discussion and distracts from the flow. I don’t think that there’s anything wrong with randomness or silliness but interrupting a rationality conversation with completely unrelated comments could get annoying.
ok, that makes sense.
thanks for the feedback
hugs :)
oh, and, um… please downvote this comment if hugs or other public displays of affection are considered inappropriate for LW.
or please upvote this comment if public displays of affection are considered appropriate, if they’re tasteful and not too distracting.
Despite the negative status of this poll, a quick “Thanks hugs :)” is perfectly appropriate for LW.
hehe, so… 16 downvotes… is that a new record? :)
(I’m guessing a >50% probability that it’s not...)
If I consider this as an achievement to be proud of, does that make me a troll?
Doing polls for this kind of thing, while somewhat interesting in a meta sense (I definitely like to see discussion about the social norms here), is rather off-topic. It would be less disruptive if you were to start by sticking to the already established norms (which can be learned by observing, and which you can ask me or Alicorn or Blueberry about via IM if you have questions), and occasionally break from the norms to test how other habits of yours are received.
hmm, good point.
heh, but now what I naturally want to say is:
please upvote this comment if you want to see more polls like this
or please downvote this comment if you want to see less polls like this
though of course you’re welcome to ignore this comment, and not vote at all.
ow. my karma is taking a hit. I should have expected that. And I should have set up another karma-balancing comment. I guess you can use this comment for karma balancing. That means if you voted another comment down, vote this one up, and vice versa.
So far I lost about 10 karma as a result of all these polls. Hopefully that will help reduce the emotional impact of future losses of karma, which would help me get over the paranoia about my comments causing more harm than good. And yes, I do plan to occasionally post comments that I suspect will be downvoted. But not too often, I’m not quite that reckless.
If you are going to make polls like that, the Open Thread is probably a better place to do it. There they won’t distract from the main topic of conversation.
And I’m glad that you’re not scared to post or get downvoted! :)
I agree. But at least that first poll got enough downvotes to block off all the others, for anyone who didn’t disable the feature that auto-hides comments with less than −3 karma.
I haven’t yet seen an answer to Pascal’s Wager on LW that wasn’t just wishful thinking. In order to validly answer the Wager, you would also have to answer Eliezer’s Lifespan Dilemma, and no one has done that.
I’m pretty sure Peer meant the original version of Pascal’s Wager, the argument for Christianity, which has the obvious answer, “What if the Muslims are right? or “What if God punishes us for believing?”
That’s not an answer, because the probabilities of those things are not equal.
“God punishes us for believing” has a much lower probability, because no one believes it, while many people believe in Christianity.
“Muslims are right” could easily be more probable, but then there is a new Wager for becoming Muslim.
The probabilities simply do not balance perfectly. That is basically impossible.
Why does the probability have anything to do with the number of people who believe it?
There’s then the problem that the expected value involves adding multiples of positive infinity (if you choose the right religion) to multiples of negative infinity (if you choose the wrong one), which gives you an undefined result.
The probability of any kind of God existing is extremely low, and it’s not clear we have any information on what kind of God would exist conditioned on some God existing.
There’s also the problem that if you know the probability that God exists is very small, you can’t believe, you can only believe in belief, which may not be enough for the wager.
The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.
That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another. And it is wishful thinking to say that it is just as good to choose the less probable way as the more probable way. For example, there are two doors. One has a 99% chance of giving negative infinite utility, and a 1% chance of positive infinite. The second door has a 1% chance of negative infinite utility, and a 99% chance of positive infinite utility. Defined or not, it is perfectly obvious that you should choose the second door.
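A quick sketch of the door comparison, with the infinities replaced by a large finite stand-in of equal magnitude (an assumption on my part; with literal infinities the sum is undefined, which is the whole complaint):

```python
# Two doors, with +/- infinity replaced by a large finite stand-in U of equal
# magnitude (an assumption; with literal infinities the sum is undefined).
U = 1e12

door1 = 0.01 * U + 0.99 * (-U)   # 1% positive, 99% negative
door2 = 0.99 * U + 0.01 * (-U)   # 99% positive, 1% negative

print(door1, door2)              # door 1 comes out negative, door 2 positive: door 2 wins under this assumption
```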
We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist. Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.
It may be true that some people couldn’t make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.
This can’t be right. The number of people who follow any one religion is affected by how people were raised, by cultural and historical trends, by birth rates, and by the geographic and social isolation of the people involved. None of these things have anything to do with truth. Currently Christianity has twice as many people as any other religion because of historical and political facts; you think this makes it more likely than Islam to be true?
Suppose that in 50 years, because of predicted demographic trends, there are twice as many Muslims as Christians. You then seem to be in the strange position of thinking (a) Christianity is more likely to be true now, but (b) because of changing demographics, you will be likely to think Islam is more likely to be true in 50 years.
How do people’s claims give you that information? Religions are human cultural inventions. At most one could be true, which means the others have to be made up anyway. If a God did exist, why is it more likely that one of them is true than that they were all made up and humanity never came close to guessing the nature of the God that did exist?
My intuition tells me that if a God of some sort does exist, the probabilities end up favoring a God that rewards looking at the evidence and believing only what you have reason to be true, but that may just be my bias showing.
Intuition about what religion is true is likely to reflect your upbringing and your culture more than the actual truth. Given that there’s currently no evidence of any kind of God or afterlife, I can’t see how there is any evidence that God X is more likely to exist than God Y.
It’s also worth noticing that Pascal’s Wager uses a spherical cow version of religion. Some religious traditions might require actual belief for infinite utility, others just belief in belief, others just certain behavior or words independent of belief.
I’ll answer this later. For now I’ll just point out that you aren’t addressing my position at all, but other things which I never said. For example, I said that if people believe something, this increases its probability. You respond by asking things like “Currently Christianity has twice as many people… you think this makes it more likely than Islam to be true?” I definitely did not say that the probability of a religion is proportional to the number of people who believe it, just that religions that some people believe are more likely than ones that no one believes.
Right; or if you don’t decide exactly, at least you have to do (believe or not believe) one or the other.
I would say that the model breaks down. Mathematics (or at least the particular mathematical model being used) is not capable of describing this situation, but that doesn’t make the situation itself meaningless. (That would be a version of the map/territory fallacy.)
Here I disagree with you. I would say that you have not given enough information. It is as if you gave the same problem statement but with the word ‘infinite’ removed (so that we only know whether the utilities are positive or negative). It may seem as if you have given all of the information: the probabilities and the utilities. But the mathematics which we use to calculate everything else out of those values breaks down, so in fact you have not given all of the information.
One important missing piece of information is the ratio of the first positive utility to the second. That and two other independent ratios would be enough information, if they’re all finite. (If not, then we might need more information.)
And don’t tell me that these ratios are undefined; the mathematical model that calculates the ratios from the information given breaks down, that’s all. In fact, there is an alternative mathematical model of decision which deals only in ratios between utilities; if you’d followed that model from the beginning, then you would never have tried to state the actual utilities themselves at all. (For mathematicians: instead of trying to plot these 4 utilities in a 4-dimensional affine space, plot them in a 3-dimensional projective space.)
Right; the proper conclusion of the argument is not to believe, but to try to believe. And if you buy the argument, then you should try very hard!
I agree with everything you’ve said here, including that in the two door situation the decision could go the other way if you had more information about the ratio of the utilities. Still, it seems to me that what I said is right in this way: if you are given no other information except as stated, you should choose the second door, because your best estimate of the ratios in question will be 1-1. But if you have some other evidence regarding the ratios, or if they are otherwise specified in the problem, your argument is correct.
Can you please remind me what the question is, that you’re looking for an answer to?
And can you please provide a link to an explanation of what Eliezer’s Lifespan Dilemma is?
http://lesswrong.com/lw/17h/the_lifespan_dilemma/
If you read the article and the comments, you will see that no one really gave an answer.
As far as I can see, it absolutely requires either a bounded utility function (which Eliezer would consider scope insensitivity), or it requires accepting an indefinitely small probability of something extremely good (e.g. Pascal’s Wager).
If you believe that there is something with arbitrarily high utility, then by definition, you will accept an indefinitely small probability of it.
Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there’s nothing with a utility of 1000 or more. The problem comes from assuming that there is such a thing when there isn’t. I don’t see how this is scope insensitivity; it’s just how my preferences are.
Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That’s really what they’d prefer. Most of us just don’t have a utility function like that.
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
I agree with the basic argument, it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
If the world is doomed immediately unless I die for it, I have a 100% chance of dying immediately, so I might as well die to save the world. But if it’s a choice between living another 50 years and then the world ending, or dying right now and saving the world, and no one would know, I wouldn’t die to save the world. I’m too selfish for that.
Then he should keep taking Omega’s offers, and any discomfort he has with that is faulty intuition, like the discomfort from choosing TORTURE over SPECKS.
I would die right now to prevent the world from ending 50 years from now. It’s actually even hard for me to imagine that you’re actually as selfish as you say. If the situation actually came up you might find out differently. But I guess it’s possible.
You might be right that Eliezer should simply accept the Lifespan dilemma as the necessary consequence of his utility function (at least as he defines it.)
Really? Why? I can’t imagine myself dying to save the world; it’s completely implausible to me and I have a hard time understanding what it would feel like to be willing to do so. But people often die for much less.
Are you married? If so, would you die to save your wife’s life?
Or if you’re not married, what about your mother?
Do you find it hard to imagine those things too?
It’s simple. The ‘selfish’ terminology is just obscuring matters. Just keep your feelings about one thing (your life) and substitute it with something else (someones life).
Unknowns’ utility function is of a type that assigns infinitely high utility to saving the world. Not saving the world is simply not an option. That’s what Unknowns wants.
Edit: Forget what I said about Unknowns previously.
Blueberry was the one who introduced the “selfish” terminology. He said, “I wouldn’t die to save the world. I’m too selfish for that.”
I’m really sorry. I confused you with someone else I talked to yesterday. My mistake, I edited my comment and will keep more care in future.
Thank you.
The standard answer is “But what if the Muslims are right?” You can’t be both a Christian and a Muslim, and you lose by guessing wrong. We have no more reason to believe we’ll be rewarded for believing in God X than we have to believe we’ll be punished for believing in God X, as we would be if God Y were the correct one.
All this does is show that the dilemma must have a flaw somewhere, but it doesn’t explicitly show that flaw. The same problem occurs with finding the flaws in proposed perpetual motion machines: you know there must be a flaw somewhere, but it’s often tricky to find it.
I think the flaw in Pascal’s wager is allowing “Heaven” to have infinite utility. Unbounded utilities, fine; infinite utilities, no.
See The Pascal’s Wager Fallacy Fallacy.
Betting on infinity.
That’s a great video.
Eliezer in that article:
“The original problem with Pascal’s Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal’s original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God). ”
This is just wishful thinking, as I said in another reply. The probabilities do not balance.
What about “living forever”? According to Eliezer, this has infinite utility. I agree that if you assign it a finite utility, then the lifespan dilemma fails (at some point), and similarly, if you assign “heaven” a finite utility, then Pascal’s Wager will fail, if you make the utility of heaven low enough.
[comment deleted]
Can you write a post about satanism? I’d love to know whether there are any actual satanists, and what they believe/do.
I used to know one, and have done a bit of reading about it. It struck me as a reversed-stupidity version of Christianity, though there were a few interesting memes in the literature.
Depending upon the type of Satanist, yes, they are often just people looking for a high “boo factor” (a term made up by many of the early followers of a musical genre called “Deathrock”; its more public name is now Goth, although that is like comparing a chainsaw to a kitchen paring knife: the “Goths” are the kitchen knife).
Many Satanists, especially those who hadn’t really read much of the published Satanic literature, would just make something up themselves, and it was almost always based in Christian motifs and archetypes. The two institutions that have publicly claimed the title of “Satanist” (the Church of Satan and the Temple of Set) both reject any and all Christian theology, motifs, archetypes, symbolism, and characters as disingenuous and twisted versions of older, healthier god archetypes. (If you read Jung and Joseph Campbell, it is not uncommon for a rising religious paradigm to hijack an older competing paradigm as its bad guys.)
As Phil has suggested, maybe a front page post will come in handy. It should be recognized that some Satanists happen to be very rational people. They are just using the symbolism to manipulate their environment (although most of the more mature ones have found more mature symbols with which to manipulate the environment and their peers and subordinates).
The types to which I was referring in my post were the Christian Satanists (people who are worshiping the Christian version of Satan), which is just as bad as worshiping the Christian God. Both the Christian God and the Christian Satan are required for that mythology to be complete.
Wow! We make worshipping the devil sound bad around here by comparing him to God! Excuse me if I take a hint of pleasure at the irony. ;)
Well, they both (according to Christian Myth) are truly bad characters.
It is unfortunate for God that Satan (Lucifer) had such a reasonable request: “Gee, Jehovah, it would certainly be nice if you let us try out that chair every once in a while.” Basically, Lucifer’s crime was one that is only a crime in a state where the king is seen as having divine authority to rule, and everyone else is seen as beneath such things (thus reflecting the divine order).
It was this act upon which modern Satanists seized to create a new mythology for Satanism, in which reason rebels against an order that is corrupt and tyrannical.
To be fair this stuff isn’t Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It’s just religious fiction.
...
Unless someone has declared John Milton a prophet and possessor of divine revelation. Which would be hilarious.
It isn’t stuff that made it into the modern canon, but in the early Christian church, myths of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate them into various Christian sects.
Isn’t it ALL just religious fiction?
The key word in Jack’s sentence was “just”. The concept of ‘canonical’ is important to religious believers and Star Trek fans alike.
The Christian myth includes a quite specific definition of bad, so according to the Christian myth only one of them is bad. Do you mean that, according to you, the characters as described in the Christian myth were both truly bad?
That description loses something when the ruler is, in fact, God. One of the bad things about claiming that the king is king because God says so is that it is not the case that any god said any such thing. When the ruler is God then yes, God does say so. The objection that remains is “Who gives a @$@# what God says?” I agree with what (I think) you are saying about the implications of claims of authority, but I don’t like the loaded language. It confuses the issue, and, well, I would say that technically (that counterfactual) God does have the divine authority to rule. It’s just that divine authority doesn’t count for squat in my book.
There are Christian Satanists? Correct me if I’m wrong, but I thought Satanism was a religion founded around Rand-like rational selfishness, one that explicitly denies any supernatural entities.
Yes, they are “Christian” in the sense that all of the mythology and practices for their worship of Satan are derived from Christianity, and they still believe in a Christian God.
It is just that these people believe that they are defying and opposing the Christian God (fighting for the other team). They still believe in this God; they just no longer have it as the object of their worship and devotion.
This is also the more traditional form of Satanist in our society, and one which the more modern Satanist tends to oppose. The modern Satanist is a self-worshiping atheist and, as has been pointed out, tends to place everything in the context of self-interest. It is a highly utilitarian philosophy, but it is often marred in actual practice by ignorant fools who don’t seem to understand the difference between just acting like a selfish dick and acting out of self-interest (doing things which improve one’s condition in life, not things which worsen it).
There’s an Ayn Rand quote I don’t have handy to the effect that if the virtues needed for life are considered evil, people are apt to embrace actual evils in response.
Nope, worshipping the devil is right up there as far as meanings for ‘Satanism’ go.
You mean, like a main page post? I’d love to.
You would be surprised at how rational the real Satanists (and their various offshoots and schisms) are, as the non-Christian-based Satanist is an atheist.
In fact, the very first schism of the Church of Satan gave birth to the Temple of Set (founded by the then head of the Army’s Psychological Warfare Division), which was described as a “hyper-rational belief system” (although in reality it still had some rather unfortunately insane beliefs among its constituents). The founder was very rational, though. He even had quite a bit of science behind his position… It’s just that his job caused him to be a rather creepy and scary guy.
Has today’s Satanism retained any connections to Aleister Crowley?
At least they’re maintaining lightness.